[Ffmpeg-devel-irc] ffmpeg.log.20170605

burek burek021 at gmail.com
Tue Jun 6 03:05:01 EEST 2017


[00:19:40 CEST] <c_14> windows?
[01:17:06 CEST] <kms_> can i grab video from my canon digital camera?
[01:18:27 CEST] <kms_> by using it as webcam
[02:34:28 CEST] <hxla> Hello guys, I have multiple gopro video files that have multiple data streams, and I want to merge the video, audio and one specific data stream, but I can't: I get an error that the codec is not supported (https://pastebin.com/ZxSinT0P). Is there a way to do this?
[02:40:23 CEST] <hendry> RE "hardware decoding generally doesn't work for anything other than 4:2:0" ... I assume you're referring to 'pix_fmt yuv420p'. That, iiuc, https://en.wikipedia.org/wiki/YUV is like a file format. I wonder what yuvj420p means
[02:40:59 CEST] <hendry> initially when i saw 420p i had it confused between profiles and resolution. kinda awful naming, no?
[02:41:12 CEST] <furq> yuvj420p is full range (j for jpeg)
[02:41:18 CEST] <furq> as opposed to yuv420p which is limited range
[02:41:26 CEST] <furq> generally just don't use j ever
[02:41:57 CEST] <furq> https://en.wikipedia.org/wiki/Chroma_subsampling
[02:42:02 CEST] <furq> also that's probably the wikipedia article you want
[03:17:38 CEST] <Obliterous> I just compiled ffmpeg 3.3, and it's now giving me an error: Unknown input format: 'alsa'
[03:18:14 CEST] <Obliterous> trying to capture video from an attached USB webcam
[03:19:49 CEST] <Obliterous> Did I miss something in my build, or is this new behavior?
[03:23:33 CEST] <Obliterous> https://pastebin.com/0LiAqqVx
[04:39:00 CEST] <hendry> i still don't quite understand variable bit rates. is there a tool where i can view how the bitrate was adapted/changed from scene to scene in a video file?
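There's no built-in viewer, but ffprobe can dump per-packet timestamps and sizes, which is enough to plot bitrate over time; a minimal sketch (input.mp4 is a placeholder):

```sh
# one line per video packet: timestamp,size-in-bytes
ffprobe -v error -select_streams v:0 \
        -show_entries packet=pts_time,size \
        -of csv=p=0 input.mp4
```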
[04:39:13 CEST] <hendry> furq: thanks for filling me in there :-)
[04:40:51 CEST] <hendry> furq: http://i.imgur.com/JebYP15.png left is yuv444p and right is yuv420p, correct?
[04:42:07 CEST] <furq> left is rgb
[04:42:15 CEST] <furq> 444 should look the same though
[04:42:20 CEST] <hendry> furq: wonder if you can tell how i captured https://youtu.be/MmNwwQngsv8?t=34
[04:42:30 CEST] <furq> youtube is always 420
[04:45:09 CEST] <furq> 1px wide #aa0000 text is a pathological worst case for subsampling
[04:45:35 CEST] <furq> if it's like #ffff00 on a black background or something then there's enough contrast that the luma resolution makes up for it
[04:46:24 CEST] <furq> you can see the same artifacts around the edges of much larger coloured text though
[04:46:38 CEST] <furq> that's why anyone who's any good at youtube outlines their text or takes care that it's high contrast
[05:06:05 CEST] <zumba_addict> just heard about middle-out compression from my brother. Not sure what he's talking about. LOL. Is x265 compression better, or can they not be compared to each other?
[05:06:35 CEST] <furq> compared to what
[05:07:17 CEST] <zumba_addict> middle out algorithm vs the algorithm used in x265
[05:08:12 CEST] <furq> well one of them exists and the other one doesn't
[05:09:02 CEST] <zumba_addict> got it
[05:09:29 CEST] <furq> if you mean that dropbox thing that a bunch of clickbait sites claimed was "middle-out compression" a while back, that's specifically a jpeg compressor
[05:09:35 CEST] <furq> it's not a generally applicable thing afaik
[05:09:46 CEST] <zumba_addict> yeah, i saw it - https://github.com/dropbox/lepton
[05:10:00 CEST] <zumba_addict> it's a github link which is good
[05:10:18 CEST] <zumba_addict> what are your thoughts on x265? Is it widely acceptable now?
[05:10:22 CEST] <furq> not really
[05:10:39 CEST] <furq> i personally plan to skip it and wait for av1, but then i'm not a CIO at a major video company so it doesn't matter
[05:10:52 CEST] <furq> but i suspect google are planning to do the same thing
[05:11:23 CEST] <zumba_addict> k. I've never heard about av1. You have links?
[05:11:34 CEST] <furq> https://en.wikipedia.org/wiki/AOMedia_Video_1
[05:11:42 CEST] <zumba_addict> must be successor to avc?
[05:11:48 CEST] <furq> it's more or less vp10
[05:11:49 CEST] <zumba_addict> i mean, avhdc something
[05:11:56 CEST] <zumba_addict> ok
[05:11:58 CEST] <furq> nothing to do with mpeg-la
[05:12:10 CEST] <furq> the big issue with h265 is the ridiculous licensing
[05:12:18 CEST] <furq> you need like three separate licenses to use it commercially
[05:12:27 CEST] <furq> which is why very few people are bothering
[05:12:29 CEST] <zumba_addict> ah
[05:12:42 CEST] <furq> that and there's nothing worth broadcasting in 4k yet
[05:12:47 CEST] <zumba_addict> i bought a small device called odroid c2 and it can play 4k h265
[05:12:56 CEST] <zumba_addict> yeah because of the huge data
[05:13:02 CEST] <zumba_addict> thanks for the link
[05:13:06 CEST] <furq> afaik only netflix are doing 4k h265
[05:13:24 CEST] <zumba_addict> most likely very highly compressed
[05:13:49 CEST] <zumba_addict> can av1 be used in 4k?
[05:13:56 CEST] <furq> i should hope so
[05:14:10 CEST] <zumba_addict> ok
[05:15:52 CEST] <furq> h264 is still good enough for 1080p, and av1 should be A Thing within a year or two
[05:17:11 CEST] <zumba_addict> gotcha
[05:32:25 CEST] <johnjay> >tfw that feeling when a zumba addict has a better ARM device than you do
[09:01:25 CEST] <theodrim> Hello, is it possible to convert ass to srt without formatting with ffmpeg (i.e. drop all colors/sizes)? Or am I better off with some sed/awk?
[09:17:58 CEST] <Baumfaust> hi
[09:18:14 CEST] <dongs> sed/awk is never an answer regardless of what the question is
[09:18:30 CEST] <Baumfaust> how can i set the end of a video?
[09:18:43 CEST] <Baumfaust> ffmpeg -ss 00:19:30 -i dragon.mp4 -to 00:28:00 -vf fps=1/60 img%03d.png
[09:18:47 CEST] <johnjay> awk is the answer. Now what's the question?
[09:18:47 CEST] <Baumfaust> -to does not work
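The likely culprit: with -ss placed before -i, ffmpeg seeks first and the output timestamps restart at zero, so an absolute -to 00:28:00 no longer means what it says. Specifying a duration relative to the seek point works; a sketch of the corrected command:

```sh
# 00:28:00 - 00:19:30 = 8m30s of output after the seek point
ffmpeg -ss 00:19:30 -i dragon.mp4 -t 00:08:30 -vf fps=1/60 img%03d.png
```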
[09:20:31 CEST] <theodrim> johnjay, strip formatting from ass to srt (i.e. drop all <font> tags for example).
[09:20:37 CEST] <furq> theodrim: does ffmpeg -i foo.ass bar.srt not work
[09:20:50 CEST] <theodrim> furq, sadly no, it left the formatting intact
[09:20:58 CEST] <furq> huh
[09:21:01 CEST] <theodrim> I assume it's doing it by design
[09:21:35 CEST] <furq> oh i forgot srt actually has colours
[09:21:48 CEST] <johnjay> theodrim: i'll let you know once i master sed and awk
[09:21:57 CEST] <furq> and by "actually has" i mean it doesn't but some bastards have hacked it in and people have accepted that this is good
[09:22:19 CEST] <furq> maybe go via a format that doesn't have formatting like .vtt
[09:22:33 CEST] <theodrim> Will try.
[09:23:36 CEST] <theodrim> Yeah, it'll do, in simple way, thanks furq :)
[09:24:26 CEST] <furq> you could probably do it with sed as long as none of your tags contain a >
[09:25:41 CEST] <theodrim> It can be escaped by \>
[09:26:01 CEST] <theodrim> But that's not the point, the easiest way is to just use vtt as the format.
[09:26:24 CEST] <furq> mov_text might be better because that actually doesn't support formatting
[09:26:33 CEST] <furq> whereas vtt does, i just don't think ffmpeg knows about it yet
[09:38:24 CEST] <theodrim> -scodec text will also produce a simple result.
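Putting the thread's suggestions together; a sketch (how clean the -scodec text output is may depend on the build, as discussed above):

```sh
# drop the ASS styling by encoding the subtitles as plain text
ffmpeg -i input.ass -scodec text output.srt

# alternative discussed above: round-trip through WebVTT
ffmpeg -i input.ass tmp.vtt
ffmpeg -i tmp.vtt output.srt
```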
[10:16:35 CEST] <Diego__> Hi there. I'm recording webRTC video with a builtin server app that creates 2 files: audio.mjr and video.mjr for each user in the videoroom. That plugin allows me to pre-process those files into .opus and .webm files, but I want to know if ffmpeg can manipulate those .mjr files too. Also, I need to know if I can create a process to create a single output video in realtime. I mean, while the rtc server is recording and adding data to .
[10:17:04 CEST] <Diego__> the information from those files and add them into a single one, even if the server is generating more files because a new user has joined the room?
[10:35:43 CEST] <Diego__> To clarify my question a bit: (e.g. 2 users in a room: a user and an administrator) Room administrator calls for a recording request via API to the server, the server sends to every client connected to that room to send a recording request to the webRTC server app. The webRTC server starts recording 2 .mjr files for each user (4 in this case. 2 for each user). While the files are being recorded, what I want is that a ffmpeg process st
[10:36:34 CEST] <robswain[m]> the mjr files are made by janus, right?
[10:36:36 CEST] <Diego__> file. If a user joins the room and the room is in recording mode, that user will send a recording request to the webRTC server app and I want ffmpeg to add that new user
[10:36:40 CEST] <Diego__> Yes
[10:36:51 CEST] <robswain[m]> it's a janus-specific format
[10:37:12 CEST] <Diego__> So there is nothing I can do with ffmpeg? :(
[10:37:43 CEST] <robswain[m]> unless you know how to code, you have to process the mjr files with the janus post processing tool and then you can do stuff with ffmpeg on those files
[10:38:33 CEST] <robswain[m]> so if you want to do something in real-time, you would need to edit the janus code, or make a feature request and wait
[10:39:18 CEST] <robswain[m]> also, i know for a fact that the janus post-processing code is not meant for real-time use
[10:39:41 CEST] <robswain[m]> it scans the entire file first and then re-orders the data in it
[10:40:05 CEST] <Diego__> Yep, that's the problem
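For the offline path, the usual flow looks like this; a sketch where janus-pp-rec is Janus's post-processing tool and the filenames are placeholders:

```sh
# convert the Janus .mjr dumps into standard containers
janus-pp-rec video.mjr video.webm
janus-pp-rec audio.mjr audio.opus

# then mux audio and video together without re-encoding
ffmpeg -i video.webm -i audio.opus -c copy merged.webm
```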
[10:40:08 CEST] <Diego__> The question is because I want to be able to start recording, then people join the room, or they start recording if they are already in the room. The problem here is that if I allow that and I want to do a mosaic output video, I need to know when people joined the room and start adding videos at different points in the timeline. Otherwise, I should only allow recording by the room administrator, at their own risk
[10:40:36 CEST] <Diego__> I can't see another way to do it :/
[10:41:22 CEST] <robswain[m]> these questions should go to janus really, but i think the time at which recording started is written either into the mjr file or the filename
[10:41:42 CEST] <robswain[m]> that can be used for some loose synchronisation
[10:41:52 CEST] <robswain[m]> janus' recording is pretty basic
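If the recording start times can be recovered, a loose offline sync and mosaic could look like this; a rough sketch, not real-time, where the 12.5-second offset and filenames are made up for illustration:

```sh
# shift the late joiner's timestamps, then stack the two videos side by side
ffmpeg -i user1.webm -itsoffset 12.5 -i user2.webm \
       -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
       -map "[v]" mosaic.webm
```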
[10:42:33 CEST] <Diego__> I also tried a JS recording for RTC; the problem is that it doesn't understand the janus streams received in JS
[10:42:35 CEST] <robswain[m]> it basically just writes the rtp packets to disk, with one file per stream and some custom format with JSON in it to easily add some metadata to the file
[10:43:22 CEST] <robswain[m]> you mean having one client use the MediaRecorder javascript API?
[10:43:30 CEST] <robswain[m]> in the browser, and then send the file somewhere else?
[10:43:31 CEST] <Diego__> Yep
[10:43:46 CEST] <Diego__> I don't like it because of the user's CPU usage
[10:44:05 CEST] <robswain[m]> taking a step back, why do you need to ship the recording somewhere else in real-time?
[10:45:33 CEST] <Diego__> the output files are stored in the server and I want them in real-time to do a mosaic and avoid having to deal with timelines, because users join the room late
[10:47:41 CEST] <Diego__> I saw a mediarecorder api plugin used to do it. Add streams in real time and build your mosaic, but it has problems with janus streams, and the room administrator's CPU usage shoots through the roof
[10:47:59 CEST] <robswain[m]> without modification, you can't do that with janus
[10:48:18 CEST] <Diego__> Then I have to deal with timelines u.u
[10:48:32 CEST] <robswain[m]> kurento can do some video mixing
[10:49:37 CEST] <robswain[m]> or use some other MCU that mixes video and have janus as a receive-only client that records the mixed video
[10:49:45 CEST] Action: robswain[m] needs to work now
[10:49:46 CEST] <robswain[m]> good luck
[10:49:56 CEST] <Diego__> thanks for your help
[10:50:09 CEST] <Diego__> have a nice day :)
[11:11:45 CEST] <guther> Hi, i have an a/v-sync problem - here's my output: https://nopaste.me/view/c501910d
[11:13:25 CEST] <guther> The first 17 lines are what ffplay says when i play the infile
[11:13:52 CEST] <guther> At the bottom you can see my ffmpeg cmd line
[11:44:46 CEST] <dorvan> hello
[11:45:33 CEST] <dorvan> hi all..
[12:01:40 CEST] <hendry> furq: so the text on my tutorial video looked good to you? RE https://youtu.be/MmNwwQngsv8?t=34
[12:46:40 CEST] <Ethereco> Hello, with my self-compiled minimal audio ffmpeg I get an error: Decoder (codec mp3) not found for input stream #0:0 - full output of ffmpeg: https://pastebin.com/Zpfd4xF2
[12:52:38 CEST] <BtbN> seems like you did not build ffmpeg with an mp3 decoder
[12:54:08 CEST] <Ethereco> hmm, --enable-decoder='libfdk_aac,libopus,libvorbis,mp3,flac,pcm_s16le' - mp3 is included
[12:54:34 CEST] <Ethereco> mp3 encoding was no problem, that works fine
[13:22:13 CEST] <Ethereco> ffprobe from my build: https://pastebin.com/7bAtMGsm - ffprobe from older fullbuild: https://pastebin.com/PqaTQhtc
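One way to see which decoders actually made it into a binary; the grep pattern is just an example:

```sh
# list the decoders compiled into this ffmpeg build
ffmpeg -hide_banner -decoders | grep -i mp3
```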
[16:32:51 CEST] <CptnOblivious> I have another gif related question for you guys. I was using a line I found in an online tutorial for gifs. I found another tutorial with slightly different lines which produces a gif about half the size of the first. I don't know enough about ffmpeg; can anyone tell me the difference between the two lines? Different compression? https://pastebin.com/RQfF1V0W
[17:01:09 CEST] <iive> CptnOblivious: this is an interesting question, because the reason is not obvious
[17:02:14 CEST] <iive> try removing the :flags=lanczos suboption. it is a method for approximation of pixels when scaling.
[17:02:27 CEST] <iive> it is supposed to be higher quality.
[17:14:57 CEST] <CptnOblivious> thanks iive I'll try removing that part
[17:15:50 CEST] <iive> if this is not it, then looking at the text output might give a hint what is wrong.
[17:30:30 CEST] <CptnOblivious> The output came out around the same size after removing that option
[17:53:08 CEST] <CptnOblivious> Yeah iive there was only a 300kb difference from taking out that option. The difference between the original two lines is 1mb. I usually wouldn't care, but the forum I'm posting these to has a size limit, so the extra savings help
[18:14:35 CEST] <iive> CptnOblivious: try with scale=400:225:sws_dither=none
[18:31:25 CEST] <Filystyn> hello
[18:31:39 CEST] <Filystyn> Can you give me advice how to start with audio file ?
[18:31:43 CEST] <Filystyn> open it etc
[18:31:58 CEST] <Filystyn> most tutorials are about video files
[18:34:16 CEST] <c_14> I assume you mean the API?
[18:34:26 CEST] <Filystyn> yes
[18:34:28 CEST] <Filystyn> C api
[18:34:47 CEST] <c_14> https://ffmpeg.org/doxygen/trunk/decode_audio_8c-example.html
[18:35:12 CEST] <c_14> there's a bunch more here https://ffmpeg.org/doxygen/trunk/examples.html
[18:35:58 CEST] <Filystyn> thank you
[18:36:09 CEST] <Filystyn> I'll analyse it and ask if I encounter a problem
[18:53:12 CEST] <CptnOblivious> Thanks iive that seems to cut the size in half from 7.7mb to 3.5mb. I'm logged in remotely atm so I can't tell the difference in quality.
[18:55:13 CEST] <iive> if you have optimized colors, the quality should be the same.
[18:55:33 CEST] <iive> there is an ffmpeg guide on how to optimize the colors for your video.
[18:58:45 CEST] <CptnOblivious> I see. That's where using the palette line works, right?
[18:59:16 CEST] <iive> i guess
[19:00:53 CEST] <CptnOblivious> ok thanks, I appreciate it. I'll keep messing around with it
[19:01:39 CEST] <Filystyn> does this: codec = avcodec_find_decoder(AV_CODEC_ID_MP2); set codec to the specified format?
[19:02:13 CEST] <iive> http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
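The two-pass palette recipe from that guide, adapted to the 400-pixel width used above; a sketch where the fps and scale values are only examples:

```sh
# pass 1: compute an optimized 256-colour palette for this clip
ffmpeg -i input.mp4 -vf "fps=15,scale=400:-1:flags=lanczos,palettegen" palette.png

# pass 2: encode the gif using that palette
ffmpeg -i input.mp4 -i palette.png \
       -filter_complex "fps=15,scale=400:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif
```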
[19:02:59 CEST] <Filystyn> if yes, how can i check what codec a file uses?
[19:05:17 CEST] <iive> Filystyn: ffprobe should be the tool for you :D
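For example (input.mp3 is a placeholder):

```sh
# print the codec of every stream in the file
ffprobe -v error -show_entries stream=index,codec_type,codec_name \
        -of default=noprint_wrappers=1 input.mp3
```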
[19:06:29 CEST] <Filystyn> thank you
[19:11:13 CEST] <Filystyn> is there only the pure source as reference?
[19:17:08 CEST] <Filystyn> well ?:)
[19:17:19 CEST] <Filystyn> come on guys, give me a few hints
[19:27:52 CEST] Action: Obliterous yawns and wakes up.
[20:54:42 CEST] <the_k> can ffmpeg keep an eye on free disk space?
[20:59:10 CEST] <ChocolateArmpits> I would guess not
[21:14:55 CEST] <Filystyn> use system functions
[21:15:15 CEST] <Filystyn> quite easy to write one that keeps an eye on it on unix in a few lines
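A minimal watchdog along those lines; a sketch for GNU/Linux where the input, the /recordings path and the 1 GiB threshold are arbitrary:

```sh
#!/bin/sh
# record in the background, then poll free space on the output filesystem
ffmpeg -i input -c copy /recordings/out.mkv &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    avail=$(df --output=avail -k /recordings | tail -n 1)
    # below ~1 GiB free: SIGINT lets ffmpeg finalize the output file
    [ "$avail" -lt 1048576 ] && kill -INT "$pid"
    sleep 10
done
```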
[22:50:15 CEST] <pgorley> are there filters that can be hardware accelerated?
[23:03:59 CEST] <BtbN> define hardware accelerated
[23:04:04 CEST] <BtbN> a very few filters can use opencl
[23:09:01 CEST] <pgorley> hardware accelerated as in on the gpu
[23:11:45 CEST] <pgorley> BtbN: basically, i'm decoding video using vdpau, and i'd like to apply some filters to it before bringing it back to main memory
[23:11:51 CEST] <pgorley> is this possible at the moment?
[23:11:58 CEST] <BtbN> no
[23:12:15 CEST] <pgorley> oh, ok, thanks
[23:13:06 CEST] <BtbN> And it's also not possible to add: there are no processing functions that operate on vdpau frames
[23:13:11 CEST] <ChocolateArmpits> maybe via Avisynth ?
[23:13:16 CEST] <BtbN> no
[23:13:24 CEST] <ChocolateArmpits> Vapoursynth ... ?
[23:13:41 CEST] <BtbN> With what kinda magic should those suddenly operate on frames on the GPU?
[23:13:41 CEST] <JEEB> both of those generally work with frames in RAM
[23:13:43 CEST] <JEEB> instead of VRAM
[23:14:04 CEST] <JEEB> although you could in theory make filters run on the GPU. but that usually means copying the frames back to VRAM from RAM
[23:14:20 CEST] <BtbN> if you use cuvid for decoding, you get CUDA frames, on which you can, in theory, operate with CUDA on the GPU
[23:14:25 CEST] <JEEB> in theory you could do in-VRAM stuff in vs I guess?
[23:14:28 CEST] <BtbN> But sombody has to write all the filters first
[23:14:32 CEST] <JEEB> but yes :)
[23:14:44 CEST] <JEEB> no filters effectively means not there (yet)
[23:14:50 CEST] <BtbN> There is one
[23:14:54 CEST] <BtbN> vf_scale_cuda
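For the CUDA path described above, the full-GPU pipeline of that era looks roughly like this; a sketch assuming an NVIDIA GPU and a build with cuvid and nvenc enabled:

```sh
# decode with cuvid, scale on the GPU, re-encode with nvenc;
# frames stay in VRAM for the whole pipeline
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 \
       -vf scale_cuda=1280:720 -c:v h264_nvenc output.mp4
```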
[23:15:45 CEST] <pgorley> good to know, thanks
[00:00:00 CEST] --- Tue Jun  6 2017

