[Ffmpeg-devel-irc] ffmpeg.log.20180621

burek burek021 at gmail.com
Fri Jun 22 03:05:02 EEST 2018


[00:15:55 CEST] <Elronnd\srn> does anyone know how I can extract frames of a video with ffmpeg's c api?  Googling only shows how to do it from the cli
[00:21:39 CEST] <JEEB> 1) open lavf context for reading the coded stuff (streams) from a file (AVPackets received) 2) open lavc context for the format in lavf context and use the feed/receive API to get AVFrames
[00:21:46 CEST] <JEEB> see under docs/examples
[00:21:51 CEST] <JEEB> the transcoding one for example
[00:26:16 CEST] <Elronnd\srn> ok
[00:30:51 CEST] <JEEB> and before you find "av_read_frame"
[00:30:59 CEST] <JEEB> that, yes, is a very misleading name
[00:31:04 CEST] <JEEB> since it returns AVPackets
[00:31:39 CEST] <JEEB> https://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html#details
[00:31:41 CEST] <JEEB> lavf
[00:32:32 CEST] <JEEB> and then the actual lavc decoding/encoding (AVpackets to AVFrames and back) part https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
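A minimal sketch of the loop JEEB describes, assuming fmt_ctx (the opened libavformat input), dec_ctx (the opened libavcodec decoder) and video_stream_index were set up beforehand as in the docs/examples programs; error handling is omitted:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* pull AVPackets out of the demuxer and AVFrames out of the decoder */
    static void extract_frames(AVFormatContext *fmt_ctx, AVCodecContext *dec_ctx,
                               int video_stream_index)
    {
        AVPacket *pkt   = av_packet_alloc();
        AVFrame  *frame = av_frame_alloc();

        /* av_read_frame() returns demuxed AVPackets despite its name */
        while (av_read_frame(fmt_ctx, pkt) >= 0) {
            if (pkt->stream_index == video_stream_index &&
                avcodec_send_packet(dec_ctx, pkt) >= 0) {
                /* drain every AVFrame the decoder has ready */
                while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
                    /* frame->data / frame->format now hold one decoded picture */
                    av_frame_unref(frame);
                }
            }
            av_packet_unref(pkt);
        }
        av_frame_free(&frame);
        av_packet_free(&pkt);
    }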
[01:51:36 CEST] <kevev> Howdy
[01:52:20 CEST] <kevev> Any way to send a text stream of closed captioning to ffmpeg for embedding in a video stream? This will need to be closed captioning and not open captioning.
[01:52:36 CEST] <kevev> cea-708/eia-708
[01:52:42 CEST] <kevev> Thanks in advance :)
[01:58:39 CEST] <jkqxz> kevev:  Make the CEA-708 stream (at the cc_data_pkt level), attach it to your input frames as AV_FRAME_DATA_A53_CC, then encode with an encoder supporting that (libx264) and you'll get A/53-style closed captions in the output.
[02:00:58 CEST] <kevev> jkqxz: Thanks, but I don't understand.  :(
[02:04:11 CEST] <jkqxz> Which part?
[02:08:55 CEST] <kevev> jkqxz: cc_data_pkt level, AV_FRAME_DATA_A53_CC
[02:10:20 CEST] <jkqxz> Is your input just text associated with frames, or do you have it in CEA-708 packets?
[02:11:55 CEST] <jkqxz> You need to get it into the CEA-708 packet format, specifically a series of the 3-byte cc_data_pkt elements.
[02:13:30 CEST] <jkqxz> You then attach it to the input frames using the side-data API, the type of that side-data being AV_FRAME_DATA_A53_CC.
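A rough sketch of the side-data step jkqxz describes, assuming cc_buf already holds cc_count packed 3-byte cc_data_pkt elements for the frame about to be encoded (cc_buf and cc_count are placeholder names):

    #include <stdint.h>
    #include <string.h>
    #include <libavutil/frame.h>
    #include <libavutil/error.h>

    /* attach the CEA-708 cc_data to the video frame as A/53 caption side data */
    static int attach_a53_cc(AVFrame *frame, const uint8_t *cc_buf, int cc_count)
    {
        AVFrameSideData *sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC,
                                                     cc_count * 3);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, cc_buf, cc_count * 3);
        /* the frame then goes to an encoder that understands it, e.g. libx264 */
        return 0;
    }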
[02:18:41 CEST] <kevev> jkqxz: We output SDI or h264 stream.
[02:19:14 CEST] <kevev> Hoping to use https://webcaptioner.com/
[02:19:59 CEST] <kevev> Working with the developer of Web Captioner to see if it can output a text stream that is compatible.
[02:37:13 CEST] <jkqxz> That site isn't doing anything with video, right?  So you want to get the text stream from it to add to a video stream you already have?
[02:48:57 CEST] <kevev> jkqxz: Yes correct.
[03:13:16 CEST] <YokoBR> hi there
[03:13:48 CEST] <YokoBR> is there any way to generate an mp4 file by feeding chunks to the ffmpeg process?
[03:14:17 CEST] <YokoBR> I mean, spawn a process and inject stdin with vp8 or vp9 webm to generate an mp4 file
[03:15:09 CEST] <YokoBR> currently I'm doing this by appending the chunks to an open file, then converting it to mp4.
[03:15:23 CEST] <YokoBR> but it takes more time than I expected
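One untested way to skip the temporary file would be to have the spawned ffmpeg read the WebM chunks straight from stdin, along the lines of:

    ffmpeg -i pipe:0 -c:v libx264 -c:a aac output.mp4

(VP9 can sometimes be stream-copied into MP4 with -c copy, depending on player support, but VP8 generally has to be re-encoded for an MP4 output.)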
[03:35:19 CEST] <kevev> jkqxz: Did you see my message?
[04:33:05 CEST] <hotbobby> is there any way to continue streaming even if you get a "WriteN, RTMP send error 104 "
[04:33:27 CEST] <hotbobby> it doesn't make sense that this error is fatal, because it's not as though the server is no longer available
[07:56:08 CEST] <squ> how do I record video from a .jpg file from a webcam?
[07:56:51 CEST] <squ> http://abell.as.arizona.edu/~hill/AllSkyCurrentImage.JPG
[08:29:05 CEST] <Jonno_FTW> anyone got experience running ffmpeg on an rpi3?
[08:29:38 CEST] <Jonno_FTW> I'm trying to do a youtube livestream from an ip camera but it's stuck at "Starting"
[09:07:16 CEST] <squ> I think odroid would be more appropriate
[09:08:20 CEST] <kalidasya> hi, is there a way to get info with ffmpeg about non-media streams? I have a TS with a custom stream in it (PSI only) and I want to at least see that it's there
[09:08:27 CEST] <kalidasya> I meant ffprobe
[09:08:31 CEST] <kalidasya> it's morning here :)
[09:08:50 CEST] <Mavrik> It should show it as at least "unknown"
[09:09:39 CEST] <kalidasya> really? but it does not. I see it sees the PID
[09:10:11 CEST] <kalidasya> it's PSI packets only
[09:10:56 CEST] <kalidasya> the only indication I see that it is there is if I change the loglevel: `[mpegts @ 0xf26be0] Filter: pid=0x46`
[09:11:12 CEST] <kalidasya> but nothing listed at the end
[09:11:37 CEST] <kalidasya> maybe because it has no PES packets?
[11:12:34 CEST] <squ> repeating my question
[11:12:57 CEST] <squ> how do I make a video from a .jpg http://abell.as.arizona.edu/~hill/AllSkyCurrentImage.JPG
[11:13:23 CEST] <squ> I'm using -loop 1 and -framerate at the moment
[11:16:24 CEST] <squ> how are -r and -framerate different?
[11:16:54 CEST] <squ> let's say I want it to request the webcam image 1 time each second
[11:17:17 CEST] <squ> and make a video where that 1 frame is 1 second
[11:17:39 CEST] <squ> or 2 frames are 1 second
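For what it's worth: -framerate is an input option of the image2/image2pipe (and similar) demuxers and sets how fast the input images are read, while -r as an output option sets the output frame rate by duplicating or dropping frames. One untested way to turn a periodically refreshed JPG URL into a video is to fetch it in a loop and pipe the JPEGs in (output name is a placeholder), e.g.:

    while true; do curl -s http://abell.as.arizona.edu/~hill/AllSkyCurrentImage.JPG; sleep 1; done | \
      ffmpeg -f image2pipe -c:v mjpeg -framerate 1 -i - -c:v libx264 -r 25 allsky.mp4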
[11:20:46 CEST] <paulk-gagarine> hi there
[11:21:03 CEST] <paulk-gagarine> is anyone around here familiar with using hwdevice/hwframe for VAAPI
[11:21:15 CEST] <paulk-gagarine> in order to do DRM_PRIME mapping
[11:21:24 CEST] <paulk-gagarine> by any chance?
[11:21:45 CEST] <paulk-gagarine> I'm not really sure about the sequence of calls and I can't really find any example out there
[11:21:52 CEST] <jkqxz> Yes.
[11:22:45 CEST] <jkqxz> What do you want to do?
[11:24:25 CEST] <paulk-gagarine> one sec, pasting code
[11:24:54 CEST] <paulk-gagarine> jkqxz, the idea is to end up with a DRM Prime ffmpeg format
[11:24:59 CEST] <paulk-gagarine> for vaapi
[11:25:11 CEST] <paulk-gagarine> jkqxz, I'm doing this: http://leonov.paulk.fr/collins/~paulk/paste/index.php?paste=51a291824b73ad809138639c74454b23&raw=true
[11:26:51 CEST] <jkqxz> How are you setting up the AVCodecContext?
[11:27:20 CEST] <paulk-gagarine> m_pCodecContext = avcodec_alloc_context3(pCodec);
[11:27:34 CEST] <jkqxz> If you set AVCodecContext.hw_device_ctx only then libavcodec creates the output frames context internally on that device.
[11:28:07 CEST] <jkqxz> If you want to use a frames context you created yourself then you need to set AVCodecContext.hw_frames_ctx in the get_format() callback.
[11:28:07 CEST] <paulk-gagarine> oh so that's one part I'm missing already!
[11:28:26 CEST] <paulk-gagarine> jkqxz, can the frames context be created automatically for me?
[11:29:39 CEST] <jkqxz> Yes.
[11:30:27 CEST] <jkqxz> I suggest looking at the hw_decode example in doc/examples/hw_decode.c for that case.
[11:38:22 CEST] <paulk-gagarine> will do, thanks jkqxz
[11:38:29 CEST] <paulk-gagarine> I'll get back to you if I have more questions :)
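For reference, a condensed sketch of the device-only path jkqxz describes, loosely modelled on doc/examples/hw_decode.c; error handling is abbreviated and the function name is made up:

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* create a VAAPI device and hand it to the decoder before avcodec_open2();
     * libavcodec then creates the hardware frames context internally */
    static int attach_vaapi_device(AVCodecContext *avctx)
    {
        AVBufferRef *device_ref = NULL;
        int ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_VAAPI,
                                         NULL /* default device */, NULL, 0);
        if (ret < 0)
            return ret;
        avctx->hw_device_ctx = av_buffer_ref(device_ref);
        av_buffer_unref(&device_ref);
        return 0;
    }

hw_decode.c also installs a get_format() callback, which comes up again further down in this discussion.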
[11:55:29 CEST] <sado> Hello, where can I download the ffmpeg source without git, and is it possible to compile an ffmpeg version with all the libraries that exist for ffmpeg?
[11:56:49 CEST] <twnqx> i am for some annoying reason trying to convert my flac collection to alac. the flacs all have cover art embedded. ffmpeg can read cover art from existing alac files, where it has been added with itunes. however, it claims it doesn't know a codec id for mjpeg cover art in .m4a when writing: https://pastebin.com/BCi8SNYR - any ideas?
[12:01:28 CEST] <twnqx> oh, i see there's an open issue that has been closed 2 months ago...
[12:03:32 CEST] <twnqx> well, and fixed last friday.
[12:03:45 CEST] <twnqx> is there already a release out with that, or do i have to switch to git?
[12:05:42 CEST] <sado> Are these the supported libs only ? --> https://www.ffmpeg.org/general.html
[12:09:51 CEST] <sado> Can I get an answer pls ?
[12:29:55 CEST] <hao> how to compile ffmpeg with libsrt enabled?
[12:30:03 CEST] <hao> ./configure --prefix=/home/hao/repo/Transcoder/Release/ffmpeg --enable-shared --enable-demuxer='mpegts,mpegvideo,image2' --enable-muxer=mpegts --enable-protocol='file,udp,rtp,srt' --enable-filter=overlay --enable-zlib --enable-libsrt --disable-doc
[12:30:13 CEST] <hao> libavformat/libsrt.c:24:10: fatal error: srt/srt.h: No such file or directory  #include <srt/srt.h>
[12:30:29 CEST] <hao> how to compile ffmpeg with libsrt enabled
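That error just means configure can't find the libsrt development files; ffmpeg's configure looks them up with pkg-config, so assuming libsrt was installed into its own prefix (the path below is a placeholder), something like this usually helps:

    export PKG_CONFIG_PATH=/path/to/srt/prefix/lib/pkgconfig:$PKG_CONFIG_PATH
    ./configure ... --enable-libsrt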
[13:32:18 CEST] <sado> Are these the supported libs only ? --> https://www.ffmpeg.org/general.html
[13:38:19 CEST] <jkqxz> Supported in what sense?  There are a lot of usable external libraries; "configure --help" should list everything.
[13:48:23 CEST] <sado> ah thy :D
[13:51:32 CEST] <paulk-gagarine> jkqxz, I'm still very confused about the API and the example does not cover my use case :(
[13:51:41 CEST] <paulk-gagarine> jkqxz, more specifically, am I supposed to use av_hwframe_get_buffer for my use case?
[13:52:02 CEST] <paulk-gagarine> and if so, should I do it every time I want to decode a frame, or just once at first?
[13:52:14 CEST] <paulk-gagarine> It's rather unclear what role each function plays
[13:58:16 CEST] <jkqxz> If you use the method where you only provide the device (as in that example), then the decoder will return AV_PIX_FMT_VAAPI frames from a frames context it created internally on the device you gave it.
[13:58:26 CEST] <jkqxz> You don't need to deal with get_buffer in that case.
[14:00:15 CEST] <paulk-gagarine> jkqxz, but I can't do that if I want DRM_PRIME format in the end, right?
[14:00:32 CEST] <paulk-gagarine> my whole goal is to use the map function to translate AV_PIX_FMT_VAAPI to AV_PIX_FMT_DRM_PRIME
[14:01:03 CEST] <paulk-gagarine> so far, I understood that I need a hw device and hw frames
[14:01:05 CEST] <paulk-gagarine> for that
[14:01:56 CEST] <atomnuker> yes, the most correct way is to create a device ref and frames ref, attach them to an avframe with pix format of drm_prime and then map the vaapi frame to the drm frame
[14:02:13 CEST] <jkqxz> You want AV_PIX_FMT_VAAPI frames out of the decoder.  You then map them to AV_PIX_FMT_DRM_PRIME after.
[14:02:22 CEST] <jkqxz> Or at least, that's what you'll get in this case.
[14:02:31 CEST] <paulk-gagarine> do I need two distinct frames then?
[14:02:45 CEST] <paulk-gagarine> one for getting VAAPI out of the decoder and one for the mapping result?
[14:03:02 CEST] <jkqxz> If you need the mapping the other way around (you start with DRM objects and want to VAAPI decode to them) that is also possible but needs to use a slightly different API.
[14:03:03 CEST] <paulk-gagarine> (two distinct AVFrame)
[14:03:15 CEST] <paulk-gagarine> right, I'm only interested in decode at this point
[14:03:22 CEST] <paulk-gagarine> so one way will do
[14:03:51 CEST] <jkqxz> Yes, you have two distinct AVFrame structures, one of each type.  (Which you pass to av_hwframe_map().)
[14:04:58 CEST] <paulk-gagarine> and in order to set the type, I just have to alloc and manually set frame->format for the dst frame?
[14:05:10 CEST] <paulk-gagarine> (to DRM_PRIME)
[14:07:14 CEST] <atomnuker> yes
[14:08:38 CEST] <paulk-gagarine> thanks
[14:08:43 CEST] <paulk-gagarine> and which frame needs hw_frames_ctx set?
[14:11:21 CEST] <atomnuker> both, the vaapi one with vaapi's hw_frames_ctx and the drm one with the drm frames context
[14:11:53 CEST] <paulk-gagarine> wait, so I need two hw frame contexts?
[14:12:43 CEST] <paulk-gagarine> I only have one for VAAPI
[14:12:51 CEST] <paulk-gagarine> or do I just take two refs
[14:12:55 CEST] <paulk-gagarine> from the VAAPI one
[14:14:48 CEST] <paulk-gagarine> I definitely don't need to mmap the frame in the end
[14:15:29 CEST] <paulk-gagarine> atomnuker, jkqxz: is a drm hw frame context required, even if only to satisfy API requirements?
[14:15:37 CEST] <paulk-gagarine> I supposed I had misunderstood what mapping does
[14:15:54 CEST] <paulk-gagarine> I thought it was mapping between formats, but maybe it's mapping between hw frame contexts?
[14:16:30 CEST] <paulk-gagarine> I'd love some clarification here, sorry I'm so confused about all this :)
[14:20:07 CEST] <atomnuker> for maximum correctness, yes, it's required, though you should be able to just set the drm prime pixformat on the second frame and map it
[14:21:37 CEST] <ariyasu_> https://i.imgur.com/LYE0aIx.jpg
[14:21:47 CEST] <ariyasu_> anyone know why my subs display out of order?
[14:22:04 CEST] <ariyasu_> using ffmpeg to go from .vtt to .srt or .ass causes them to display like this
[14:26:23 CEST] <paulk-gagarine> atomnuker, thanks -- also, what should I set as sw_format and format for the DRM hw frames context? both to AV_PIX_FMT_DRM_PRIME?
[14:28:43 CEST] <atomnuker> no, the sw format should match the vaapi swformat
[14:29:37 CEST] <paulk-gagarine> ok
[14:29:38 CEST] <paulk-gagarine> thanks
[14:29:42 CEST] <CoreX> ariyasu_ have you tried using subtitle edit
[14:29:54 CEST] <CoreX> also what version of ffmpeg are you using
[14:30:13 CEST] <ariyasu_> just testing with subtitle edit now
[14:30:44 CEST] <ariyasu_> I'm using the latest Windows build from zeranoe
[14:30:46 CEST] <paulk-gagarine> atomnuker, is there a way to enumerate what mappings exist between two hw frame contexts?
[14:30:48 CEST] <jkqxz> paulk-gagarine:  If you need the matching DRM device/frames contexts they can be made with av_hwdevice_ctx_create_derived() / av_hwframe_ctx_create_derived().
[14:31:15 CEST] <CoreX> if you use an older build and its fine then its something to do with the newer build
[14:31:23 CEST] <jkqxz> In the general case that is required, but since DRM doesn't need it you can shortcut it if you never intend to use this code for anything else.
[14:31:57 CEST] <paulk-gagarine> jkqxz, ahh, so I can create a hwframe context for VAAPI and then derive it with the drm hw device?
[14:32:02 CEST] <jkqxz> There is no way to enumerate what mappings are supported in general - it's basically just "try it and see".
[14:32:11 CEST] <jkqxz> Yes.
[14:32:54 CEST] <paulk-gagarine> jkqxz, alright, noted
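Putting the answers above together, a rough, untested sketch of the "shortcut" mapping where only the destination format is set; for the fully general route you would first build the derived DRM device/frames contexts with av_hwdevice_ctx_create_derived()/av_hwframe_ctx_create_derived() as jkqxz says:

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_drm.h>

    /* map a decoded AV_PIX_FMT_VAAPI frame to an AV_PIX_FMT_DRM_PRIME frame */
    static int map_to_drm_prime(const AVFrame *va_frame, AVFrame **out)
    {
        AVFrame *drm_frame = av_frame_alloc();
        int ret;

        drm_frame->format = AV_PIX_FMT_DRM_PRIME;   /* request a DRM PRIME mapping */
        ret = av_hwframe_map(drm_frame, va_frame, AV_HWFRAME_MAP_READ);
        if (ret < 0) {
            av_frame_free(&drm_frame);
            return ret;
        }
        /* drm_frame->data[0] now points to an AVDRMFrameDescriptor
         * (DRM objects, layers and planes) for zero-copy import;
         * unreffing/freeing drm_frame releases the mapping */
        *out = drm_frame;
        return 0;
    }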
[14:34:37 CEST] <atomnuker> you should really avoid deriving any devices from the drm device context
[14:34:48 CEST] <atomnuker> because for the drm device context you need to be root
[14:36:03 CEST] <paulk-gagarine> atomnuker, that's the case here
[14:37:44 CEST] <atomnuker> well, if you've got the display fd open go ahead and populate it
[14:38:07 CEST] <atomnuker> alloc a drm device context via av_hwdevice_ctx_alloc(), plug the fd into hwctx->fd and call av_hwdevice_ctx_init()
[14:41:20 CEST] <ariyasu_> i got it working with subtitle edit and merging same timestamps, https://i.imgur.com/frQ2cGW.png
[14:41:20 CEST] <jkqxz> [That fd isn't actually necessary for this case, you can just not set it.]
[14:41:22 CEST] <ariyasu_> thanks CoreX
[14:45:55 CEST] <atomnuker> jkqxz: yes, I already mentioned that 2 times
[14:46:05 CEST] <atomnuker> plz review my patches
[14:53:02 CEST] <paulk-gagarine> ok so with all that stuff setup, I'm now at avcodec_receive_frame
[14:53:16 CEST] <paulk-gagarine> and the populated frame has ->format == 0
[14:58:11 CEST] <paulk-gagarine> atomnuker, jkqxz: anything else I should be doing?
[14:58:29 CEST] <paulk-gagarine> between avcodec_receive_frame and av_hwframe_map
[14:59:19 CEST] <paulk-gagarine> oh wait, I think I inverted something
[15:30:13 CEST] <jkqxz> paulk-gagarine:  Have you set a get_format() callback like the example does?  That is required, because it doesn't know it has to give you VAAPI frames unless you tell it.
[15:30:55 CEST] <jkqxz> (Make sure the VAAPI format is actually in the list - if you give it an unsupported stream then it won't be (e.g. 10-bit H.264), and only software decode will work in that case.)
[15:38:18 CEST] <paulk-gagarine> jkqxz, oh okay, I wasn't doing that
[15:38:18 CEST] <paulk-gagarine> thanks
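A minimal get_format() callback of the kind jkqxz describes; it just picks AV_PIX_FMT_VAAPI out of the offered list and gives up if it isn't there (e.g. an unsupported profile such as 10-bit H.264):

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat get_vaapi_format(AVCodecContext *avctx,
                                               const enum AVPixelFormat *fmts)
    {
        for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VAAPI)
                return *p;
        }
        /* VAAPI was not offered for this stream; only software decode would work */
        return AV_PIX_FMT_NONE;
    }

    /* set before avcodec_open2(): m_pCodecContext->get_format = get_vaapi_format; */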
[15:40:38 CEST] <MindSpark> hi, I tried asking this in mplayer, but this channel seems to be more active. I have a very slow CPU and a GoPro video which I would like to play using mplayer. It plays very slowly, and I am thinking that if I can scale down the video before playing it, that would help.
[15:41:00 CEST] <MindSpark> Does anyone know what options I should pass to reduce the processing needed to render it as much as possible?
[15:41:45 CEST] <Mavrik> You'd like to do that in realtime?
[15:45:11 CEST] <iive> that's not what you asked in mplayer :P
[16:26:18 CEST] <MindSpark> Mavrik, what I mean is that I don't want to convert to another file and then play that file, but rather encode it and play it at the same time
[16:26:45 CEST] <Mavrik> MindSpark: yeah, decoding and resampling will be more expensive than just decoding and playing it :)
[16:26:55 CEST] <Mavrik> MindSpark: don't you have a hardware video decoder in your machine?
[16:28:51 CEST] <MindSpark> I doubt it. It's an EEE PC, probably first gen, with a 1.6GHz Atom processor
[16:30:00 CEST] <Mavrik> uhh.
[16:30:08 CEST] <Mavrik> Yeah, that won't work.
[16:35:04 CEST] <kepstin> first gen eeepc had a pentium 3 mobile, so that's a somewhat newer system ;)
[16:40:36 CEST] <MindSpark> So the processor is simply too weak?
[16:43:17 CEST] <kepstin> processor is too weak, and the igpu is too old to be able to hardware decode, yeah.
[17:06:57 CEST] <MindSpark> alright, thanks
[19:26:01 CEST] <TheAMM> Is there some flag to fix seeking in the broken 10bit 444 x264 encodes?
[19:26:50 CEST] <TheAMM> I mean, seeking works, but the input data is jumbled because some packet about the 10bit or the 444 (I don't know the specific details) is skipped when seeking directly into the file
[19:27:19 CEST] <TheAMM> Looks like this https://mygi.ga/rrn/adSLX.webm
[19:27:29 CEST] <kepstin> TheAMM: what container format?
[19:27:46 CEST] <TheAMM> mkv in this instance
[19:28:15 CEST] <kepstin> very strange, I'd expect seeking to work fine in that assuming keyframes are correctly marked
[19:28:31 CEST] <TheAMM> wm4 made a patch for mpv to go back and read the tidbit
[19:29:12 CEST] <kepstin> what tool was used to make the file in the first place? ffmpeg?
[19:29:28 CEST] <TheAMM> Probably
[19:29:35 CEST] <TheAMM> Not my file
[19:29:39 CEST] <kepstin> (if not, does simply remuxing the file with ffmpeg -c copy fix it?)
[19:29:44 CEST] <TheAMM> Nope
[19:29:55 CEST] <TheAMM> I don't have an issue with it
[19:30:00 CEST] <TheAMM> mpv handles it, that's enough for me
[19:30:13 CEST] <TheAMM> But I want to know if there's a flag to fix it that I'm unaware of
[19:30:35 CEST] <TheAMM> I can get you a sample if you want to take a look
[19:31:50 CEST] <kepstin> i haven't heard of this issue before, and I doubt there's anything specific for fixing this. It sounds like a buggy muxer :/
[19:32:34 CEST] <kepstin> if ffmpeg's seeking (with -ss) is also broken with this file - or if this broken file was made with ffmpeg, a sample and bug report would be helpful.
[19:35:13 CEST] <TheAMM> Eh, I don't care enough to do a full bug report
[19:35:23 CEST] <TheAMM> I'll get you a dd'd sample in a minute
[19:35:58 CEST] Action: kepstin doesn't really know enough about codec/format internals to do much about this issue himself.
[19:38:53 CEST] <TheAMM> https://mygi.ga/pyA/aecCO.mkv
[19:39:16 CEST] <TheAMM> Works: ffmpeg -i broken_x264_10b_444.mkv -t 2 -pix_fmt yuv422p -y out.mkv
[19:39:28 CEST] <TheAMM> Breaks: ffmpeg -ss 30 -i broken_x264_10b_444.mkv -t 2 -pix_fmt yuv422p -y out.mkv
[19:39:57 CEST] <kepstin> hmm, and that was muxed with mkvmerge (presumably to add the attachments, set metadata, etc.)
[19:42:05 CEST] <TheAMM> Here's wm4's PR for mpv https://github.com/mpv-player/mpv/pull/5290
[19:42:27 CEST] <TheAMM> The one that got merged is linked below
[19:51:37 CEST] <kepstin> oh, this is a workaround for a buggy version of libx264 :/
[19:53:06 CEST] <kepstin> adding the "-x264_build 150" option on the ffmpeg command line, like mpv does internally, appears to work for allowing ffmpeg to decode it
[19:53:13 CEST] <kepstin> (input option)
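Applied to the failing command from before, that would presumably be:

    ffmpeg -x264_build 150 -ss 30 -i broken_x264_10b_444.mkv -t 2 -pix_fmt yuv422p -y out.mkv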
[19:53:51 CEST] <TheAMM> ere we go en
[19:55:53 CEST] <kepstin> the real way to fix the file would be to re-encode it with a non-buggy version of x264 :/
[20:01:54 CEST] <kepstin> I wonder why that's a 4:4:4 encode in the first place, I can't think of any video source for anime that would give you anything other than 4:2:0
[20:01:58 CEST] <kepstin> very strange.
[20:02:11 CEST] <TheAMM> 1080p downscaled to 720p
[20:04:27 CEST] <kepstin> so yeah. summary of the bug: "old x264 versions are buggy at encoding 10bit 4:4:4. ffmpeg's decoder has a workaround, but it only applies the workaround automatically if it sees the first stream packet - which has the x264 version in an info element. therefore starting playback at a seeked position doesn't apply the workaround"
[20:07:14 CEST] <kepstin> i don't think there's any way to modify the file - short of re-encoding the video - that would fix the issue in all players.
[20:07:49 CEST] <TheAMM> I don't mind it
[22:45:47 CEST] <remlap> hi does anyone have any av1 samples, can only find some on elecards site in webm and ffplay doesnt seem to like those
[22:49:00 CEST] <kepstin> have they actually finalized the bitstream now? Last I heard they were still making incompatible changes
[22:49:12 CEST] <TheAMM> haha, kepstin, went to look at how the x264_build option acts (i.e. can I specify it as a precaution and not break stuff [yes, I can]) and wm4 made the commit on ffmpeg, too
[22:50:22 CEST] <JEEB> yup
[22:50:34 CEST] <JEEB> kepstin: it is not finished
[22:50:53 CEST] <JEEB> just for the (non-)amusement of poor GSoC students who were tasked to make an AV1 parser for gstreamer
[22:51:11 CEST] <JEEB> remlap: but yea basically to play AV1 you need the decoder to be the exact same version as the encoder
[22:51:14 CEST] <kepstin> remlap: if you want samples that work with your av1 decoder, encode them with the matching encoder :)
[22:51:16 CEST] <JEEB> otherwise ain't gonna work
[22:51:31 CEST] <JEEB> rav1e is one alternative encoder that can currently do 4fps at 480p
[22:51:38 CEST] <JEEB> https://github.com/xiph/rav1e
[22:51:50 CEST] <JEEB> has the same limitation that it's sync'd for some things against libaom
[22:51:58 CEST] <JEEB> so those need to match hash-wise
[22:52:13 CEST] <JEEB> but it's at least not libaom slow
[22:54:01 CEST] <someuser> Does anyone know why when I use -c:a aac -b:a 128k I am getting an 8kb output?
[22:54:16 CEST] <kepstin> someuser: are you encoding silence? :)
[22:54:17 CEST] <JEEB> post full command line and log into pastebin or so
[22:54:20 CEST] <JEEB> and link here
[22:55:06 CEST] <someuser> https://pastebin.com/raw/Kc50P0TH
[22:55:27 CEST] <kepstin> well, that's neither the full command line nor the log
[22:56:04 CEST] <JEEB> also adding -v verbose is probably a good thing, too
[23:00:34 CEST] <ntd> can ffmpeg do channel-switching on v4l2/bt878/bttv devices, deinterlace the input and make it available as h264 through rtsp on localhost?
[23:00:35 CEST] <someuser> kepstin: how is that not the full command?
[23:01:49 CEST] <kepstin> someuser: at a minimum, missing the "ffmpeg" executable, and the input and output filenames.
[23:02:07 CEST] <someuser> I didn't think that was relevant to the question.
[23:02:19 CEST] <kepstin> someuser: but the log output would probably be more helpful than that info
[23:02:55 CEST] <someuser> how do I export a log file on conversion? I am using Windows.
[23:03:16 CEST] <kepstin> someuser: run ffmpeg in a console window, copy/paste the output
[23:03:26 CEST] <JEEB> at the end of the command | 2> ffmpeg_sucks.log|
[23:03:29 CEST] <kepstin> (i think that's easier to do now that windows 10 improved the console)
[23:03:33 CEST] <JEEB> without the |
[23:03:51 CEST] <someuser> so add         2> ffmpeg_sucks.log           to the end?
[23:03:59 CEST] <JEEB> without the spacing but yes
[23:04:06 CEST] <JEEB> 2> is "redirect stderr"
[23:04:14 CEST] <kepstin> does that even work on windows?
[23:04:16 CEST] <JEEB> and then the file name to which to redirect
[23:04:17 CEST] <JEEB> yes
[23:04:19 CEST] <JEEB> yes it does
[23:04:25 CEST] <kepstin> huh, cool.
[23:04:33 CEST] <JEEB> it will give you unix endlines since CRLF is not needed in terminal
[23:04:52 CEST] <JEEB> but I think even notepad in windows can now handle it :P
[23:05:00 CEST] <JEEB> *windows 10
[23:05:11 CEST] <remlap> kepstin: thanks very much thought as much
[23:05:21 CEST] <remlap> RE: av1
[23:05:34 CEST] <JEEB> ntd: not sure, esp. dynamically
[23:05:51 CEST] <JEEB> ntd: I would just make a simple application based on the FFmpeg APIs
[23:06:07 CEST] <remlap> libaom is teeth grindingly slow as JEEB said
[23:06:17 CEST] <someuser> https://pastebin.com/raw/yXWZkCt0
[23:06:19 CEST] <someuser> log
[23:06:40 CEST] <JEEB> that didn't finish yet
[23:06:48 CEST] <JEEB> and yea, you get a lot of stats
[23:07:08 CEST] <JEEB> also are you sure you want 5.1 at 128kbps?
[23:07:26 CEST] <someuser> what would be an ideal value
[23:07:37 CEST] <JEEB> dunno, I wonder if the AAC encoder supports quantizers
[23:07:45 CEST] <kepstin> ideal would be to just do -c:a copy if possible
[23:07:50 CEST] <JEEB> or that yes
[23:07:53 CEST] <JEEB> it's just 384kbps
[23:07:59 CEST] <kepstin> but i assume you might want aac for device support or something
[23:08:10 CEST] <someuser> yeah i prefer aac
[23:08:23 CEST] <kepstin> in which case, consider downmixing to stereo with "-ac 2"
[23:08:44 CEST] <kepstin> if you just "prefer aac", well, don't - avoid doing lossy transcodes unless you have a really good reason ;)
[23:09:04 CEST] <someuser> :D
[23:09:56 CEST] Last message repeated 1 time(s).
[23:09:56 CEST] <kepstin> i suppose the only real reason to re-encode a file like this would be to make it smaller, so downmixing to stereo and using aac at 128kbit might make sense then
[23:10:22 CEST] <someuser> what would i change in my command line for that
[23:10:47 CEST] <kepstin> add the output option "-ac 2"
[23:11:05 CEST] <someuser> -ac 2 aac -b:a 128k
[23:11:08 CEST] <someuser> i assume?
[23:11:17 CEST] <JEEB> -c:a aac -b:a 128k and the thing
[23:11:34 CEST] <JEEB> although to be honest the AAC encoder (internal) has a logic by default
[23:11:36 CEST] <JEEB> to pick a bit rate
[23:11:49 CEST] <JEEB> which adds 128kbps for each channel pair
[23:12:11 CEST] <JEEB> so not setting a bit rate seems OK?
[23:12:19 CEST] <someuser> i was already using -c:a aac -b:a 128k
[23:12:30 CEST] <someuser> unless i need to add the -ac 2 in there as well
[23:13:00 CEST] <kepstin> someuser: right now it's encoding 5.1 at 128kbps, which is probably gonna give poor results. adding "-ac 2" will downmix to stereo.
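So, with placeholder file names and assuming the video is just stream-copied (the full command was never pasted), that would end up something like:

    ffmpeg -i input.mkv -c:v copy -c:a aac -b:a 128k -ac 2 output.mkv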
[23:13:04 CEST] <JEEB> I'd prefer aformat with mapping set to stereo, but that will set the output context -ac
[23:13:20 CEST] <JEEB> which by default has the channel mapping of stereo
[23:13:30 CEST] <kepstin> that said, what were you using to find the audio bitrate originally, when you said it was only 8kbps?
[23:13:49 CEST] <kepstin> because that does seem kind of odd, and nothing in the log you pasted explains it
[23:14:01 CEST] <someuser> right clicking and viewing the settings
[23:14:35 CEST] <kepstin> err, viewing the settings with what tool?
[23:15:13 CEST] <someuser> Windows right click file -> settings -> details
[23:15:41 CEST] <JEEB> try ffprobe
[23:15:47 CEST] <JEEB> ffprobe -v verbose file
[23:15:52 CEST] <JEEB> file being the file you created
[23:16:24 CEST] <kepstin> yeah, i wouldn't be surprised if the windows thing is only looking at the first packet or is otherwise just wrong
[23:17:04 CEST] <kepstin> i mean, the exact output bitrate doesn't really matter as long as it sounds ok
[23:17:48 CEST] <kepstin> if the problem you had was "it doesn't sound very good", switching to stereo instead of 5.1 while keeping the bitrate at 128k should fix it.
[23:20:08 CEST] <someuser> https://pastebin.com/raw/iW7vLyBT
[23:24:27 CEST] <JEEB> could be the result of the encoder not having enough bits for 5.1, or of the 5.1 content being very simple (not sure if it does bit-filling when it's not needed)
[23:25:43 CEST] <someuser> not sure what that means, but I assume the file originally created is the issue?
[23:29:38 CEST] <JEEB> well that ffprobe shows it still being 5.1
[23:29:44 CEST] <JEEB> so you clearly haven't downmixed it
[23:29:56 CEST] <someuser> so is my command wrong then?
[23:30:52 CEST] <JEEB> "-af 'aformat=sample_fmts=fltp:channel_layouts=stereo'"
[23:31:03 CEST] <JEEB> try this in addition to your current parameters, post log
[23:31:08 CEST] <someuser> ok
[23:31:22 CEST] <JEEB> (I know this is longer than -ac 2, but I want my specific channel layout dang it 8))
[23:31:48 CEST] <JEEB> it might work without the sample_fmts part, but I just copied the parameters off of documentation :P
[23:32:14 CEST] <JEEB> anyways, I'm going to go sleep since in 5.5 hours I'm seemingly going to be woken up
[23:32:21 CEST] <someuser> No such filter: 'aformat=sample_fmts=fltp:channel_layouts=stereo'
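The filter name in that error is the entire string, which is what happens when the Windows shell passes the single quotes through literally and ffmpeg's filtergraph parser then treats the whole quoted text as one name; on cmd.exe, double quotes usually work instead:

    -af "aformat=sample_fmts=fltp:channel_layouts=stereo"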
[23:42:41 CEST] <JOGARA> have they taken out Decklink output support with the Windows builds at zeranoe.com?
[00:00:00 CEST] --- Fri Jun 22 2018


More information about the Ffmpeg-devel-irc mailing list