[Ffmpeg-devel-irc] ffmpeg.log.20171110

burek burek021 at gmail.com
Sat Nov 11 03:05:01 EET 2017


[00:00:10 CET] <JEEB> but yea, that looks really similar to how decoding hwaccels work
[00:00:18 CET] <JEEB> you create the hwdevice
[00:01:31 CET] <JEEB> and then you create the codec, and then you pass the device to the AVCodecContext's hw_device_ctx
[00:02:40 CET] <JEEB> although it lacks the pix_fmt jumping
[00:02:51 CET] <JEEB> which you do with hwaccel decoding
[00:03:06 CET] <SortaCore> how does that jumping work?
[00:03:16 CET] <JEEB> see the hwaccel decoding example
[00:03:30 CET] <JEEB> I wouldn't be surprised if it worked by just adding the QSV pix_fmt there
[00:03:41 CET] <JEEB> and then calling for the qsv hwaccel
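(A rough sketch of the flow described above: create the hwdevice, pass it to the AVCodecContext's hw_device_ctx, and return the hardware pix_fmt from get_format. This is an untested outline modeled loosely on doc/examples/hw_decode.c; the choice of H.264 and AV_HWDEVICE_TYPE_QSV here is an assumption.)

```c
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* The "pix_fmt jumping": pick the hardware format when it is offered. */
static enum AVPixelFormat get_qsv_format(AVCodecContext *ctx,
                                         const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_QSV)
            return *p;
    return AV_PIX_FMT_NONE;
}

int open_qsv_decoder(AVCodecContext **out)
{
    AVBufferRef *device = NULL;
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    int ret;

    if (!ctx)
        return AVERROR(ENOMEM);

    /* 1. create the hwdevice */
    ret = av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_QSV,
                                 NULL, NULL, 0);
    if (ret < 0)
        goto fail;

    /* 2. pass the device to the codec context, hook the format callback */
    ctx->hw_device_ctx = av_buffer_ref(device);
    ctx->get_format    = get_qsv_format;

    ret = avcodec_open2(ctx, dec, NULL);
    if (ret < 0)
        goto fail;

    av_buffer_unref(&device);
    *out = ctx;
    return 0;

fail:
    av_buffer_unref(&device);
    avcodec_free_context(&ctx);
    return ret;
}
```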
[00:04:16 CET] <JEEB> SortaCore: I will just guess this is different in how it works between decoding and encoding
[00:05:01 CET] <JEEB> ah ok, AV_PIX_FMT_QSV is supported by the QSV H.264 encoder
[00:05:12 CET] <JEEB> so you can most likely feed the images from the QSV decoder
[00:05:15 CET] <JEEB> uhh
[00:05:16 CET] <JEEB> fuck
[00:05:25 CET] <JEEB> when I'm talking about the QSV decoder I mean hwaccel
[00:05:36 CET] <SortaCore> XD
[00:05:39 CET] <JEEB> because "hwaccel" is "you get HW textures/whatever" out
[00:05:52 CET] <JEEB> and "decoder" is "you get 'normal' pixel formats back"
[00:06:09 CET] <JEEB> that's why hwaccels generally tend to need additional poking
[00:07:21 CET] <JEEB> SortaCore: also do note that probing at the start is always done with the software decoder since you don't generally want to have the HW decoder doing that
[00:07:44 CET] <JEEB> "AV_CODEC_CAP_AVOID_PROBING" is set for pretty much all hw decoders/hwaccels
[00:08:09 CET] <JEEB> so you might see the native decoder logging at the very start if you're using libavformat for reading the input stream
[00:08:29 CET] <JEEB> the native one will be dropped afterwards of course
[00:08:35 CET] <JEEB> since you're not using it for actual decoding
[00:09:38 CET] <SortaCore> so it's used to start things off?
[00:09:49 CET] <JEEB> it's used by libavformat to probe the input streams
[00:09:57 CET] <SortaCore> ah yea I remember that
[00:10:26 CET] <JEEB> so if you were using some other framework for RTSP etc you wouldn't be having that of course
[00:11:15 CET] <SortaCore> ffmpeg's normally really good, it's just with QSV it seems to become ethereal
[00:11:35 CET] <JEEB> but yea, so far unless I'm going to be really surprised the QSV stuff seems like your bog standard usual hwaccel business
[00:11:43 CET] <JEEB> although not like I'm able to build it
[00:12:41 CET] <JEEB> oh wat
[00:12:46 CET] <SortaCore> QSV does have a fallback to full software mode, so it should be alright
[00:12:59 CET] <JEEB> "V5: remove qsv/cuda in the example and Mark have test dxva2|d3d11va, videotoolbox might work as well."
[00:13:08 CET] <JEEB> from the change log of that hwaccel decoding example
[00:13:17 CET] <SortaCore> I have a longgg document on how to build ffmpeg on windows
[00:13:36 CET] <SortaCore> written for someone who's completely new
[00:13:54 CET] <jkqxz> QSV isn't a hwaccel at all.  There is hackery using a fake hwaccel to make it output hardware surfaces, but that's just because lavc doesn't support it otherwise.
[00:13:58 CET] <alexpigment> SortaCore: that "full software mode" - at least for encoding - is wayyyyyy slower than x264 btw
[00:14:05 CET] <jkqxz> So it won't work using the hw_decode example, which is only for hwaccels.
[00:14:05 CET] <JEEB> jkqxz: LOL
[00:14:06 CET] <SortaCore> yea, I imagine it would be
[00:14:46 CET] <JEEB> jkqxz: so you can just open the decoder and ask for the QSV pix_fmt?
[00:14:50 CET] <SortaCore> aww, NVENC was going to be my next HWA attempt
[00:15:27 CET] <jkqxz> Yes.
[00:15:38 CET] <JEEB> oh well, ok
[00:16:23 CET] <SortaCore> err
[00:16:26 CET] <JEEB> oh lol, completely missed the separate qsvdec.c example :P
[00:16:37 CET] <JEEB> so there actually was a goddamn QSV specific example in the code base
[00:17:02 CET] <SortaCore> I actually have code I copied directly from an example, but commented
[00:17:06 CET] <SortaCore> out
[00:17:39 CET] <alexpigment> SortaCore: wait, do you also have Nvidia on your system?
[00:17:45 CET] <alexpigment> since you mentioned nvenc
[00:18:15 CET] <SortaCore> I do have it, but my client may not
[00:18:36 CET] <SortaCore> he does have QSV
[00:18:37 CET] <alexpigment> well, in my experience, trying to do hardware when you have both Intel and Nvidia on the same system is wonky
[00:18:45 CET] <alexpigment> on windows, you have to have a monitor plugged into each card
[00:19:02 CET] <alexpigment> i'd guess that's somewhat true across OSes
[00:19:20 CET] <JEEB> SortaCore: the example for decoding with qsv was updated to the new send/receive API in nov '16 and you have the intel example for encoding from august
[00:19:31 CET] <JEEB> so I'd say you're all set for basic dec-enc chain :P
[00:19:57 CET] <SortaCore> let's hope I figure it out this time
[00:20:08 CET] <SortaCore> the one from patch is using pixel format NV12, not QSV
[00:20:13 CET] <JEEB> yes
[00:20:25 CET] <JEEB> although if your image is from the QSV hwaccel you can get the QSV type
[00:20:27 CET] <SortaCore> so if I switch to QSV it uses hardware?
[00:20:36 CET] <SortaCore> or is it fakery like aforementioned
[00:20:57 CET] <JEEB> it should always use hardware if the driver/whatever supports it, but NV12 means uploading from normal memory to the "GPU" memory
[00:21:15 CET] <JEEB> while the QSV hwaccel pix_fmt requests the images in the "GPU" memory
[00:21:23 CET] <JEEB> and it seems like those can be passed on into the encoder
[00:21:33 CET] <JEEB> so I recommend you start with baby steps
[00:21:45 CET] <JEEB> if you see the blackness NV12 encoding works :P
[00:21:47 CET] <SortaCore> the thing that's annoying me is I have no status reports to go by
[00:22:04 CET] <SortaCore> when is it full encoding, when half-baked, when software, etc
[00:22:24 CET] <JEEB> if it's using the QSV avcodecs FFmpeg is using the QSV libraries
[00:22:29 CET] <JEEB> what they do is up to the heavens
[00:22:35 CET] <SortaCore> it decodes and encodes now, but the hardware and method isn't apparent
[00:22:54 CET] <JEEB> you can basically print out the name from the AVCodec struct if you want
[00:23:11 CET] <SortaCore> I can plug a second screen into the intel card
[00:24:06 CET] <JEEB> and if you request an AVCodec by the name (instead of the codec ID, which is shared between all decoders of a single v/a/s format)
[00:24:13 CET] <JEEB> you should be getting the _qsv stuff
[00:24:29 CET] <JEEB> avcodec does not fall back by itself if things fail
[00:24:49 CET] <JEEB> the API client is expected to start falling back if bricks start falling down :)
[00:25:30 CET] <SortaCore> ...don't crash, Windows
[00:25:31 CET] <JEEB> (open another AVCodecContext with the software AVCodec in most cases, if the machine is still in a state)
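(The by-name lookup and manual fallback described above might be sketched like this; an untested outline, with the encoder names, the NV12 pix_fmt, and the 1/25 time base as illustrative assumptions.)

```c
#include <libavcodec/avcodec.h>

/* Request the QSV encoder by name; if opening it fails, fall back to a
 * software encoder ourselves, since avcodec will not do it for us. */
AVCodecContext *open_encoder_with_fallback(int w, int h)
{
    const char *names[] = { "h264_qsv", "libx264" };

    for (int i = 0; i < 2; i++) {
        const AVCodec *enc = avcodec_find_encoder_by_name(names[i]);
        AVCodecContext *ctx;

        if (!enc)
            continue;

        ctx = avcodec_alloc_context3(enc);
        if (!ctx)
            return NULL;

        ctx->width     = w;
        ctx->height    = h;
        ctx->time_base = (AVRational){ 1, 25 };
        ctx->pix_fmt   = AV_PIX_FMT_NV12;

        if (avcodec_open2(ctx, enc, NULL) == 0)
            return ctx;              /* this one opened fine */

        avcodec_free_context(&ctx);  /* bricks fell; try the next one */
    }
    return NULL;
}
```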
[00:26:17 CET] <SortaCore> I already have code to detect if intel QSV is there by opening hardware
[00:26:23 CET] <SortaCore> hwaaccel*
[00:26:38 CET] <SortaCore> let's try actually linking it to the codec
[00:26:50 CET] <JEEB> also lol, you have successfully prevented me to pull my old patches I wanted to hack on to the current FFmpeg HEAD
[00:27:03 CET] <JEEB> *me from pulling
[00:27:18 CET] <SortaCore> glad to be of assistance
[00:27:27 CET] <SortaCore> I've found bugs in Notepad before :p
[00:28:09 CET] <JEEB> anyways, good luck but I don't see anything special in the QSV hwaccel so if you utilize it like the examples it can end up as "crap that the driver has left me with"
[00:28:21 CET] <JEEB> esp. since you now have examples for both dec and enc
[00:31:12 CET] <FFstumped> I have this strange problem when using multiple filters: one filter rounds differently than the other
[00:31:22 CET] <FFstumped> for example: crop=w=iw:h=iw*(3/4):x=0:y=n*(ih-oh)/(1930), pad=iw*(3/4)*(16/9):iw*(3/4):(ow-iw)/2:(oh-ih)/2:black
[00:31:30 CET] <SortaCore> yea, I spent weeks on this last time
[00:31:31 CET] <FFstumped> where 1930 is the number of frames.
[00:31:54 CET] <FFstumped> in the crop filter, iw*(3/4) works out to 378
[00:31:56 CET] <SortaCore> so if you're staying up for me, don't ;)
[00:32:11 CET] <FFstumped> in the pad filter iw*3/4 works out to 376
[00:32:25 CET] <FFstumped> when really iw * 3/4 is 377.77
[00:32:34 CET] <FFstumped> i have no idea why one filter rounds up and the other filter rounds down
[00:33:36 CET] <FFstumped> then it bails with an error that the pad doesn't have enough space:
[00:34:10 CET] <FFstumped> Input area 74:0:524:338 not within the padded area 0:0:600:336 or zero-sized
[00:37:07 CET] <FFstumped> so i tried to work around this by precomputing the values and passing them in via a script. but then i ran into a different problem: when i get the width from ffprobe it gives me the height and width reversed sometimes, i.e. when the video has a rotation.
[01:41:35 CET] <dreamp> Hi there, how are you? What's the difference between an AVFrame->linesize[0] and AVFrame->height? When are they different?
[01:42:35 CET] <JEEB> https://ffmpeg.org/doxygen/trunk/structAVFrame.html#aa52bfc6605f6a3059a0c3226cc0f6567
[01:43:00 CET] <JEEB> so it is the amount of bytes until the same spot on the next line in the image
[01:43:13 CET] <dreamp> Thanks JEEB
[01:43:29 CET] <JEEB> and read the note as for why it is needed in addition to width
[01:43:41 CET] <dreamp> While I'm learning all this stuff
[01:43:53 CET] <dreamp> I'm trying to write a newbie tutorial
[01:43:54 CET] <dreamp> https://github.com/leandromoreira/ffmpeg-libav-tutorial#chapter-0---the-infamous-hello-world
[01:44:18 CET] <JEEB> doc/examples has plenty of stuff, and then I recommend linking to the autogenerated doxygen
[01:44:37 CET] <JEEB> (you can also generate stuff specific to your version from your code the same way)
[01:45:09 CET] <dreamp> thanks
[01:45:39 CET] <JEEB> searching "site:ffmpeg.org/doxygen/trunk Keyword" is usually useful in addition to the examples
[01:46:51 CET] <dreamp> the doubt came to me while I was using two different videos, one 1080x1980 and the other 16x16... in the first video I saw that the linesize was 1080, but in the second it was 64, and that made me think that it's different from width.
[01:47:28 CET] <k_sze> hmm, somebody said a few days ago that changing the rotation using -metadata:s:v is broken in recent versions of ffmpeg.
[01:47:37 CET] <k_sze> I forget who.
[01:47:58 CET] <JEEB> dreamp: it is
[01:48:05 CET] <JEEB> width is usable width
[01:48:11 CET] <k_sze> furq:
[01:48:29 CET] <JEEB> linesizes are the line sizes for each plane
[01:48:34 CET] <k_sze> furq: I'm using ffmpeg 3.4 on macOS.
[01:48:35 CET] <JEEB> aka "stride"
[01:48:51 CET] <k_sze> furq: changing the rotation works for me. It's just that the semantic is super weird.
[01:49:06 CET] <furq> k_sze: ?
[01:49:36 CET] <furq> it's definitely broken in 3.3.4
[01:49:40 CET] <dreamp> JEEB: supposing that the first plane is the Y, why would my 16x16 video have 64 as linesize?
[01:49:52 CET] <furq> unless the command changed and nobody in that bug report thread has realised for six months
[01:50:05 CET] <JEEB> dreamp: usually because a certain alignment is needed for optimization etc
[01:50:13 CET] <JEEB> so you actually have buffer between the lines
[01:50:28 CET] <JEEB> SIMD requires alignment of data
[01:50:40 CET] <JEEB> so it is actually the case you have to expect
[01:50:50 CET] <JEEB> that width is not the same as stride (line size)
[01:50:53 CET] <furq> -metadata:s:v:0 rotate=90 works in an old build i have from february but not in 3.3.4
[01:50:58 CET] <furq> and that bug report says much the same thing
[01:52:04 CET] <dreamp> yep "For video the linesizes should be multiples of the CPUs alignment preference, this is 16 or 32 for modern desktop CPU"
[01:52:17 CET] <dreamp> the doc you provided JEEB states what you just said =)
[01:52:20 CET] <k_sze> furq: So I have an original H.264 in .MOV file. The original tag:rotate in metadata is 90. Now if I try to change the rotation, these values work: 0, 180, -270
[01:52:37 CET] <k_sze> notice it's negative 270
[01:52:44 CET] <k_sze> 90 and -90 don't do anything.
[01:53:15 CET] <dreamp> JEEB: do you think it's useful, beyond myself, to write this kind of tutorial? (that is basically explaining some of the /examples found in trunk)
[01:54:03 CET] <furq> k_sze: 180 doesn't work here
[01:54:09 CET] <furq> maybe it got fixed in 3.4
[01:54:30 CET] <furq> 90 definitely used to work, so there's obviously some kind of regression
[01:54:42 CET] <k_sze> The whole semantic is a mess.
[01:54:53 CET] <furq> paste the exact command so i can test properly
[01:54:54 CET] <k_sze> I mean, why -270??
[01:54:57 CET] <furq> yeah that's stupid
[01:55:23 CET] <k_sze> ffmpeg -i IMG_7342.MOV -c:v copy -an -metadata:s:v rotate=-270 -y IMG_7342_rotated.MOV
[01:58:00 CET] <furq> lol wtf
[01:58:40 CET] <k_sze> What's the tag:rotate in your original video?
[01:59:40 CET] <k_sze> I wish ffmpeg had a *relative* rotation command line option.
[01:59:47 CET] <furq> http://vpaste.net/TPW0U
[01:59:58 CET] <furq> also ffprobe shows no rotate metadata in either
[02:00:18 CET] <furq> even though i'm sure ffprobe shows rotate in side data
[02:00:42 CET] <k_sze> That's odd.
[02:00:54 CET] <furq> i checked with mpv as well and the first one plays right side up
[02:01:21 CET] <furq> and yeah, ffmpeg 3.2 creates the first file with the correct metadata
[02:01:26 CET] <furq> so that's all going to be a lot of fun for someone to figure out
[02:05:05 CET] <k_sze> Hmm, is it because you don't get rotate metadata when you stream from a live source?
[02:05:32 CET] <k_sze> So ffmpeg has no way to figure out what's the correct way to interpret the rotation, and just gives up?
[02:06:00 CET] <k_sze> Maybe the best you can do is to use an actual rotation filter step?
[02:06:07 CET] <k_sze> :(
[02:11:21 CET] <k_sze> Anyway, time to go to work.
[02:12:38 CET] <liyou> no time to work
[02:15:01 CET] <Cracki> filter:v setpts + c:v copy, possible?
[02:15:13 CET] <furq> no
[02:15:18 CET] <furq> you can't filter without reencoding
[02:15:23 CET] <Cracki> got an avi with no timestamps (fixed frame rate)
[02:15:41 CET] <Cracki> [mp4 @ 0000000002fc9980] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[02:15:53 CET] <furq> if you want to change between cbr framerates you can sometimes do that by demuxing the ES and remuxing with -r before -i
[02:15:59 CET] <Cracki> (vdub avi with x264 content)
[02:16:02 CET] <furq> ^
[02:16:06 CET] <Cracki> ic
[02:16:26 CET] <furq> ffmpeg -i foo.avi -c:v copy -f h264 - | ffmpeg -r 60 -i - ...
[02:16:35 CET] <furq> or er
[02:16:40 CET] <furq> ffmpeg -i foo.avi -c:v copy -f h264 - | ffmpeg -r 60 -f h264 -i - ...
[02:16:44 CET] <Cracki> uh weird
[02:16:49 CET] <Cracki> trying
[02:17:43 CET] <furq> also i meant cfr, not cbr, obviously
[02:18:53 CET] <Cracki> it's still complaining, but maybe that works better now
[03:36:57 CET] <blunaxela> I have ffmpeg sending x11grab to a ffserver feed url, and the ffserver stream is set to codec:mjpeg and format:mjpeg, but when I visit the url in FF it just offers to download. The resulting download is a valid mjpeg, but I want to stream to the browser instead of downloading to a file.
[03:42:04 CET] <blunaxela> Is there a config that I might be missing, or some more manuals that need reading? I'm trying to stream my desktop so that it can be viewed just in the browser. VLC had some interesting options, but none seemed to work very well...
[03:44:41 CET] <furq> you can't play mjpeg in browsers
[03:47:03 CET] <memeka> ffmpeg fails to run in firefox,  it fails when doing avcodec_open2, after avcodec_alloc_context3 is successful .... any idea what can be wrong, how can i debug?
[03:49:47 CET] <blunaxela> hmm, i guess i'll try some other codecs. I went for mjpeg because currently I'm streaming into a v4l2loopback device, then displaying that using https://github.com/ipartola/hawkeye to host an mjpeg stream in the browser. Works very well, but it feels too complicated.
[03:58:55 CET] <dan2wik> mpeg1video goes well with "JSMpeg", a lightweight MPEG-1 decoder written purely in JS.
[04:35:38 CET] <blunaxela> oh, so deeper digging into mjpeg and i realized that i actually want mpjpeg (MIME multipart JPEG). I'll have to look at JSMpeg too though :D
[04:37:29 CET] <zash> The multipart where each part replaces the previous? IIRC common in webcams and such.
[04:55:08 CET] <blunaxela> yeah, that's essentially what I want. It seems to work across different browsers and it's easy to set up. I just need to figure out why the quality is so low
[06:00:31 CET] <kiranbsravi> While converting RTP audio-video streams to webm file using libavformat, how do we set pts, dts and duration value of AVPacket ? Video - VP8 and audio - Opus encoded already in the stream.
[06:30:04 CET] <buu> Why does ffplay change my monitor's vertical frequency from 60 to 30?
[09:33:10 CET] <Bear10> Does anyone know how I could https://pastebin.com/jqQUaKEb achieve the same with one avfoundation input?
[10:03:52 CET] <buu> Why does ffplay change my monitor's vertical frequency from 60 to 30?
[10:08:27 CET] <Brian_> i built ffmpeg for android but i'm getting this error: CANNOT LINK EXECUTABLE: cannot locate symbol "fseeko64" referenced by "/data/videoapp/files/ffmpeg"
[10:08:33 CET] <Brian_> anyone know how to fix this?
[10:11:29 CET] <Bear10> reason I'm not using the dash muxer (if anyone knows how to fix it, i would) is that the audio comes in before video so I thought maybe the pastebin would help solve the issue
[10:12:09 CET] <Brian_> does somebody know how to build ffmpeg with file32 instead of 64?
[11:44:37 CET] <AntHan> hi guys, first time here ... can someone help me please to do the most basic thing of streaming an MP4 file to a udp address ?
[11:50:00 CET] Last message repeated 1 time(s).
[11:55:57 CET] <Bear10> hey AntHan i'm not an expert on ffmpeg or even remotely good, but i think in this case it's something along the lines of ffmpeg -re -i {yourmp4file} udp://address
[11:56:27 CET] <Bear10> a simple example of it should be in the docs
[12:02:03 CET] <jkqxz> Brian_:  ffmpeg always builds with _FILE_OFFSET_BITS=64 because files bigger than 2GB are common and building with 32 will cause random data loss.
[12:02:40 CET] <jkqxz> For Android problems around that, see <https://android.googlesource.com/platform/bionic/+/master/docs/32-bit-abi.md>.  Building with a newer API version should work.
[12:03:01 CET] <AntHan> ok thanx i will look into it
[12:04:04 CET] <jkqxz> Brian_:  If you are /really/ sure that you will /never/ see a file bigger than 2GB, you could edit configure to remove the setting.  (Definitely not recommended.)
[12:06:12 CET] <Brian_> jkqxz: Thanks. I downgraded to ndk14 because I really need ffmpeg to work on older API versions
[12:34:45 CET] <ANTHAN_> where can i get ffserver from ??
[12:35:13 CET] <AntHan> i cant find it in the ffmpeg web page
[12:35:23 CET] <BtbN> ffserver is dead, don't use it
[12:40:17 CET] <minru> Hi, I have a feeling that something is wrong with the PTS/DTS counters in ffmpeg could someone check this log https://pastebin.com/9gKmfqek
[12:42:07 CET] <BtbN> something is wrong with the timestamps in your input.
[12:43:32 CET] <minru> this happens for all inputs after more than one day
[12:44:13 CET] <BtbN> The source does not properly handle timestamp overflows then
[12:44:17 CET] <minru> I use -copyts option, probably this is the key
[12:47:43 CET] <minru> this time there was a mystical current DTS of ~14000000000 in the logs, and then it changed to the source DTS
[12:47:47 CET] <minru> and all stopped
[12:49:08 CET] <BtbN> happens when the timestamp overflows and start from the beginning again
[12:49:22 CET] <BtbN> a bit unusual for that to happen daily though
[12:52:55 CET] <Bear10> i can't figure out for the life of me why when using HLS and/or DASH I start getting audio desyncs; with HLS it happens gradually, with DASH it's immediate
[12:59:47 CET] <minru> BtbN: when I use the -copyts option, ffmpeg takes the source's current pts/dts values and keeps increasing them; even if the source rolls over, the counters inside ffmpeg keep increasing, as I understand from the log. In the output those values are replaced by input pts/dts values and all works until the counter inside ffmpeg rolls over at a mystical 14xxxxxxxxx... maybe this is the start PTS value + the max value
[12:59:47 CET] <minru> of PTS :D
[13:02:36 CET] <minru> who can expain how -copyts is working without looking into to the source :)?
[13:10:58 CET] <minru> with -copyts, the transcoding of a live stream will break 100% after one day and a couple of hours...
[13:14:11 CET] <DHE> 26.5 hours of mpegts?
[13:14:43 CET] <minru> very close to this
[13:15:09 CET] <minru> it looks like a 33-bit counter
[13:15:53 CET] <DHE> mpegts specifically is a 33 bit counter running at 90 kHz
[13:17:15 CET] <minru> yes, but when I start with -copyts the counters can already be near the end, and they roll over without any problem, I see it in input and output
[13:18:06 CET] <DHE> I've seen this. there's a logic error in ffmpeg. it handles wraparound but in a crude way. as it approaches the original value from the start the wraparound handling code suddenly thinks there's nothing to do and that breaks it
[13:18:25 CET] <minru> but anyway it always ends with such a log https://pastebin.com/9gKmfqek after those 26.5 hours
[13:19:43 CET] <JEEB> DHE: I earlier noted that I might make a thing in the mpeg-ts demuxer that handles wrapping in the demuxer or so if an option is set
[13:20:51 CET] <DHE> it's not. I've checked
[13:21:12 CET] <DHE> ffmpeg itself handles the case where wrap-around occurs by examining the fact that the mpegts demuxer reports a timestamp width of 33 bits
[13:24:02 CET] <DHE> but the check is (currentpts < initialpts) which breaks down once you return to your initial starting point regardless of what that PTS value is
[13:24:48 CET] <JEEB> DHE: yea I know how it currently works
[13:25:02 CET] <JEEB> I mentioned on -devel before that the demuxer should just be effing fixed
[13:25:03 CET] <minru> have you checked my log? there is DTS values >14000000000, but I don't see them in the output stream with ffprobe, they a different and almost the same as in input stream
[13:25:21 CET] <JEEB> or well, some people want the original timestamps so it has to be an option
[13:25:37 CET] <JEEB> but I don't want as a lavf user to have to play game with the darn timestamps :V
[13:25:57 CET] <JEEB> I want them to be what they should be parsed as in the context of MPEG-TS
[13:29:12 CET] <minru> JEEB: I want original timestamps because when someone makes some changes to the streams in the encoder of the source, ffmpeg is losing sync of the streams
[13:31:30 CET] <ayum> hi, anyone knows if ffmpeg supports the jetson tx2 card for transcoding? I searched on google, but it seems I can only use gstreamer to utilize the jetson board for transcoding
[13:32:12 CET] <sfan5> unless that card provides a vaapi or vdpau interface probably not
[13:32:13 CET] <minru> I have checked it: if I use -copyts and delete the video or audio stream from the mpeg-ts, and then add it again, then ffmpeg handles it well.. without it, the sync is lost 100%
[13:33:49 CET] <JEEB> minru: well original timestamps but do you need them to be monotonically rising or not. in my case I want them to be darn rising >:| in some cases you just want the value from the demuxer which I think libavformat currently really doesn't let you get (which is then one that overflows)
[13:39:35 CET] <minru> JEEB: of course I don't need them, I just want to get sane output, even if there are some changes or losses in the input
[13:42:07 CET] <minru> it is why I'm looking for workaround, because ffmpeg doesn't handle it well, or I just don't know which options can help in such case
[14:03:10 CET] <maralago> I am working on building an application that will broadcast a live stream from an IP camera that I have to clients using my site. The video feed I am receiving is RTSP and I am trying to decide on whether to broadcast the video to users with DASH, HLS, or convert to base64 and send over websockets. Does anyone have any good materials that they could point me to to get a better handle on architectures used for this type of problem?
[14:19:45 CET] <ayum> @maralago, I am using HLS
[14:20:24 CET] <ayum> @maralago, you can use videojs library easy to integrate HLS supports. and also you can use any website as backend to serve m3u8 and mpegts files
[14:28:11 CET] <maralago> Then you use ffmpeg to generate the m3u8 index files?
[16:50:58 CET] <k_sze> I remember there's supposed to be a command line option to regenerate the dts/pts?
[16:51:06 CET] <k_sze> Was it a vfilter? I forget.
[16:51:25 CET] <JEEB> yea there is a setpts filter
[16:51:40 CET] <sfan5> -fflags genpts ?
[16:53:59 CET] <k_sze> I'm trying to write a python wrapper script to make my life easier in cutting a bunch of MOV files at certain start and end timestamps.
[16:54:31 CET] <JEEB> sfan5: right that thing
[16:54:46 CET] <k_sze> But it seems that using `-to HH:MM:SS -c:v copy -copyts` is quite broken.
[16:56:15 CET] <k_sze> e.g. ffmpeg -ss 00:30:00 -i myvideo.MOV -to 00:30:20 -c:v copy -copyts myvideo_out.MOV
[16:56:41 CET] <k_sze> This is supposed to give me a clip of 20 seconds, starting from 00:30:00 in the original file, right?
[16:57:05 CET] <sfan5> not necessarily
[16:57:23 CET] <sfan5> because you're doing this without reencoding ffmpeg can only cut the video at certain points
[16:57:37 CET] <k_sze> Sure, at keyframes
[16:57:46 CET] <k_sze> and that's perfectly acceptable for me.
[16:57:58 CET] <k_sze> However, the output file is worse than that.
[16:58:16 CET] <k_sze> QuickTime just shows me a completely blank video.
[16:59:25 CET] <dystopia_> i don't think the -copyts is needed
[17:00:15 CET] <k_sze> dystopia_: it is, otherwise ffmpeg tries to make a clip that is 30 minutes 20 second long.
[17:00:32 CET] <sfan5> what does ffprobe say about the resulting file?
[17:00:54 CET] <dystopia_> ffmpeg -ss timpestamp -t clipduration -i inputfile -acodec copy -vcodec copy -sn output.mov
[17:01:32 CET] <k_sze> dystopia_: I know I can use -t instead of -to.
[17:01:58 CET] <k_sze> But I don't want to have to calculate the clip length myself.
[17:02:34 CET] <k_sze> ffmpeg was supposed to be able to just use the original timestamps to determine the clip length, if I use -to with -copyts.
[17:04:24 CET] <k_sze> sfan5: what would you like me to get from ffprobe?
[17:04:34 CET] <sfan5> dunno, everything?
[17:08:25 CET] <k_sze> I guess you're most interested in the pts/dts of the frames.
[17:08:52 CET] <k_sze> sfan5: http://vpaste.net/AazIo
[17:09:01 CET] <k_sze> That's the first frame, according to ffprobe
[17:10:47 CET] <sfan5> shouldn't pkt_{p,d}ts_time be starting at 0?
[17:12:00 CET] <k_sze> sfan5: they don't, because of -copyts
[17:12:13 CET] <sfan5> hm right
[17:12:29 CET] <k_sze> -fflags +genpts also seems to do nothing.
[17:13:49 CET] <k_sze> I guess my next best option is to calculate the clip length in Python.
[17:14:04 CET] <k_sze> and use -t instead of -to.
[17:15:17 CET] <dystopia_> if you're building a python script
[17:15:26 CET] <dystopia_> you can get it to do the maffs for you
[17:29:57 CET] <MercadesBendz> okay um i can't figure out how to fix this build errors which includes libavformat/avformat.h and libavcodec/avcodec.h
[17:46:50 CET] <k_sze> yay, got my python script working.
[17:47:10 CET] <k_sze> Now I don't need to wrestle with the ffmpeg command line again.
[17:48:26 CET] <MercadesBendz> here are the exact errors: https://pastebin.com/tjNagBUt
[18:11:58 CET] <leandromoreira> Hi there! how are you? I'm learning and writing (WIP) about the ffmpeg libav library, and I'd like some feedback on it. Mostly concerning how easy it is, how correct it is (even if it's simplified)... anyway, any feedback will be welcomed! Thanks https://github.com/leandromoreira/ffmpeg-libav-tutorial#learn-ffmpeg-libav-the-hard-way
[19:00:57 CET] <neymarjr> What do TBN and TBC mean in the FFmpeg output? (ex: "25 fps, 12800 tbn, 25 tbc") What is, in a few words, the time_base? How is it used? Where?
[19:01:12 CET] <neymarjr> I saw the following message too: "Setting 'time_base' to value '1/12800'"
[19:08:14 CET] <BtbN> One is the codec time base, and the other one the container one iirc
[19:32:12 CET] <saml> given a video file, how can I know exact video length?
[19:32:45 CET] <howudodat> I have an mp4 that is missing about 2 minutes of video.  I'd like to simply insert 2 minutes of black screen at 16:42.24 and preserve the audio track and encoding, etc
[19:43:53 CET] <MercadesBendz> is anyone home?
[20:03:50 CET] <DHE> intermittently. that's what's great about IRC
[20:23:36 CET] <ray_> hi
[21:04:53 CET] <Cracki> howudodat, so silent audio too?
[21:05:46 CET] <Cracki> as for inserting... if that's exactly a keyframe, you might be in luck. if it's not, even in principle, some frames need reencoding.
[21:06:42 CET] <Cracki> afaik ffmpeg doesn't much support you in messing around with that. you can split the video track in the right places, reencode the GOPs that need it, then concat back together
[21:30:41 CET] <ibisr> Hi all, I'm running ffmpeg in a batch process and I run into some occasional error when streaming over http that maybe you can help me with
[21:31:14 CET] <ibisr> The first issue is that when I get errors like: [tls @ 0x5af4000] An unexpected TLS packet was received.
[21:31:14 CET] <ibisr> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x45c4360] stream 1, offset 0xb36a279f: partial file
[21:31:37 CET] <ibisr> I'd like the process to exit with nonzero status code
[21:32:41 CET] <ibisr> The second is that on http errors like this, I'd like it to attempt to retry requests, if possible
[22:27:44 CET] <saml> how do I chunk a large video and encode each chunk?
[22:34:42 CET] <ChocolateArmpits> saml, use segmenter format
[22:36:16 CET] <SortaCore> It's weird, I've been going over the ffmpeg code for QSV, and there doesn't appear to be a way to tell it to only use the hardware implementation
[22:38:32 CET] <kepstin> the intel qsv interface does have a flag to request using only hardware implementations; ffmpeg doesn't currently support setting this.
[22:38:43 CET] <SortaCore> yea, I found it in the code
[22:41:46 CET] <kepstin> note that if you're using linux, you can use qsv hardware via the vaapi interface with stock kernel, no proprietary software, and it doesn't implement software fallbacks.
[22:42:06 CET] <SortaCore> line 252 in qsv.c, ff_qsv_init_internal_session
[22:42:17 CET] <SortaCore> atm it's Windows
[22:42:52 CET] <SortaCore> I switched D3D11 with D3D9, since it seems to be using D3D9 by default without checking if D3D11 is available
[22:44:13 CET] <saml> ChocolateArmpits, thanks. i'm trying to parallelize video encoding of videos that are large and long: 100GB and 5 hours playback.
[22:45:39 CET] <kepstin> if seeking in the file is accurate and you have enough IO, you could just write a script that encodes a few chunks with specified timestamps, and concat the result after
[22:46:56 CET] <kepstin> using the segmenter muxer with -c copy or something like that then streaming the generated chunks to other machines to encode is also an option, obviously would require some custom ffmpeg wrapper code.
[22:48:05 CET] <saml> wow man  ffmpeg -i yolo.mp4 yolo.m3u8
[22:56:10 CET] <ChocolateArmpits> kepstin, I had all kinds of problems doing this when the input wasn't specifically intra-frame coded; it may look good on paper, but it doesn't work 100% with ffmpeg through seeking
[23:03:47 CET] <SortaCore> is there a way to increase reorder buffer at decoding start?
[23:04:12 CET] <ChocolateArmpits> SortaCore, probesize ?
[23:05:02 CET] <kepstin> SortaCore: not sure what you mean - are you talking about rtp over udp or something like that?
[23:05:20 CET] <SortaCore> rtsp yea
[23:05:41 CET] <SortaCore> I'm currently doing it with tcp_transport setting or whatever it's called, but it keeps freaking out
[23:05:56 CET] <SortaCore> at some point down the picture a line of pixels is stretched
[23:06:20 CET] <SortaCore> stretched downwards so all the parts below are indecipherable
[23:06:33 CET] <kepstin> well, reordering isn't needed with tcp, since tcp guarantees in-order delivery
[23:06:41 CET] <kepstin> weird that you'd be seeing something like that
[23:06:49 CET] <SortaCore> it does come up that it increases reorder buffer to 1
[23:07:27 CET] <kepstin> but yeah, if you run "ffmpeg -h demuxer=rtsp", you'll see that there is an option named "reorder_queue_size" which you could try playing with
[23:08:34 CET] <SortaCore> https://pastebin.com/raw/fRnKdnnU
[23:08:56 CET] <SortaCore> it's acting like packets are dropped, but I don't recall this camera dropping them, and like you said, tcp guarantees
[23:13:53 CET] <SortaCore> I shall investigate options tho
[00:00:00 CET] --- Sat Nov 11 2017


More information about the Ffmpeg-devel-irc mailing list