[Ffmpeg-devel-irc] ffmpeg.log.20170928

burek burek021 at gmail.com
Fri Sep 29 03:05:01 EEST 2017


[00:28:35 CEST] <Hopper_> BtbN: Thanks, so I would have C/read.txt and d/read.txt, and I delete C/read.txt then move d/read.txt to /?
[00:28:42 CEST] <Hopper_> c/
[00:50:13 CEST] <kepstin> Hopper_: what os?
[01:08:45 CEST] <Hopper_> kepstin: w10
[01:26:51 CEST] <relaxed> Hopper_: if you know the timing, maybe ass subs are the way to go
[01:27:23 CEST] <Hopper_> What timing?
[01:29:38 CEST] <relaxed> if you know beforehand when the text should change, you can orchestrate it using ass subs
[01:29:57 CEST] <relaxed> based on timestamps of the video
[01:30:55 CEST] <Hopper_> No, it is a live stream, and there is a serial input creating data that I want overlaid on the video.
[01:31:08 CEST] <relaxed> ok
[01:31:58 CEST] <Hopper_> relaxed: So stop looking into ASS Subs?
[01:33:03 CEST] <relaxed> indeed. maybe write a script that backgrounds ffmpeg and sends text to the drawtext file
[01:33:25 CEST] <JEEB> for dynamic live stuff maybe casparcg or something?
[01:33:32 CEST] <JEEB> used in norwegian public broadcasting, IIRC
[01:33:37 CEST] <JEEB> for overlays etc
[01:34:17 CEST] <Hopper_> JEEB: Thanks, looking at it now.
[01:34:19 CEST] <relaxed> only JEEB will break out norwegian public broadcasting
[01:34:49 CEST] <JEEB> well it's pretty obvious this person will not want to write his own app to dynamically create overlays
[01:34:59 CEST] <JEEB> and the one thing I know that seems to be working for live stuff is casparcg
[01:35:03 CEST] <JEEB> overkill? maybe
[01:35:13 CEST] <Hopper_> JEEB: Give me some credit here!
[01:35:35 CEST] <JEEB> :D
[01:35:43 CEST] <Hopper_> casparcg seems to have too high of system requirements.
[01:35:53 CEST] <furq> this seems like a good opportunity for sendcmd
[01:35:57 CEST] <furq> if only anyone actually knew how to use it
[01:36:01 CEST] <JEEB> lol
[01:36:10 CEST] <relaxed> JEEB: no, I just find you're uncanny experience funny
[01:36:18 CEST] <relaxed> your!
[01:36:34 CEST] <Hopper_> My plan is to have a python program generate a .txt file of the most recent complete serial data, and have ffmpeg just keep using that file so it will display all updated info.
[01:36:47 CEST] <JEEB> ffmpeg.c is scary
[01:36:52 CEST] <JEEB> so much can be done with it
[01:36:56 CEST] <JEEB> until it is no longer possible :3
[01:37:16 CEST] <JEEB> as in, you suddenly hit a brick wall because of design of ffmpeg.c or so
[01:37:51 CEST] <JEEB> (and yes, I've done the mistake myself as well at times where I've based something on ffmpeg.c and then extending that becomes not-really-possible easily)
[01:38:00 CEST] <JEEB> (and then it's API client writing time)
[01:38:11 CEST] <JEEB> fun times dot jaypeg
[01:39:14 CEST] <Hopper_> JEEB, relaxed:  Think I can do what I'm planning on?
[01:39:33 CEST] <JEEB> API-wise yes
[01:39:37 CEST] <JEEB> ffmpeg.c, no idea
[01:39:46 CEST] <furq> that should work fine with drawtext
[01:39:51 CEST] <relaxed> scripting, yes
[01:40:33 CEST] <furq> write a line to tmpfile.NamedTemporaryFile and then move it to the path you gave ffmpeg
[01:40:41 CEST] <furq> as long as they're on the same fs that should be fine
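The drawtext approach discussed above can be sketched as follows. This is not from the log: the file name `overlay.txt`, the font/position options, and the input/output are all placeholder assumptions; the key idea is drawtext's `textfile` and `reload=1` options, which make ffmpeg re-read the text file before each frame.

```python
# A sketch (names are placeholders, not from the log): point drawtext at
# a text file and set reload=1 so the file is re-read before each frame,
# making edits to overlay.txt show up in the live output.
overlay_path = "overlay.txt"
vf = f"drawtext=textfile={overlay_path}:reload=1:fontsize=24:x=10:y=10"

cmd = [
    "ffmpeg",
    "-i", "input_stream",  # placeholder input
    "-vf", vf,
    "-f", "null", "-",     # placeholder output; command is only built, not run
]
print(" ".join(cmd))
```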
[01:41:02 CEST] <JEEB> rename() magick
[01:41:03 CEST] <Hopper_> And if ffmpeg can't find it, will it just ditch the text for that frame, or stop functioning completely?
[01:41:28 CEST] <furq> no idea
[01:41:34 CEST] <furq> but it should always be able to find it
[01:41:36 CEST] <relaxed> just cat text to a file
[01:42:17 CEST] <relaxed> echo "new text" > drawtext.txt
[01:42:18 CEST] <furq> yeah python is overkill if you're not already using it
[01:42:28 CEST] <furq> also don't do that. that will break
[01:42:38 CEST] Action: relaxed stabs furq 
[01:42:44 CEST] <relaxed> will it?
[01:42:50 CEST] <furq> yeah you need to write to a temp file and do an atomic replace
[01:43:16 CEST] <Hopper_> If not python, what would you suggest?
[01:43:19 CEST] <relaxed> that's no fun
[01:43:50 CEST] <furq> so more like tmp=$(mktemp); echo "foo" > "$tmp"; mv "$tmp" "$drawtext"
[01:44:07 CEST] <furq> but if you're on windows it's probably less hassle to just use python
[01:44:40 CEST] <relaxed> or awk
[01:45:01 CEST] <Hopper_> This build is on windows.
[01:45:34 CEST] <furq> i assume if it's windows you'll need something to read from serial as well
[01:45:46 CEST] <Hopper_> Ya, that's why I was going to use python.
[01:45:53 CEST] <furq> right
[01:45:58 CEST] <Hopper_> It can do everything with the same program.
[01:46:06 CEST] <furq> well yeah just do the thing i just said except in python
[01:46:24 CEST] <furq> python has tempfile.mkstemp and stuff
[01:46:45 CEST] <Hopper_> That bit is beyond me, have a link?
[01:47:43 CEST] <relaxed> what is coming off serial?
[01:48:14 CEST] <Hopper_> a string that will end up being a CSV.
[01:48:39 CEST] <relaxed> just curious, can you be more specific?
[01:49:07 CEST] <furq> https://docs.python.org/2/library/tempfile.html
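The tempfile-and-rename pattern furq describes could look like this in Python. A minimal sketch, assuming a hypothetical `overlay.txt` target: write to a temp file in the same directory (so the rename stays on one filesystem), then `os.replace` over the target, which is atomic on both POSIX and Windows.

```python
import os
import tempfile

def update_overlay(text, path="overlay.txt"):
    """Atomically replace the drawtext text file.

    Write to a temp file in the same directory, then rename over the
    target, so ffmpeg never sees a half-written file.  The file name
    is a placeholder, not from the log.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

update_overlay("altitude: 120 m")
```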
[01:49:12 CEST] <Hopper_> Sure, this is an airborne device that will be logging the data, but also sending the data over a live video stream.
[01:49:15 CEST] <Hopper_> furq: Thanks
[01:49:25 CEST] <relaxed> a drone?
[01:49:41 CEST] <Hopper_> I shouldn't get into it.
[01:50:09 CEST] <Hopper_> But when the project is live, you will all be able to see the result, I'll post it.
[01:50:17 CEST] <relaxed> we need to go back in time and take out Hopper_
[01:56:03 CEST] <Hopper_> Okay, well tomorrow we will get started on updating text overlays in FFMPEG.
[01:57:55 CEST] <relaxed> I look forward to the results
[03:42:00 CEST] <acos> Sup all
[04:30:42 CEST] <Ch3ck> I want to demux a decoded data stream
[04:33:39 CEST] <Ch3ck> Using the ffmpeg binary I run this this command "ffmpeg -i -ar 44100 -ac 2 -ab  -f mp3 out.mp3 < data_stream"
[04:35:10 CEST] <Ch3ck> Which libraries do I look at for this?
[04:35:37 CEST] <furq> i'm not really sure what you're asking, but that command won't demux, it'll reencode
[04:35:43 CEST] <furq> once it's fixed, anyway
[04:36:01 CEST] <Ch3ck> I want to implement this  in Go using cgo
[04:36:09 CEST] <Ch3ck> I don't want to call ffmpeg directly in the code
[04:36:25 CEST] <furq> you want libavformat then
[04:36:44 CEST] <Ch3ck> I wish to know which functions to look at in there
[04:36:52 CEST] <Ch3ck> furq, do you have a sample command
[04:36:59 CEST] <furq> for ffmpeg?
[04:37:05 CEST] <furq> it depends what data_stream is
[04:37:07 CEST] <Ch3ck> using libavformat
[04:37:15 CEST] <Ch3ck> it's a map
[04:37:20 CEST] <Ch3ck> map of strings
[04:37:49 CEST] <furq> ??
[04:38:12 CEST] <furq> that doesn't sound like something ffmpeg can demux
[04:38:38 CEST] <Ch3ck> furq i wish to convert this decoded byte stream to mp3
[04:38:53 CEST] <Ch3ck> I was thinking of using this: https://github.com/Ch3ck/goav
[04:39:44 CEST] <furq> https://github.com/viert/lame
[04:39:48 CEST] <furq> if you literally just want an mp3
[04:40:07 CEST] <furq> the ffmpeg libs are massive and bindings to them are usually outdated, incomplete or both
[04:40:36 CEST] <Ch3ck> furq, thanks for the link \
[04:41:14 CEST] <Ch3ck> It'll help me greatly
[05:24:15 CEST] <shank> Hello, I'm trying to install ffmpeg on a mac and I get the "yasm/nasm not found or too old" error. I have the latest nasm version installed. Is there anything else I should try?
[05:25:06 CEST] <shank> config.log has the following output
[05:25:08 CEST] <shank> asm: fatal: unrecognised output format `macho64' - use -hf for a list type `nasm -h' for help yasm/nasm not found or too old. Use --disable-yasm for a crippled build.
[05:28:11 CEST] <Ch3ck> furq, I can't seem to find libmp3lame0
[05:28:25 CEST] <Ch3ck> Any ideas where I can find it?
[06:58:26 CEST] <andrsussa> hello, I am trying to pipe input data from another program into ffmpeg, is there anything I should take into account if the data I'm piping is an mp4 file?
[15:40:05 CEST] <zort> https://slexy.org/view/s2w0hAtEPA ffmpeg is telling me "Unsafe file name", so I added "-safe 0" as they say to do, but it says "Option safe not found."
[15:42:49 CEST] <fx1592345> Hey, I'm encoding video from my webcam into vp8/webm and then streaming it via http into my own nodejs application, which takes the chunks and sends them to a browser over a websocket. My problem: when the browser first connects to the websocket and then the encoding is started, everything works; when first starting the encoding and then opening the websocket, the video in the browser never starts?
[15:43:50 CEST] <fx1592345> I'm already taking the first chunk/cluster with the header/information of the webm and caching it in my nodejs application, which sends it out on a new ws connection as the first packet, so the client theoretically should know how to decode following chunks?
[15:45:03 CEST] <fx1592345> Restarting ffmpeg on an open connection results in a working video, I can see ffmpeg retransmits the header in wireshark and then normal webm clusters follow
[16:08:47 CEST] <chuckleplant> Hi guys, if the SPS info in my H264 stream does not contain timing_info... how does ffmpeg obtain the time_scale? I'm trying to do the same with my H264 parser for an IP camera
[16:13:54 CEST] <Mavrik> chuckleplant, well, that's essentially an external information that comes from the source that grabbed it
[16:14:02 CEST] <Mavrik> Your frames need to have timestamps
[16:14:07 CEST] <Mavrik> And by that they imply timebase as well
[16:14:40 CEST] <chuckleplant> Mavrik, could you please elaborate on what the source that grabbed it is?
[16:14:47 CEST] <Mavrik> in your case, IP camera.
[16:14:56 CEST] <Mavrik> Your IP camera is sending frames with timestamps
[16:15:03 CEST] <Mavrik> and they need some kind of timebase just by existing :)
[16:15:38 CEST] <chuckleplant> Yes, I do get timestamps, but the time_base of the codecContext is not set
[16:15:55 CEST] <chuckleplant> As far as I know, this value is set via the extradata info, in which I feed the SPS / PPS header
[16:16:03 CEST] <Mavrik> well, you need to set it to the time_base your camera uses then.
[16:16:15 CEST] <Mavrik> I don't know what kind of protocol are you using
[16:16:23 CEST] <chuckleplant> RTSP RTP H264
[16:16:24 CEST] <Mavrik> But if timebase isn't set, you'll have to set it to fit whatever your camera is sending
[16:17:28 CEST] <chuckleplant> I've also tried setting timebase manually, but even in that case I don't know the framerate and I don't get timestamps in my decoded AVFrames
[16:36:51 CEST] <zort> solved my "unsafe file name" problem by using relative paths instead of absolute
[17:07:21 CEST] <chuckleplant> On playback synchronization of H264 streams... As I'm using RTP, I found that: "The RTP timestamp is set to the sampling timestamp of the content.
[17:07:22 CEST] <chuckleplant>       A 90 kHz clock rate MUST be used"
[17:07:48 CEST] <chuckleplant> So, my time_base must be 1/90000, which matches what ffprobe was telling me
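The 90 kHz arithmetic is straightforward; a sketch (the timestamp values are made-up examples, not from the log) of converting an RTP timestamp delta into seconds, including the 32-bit wraparound that RTP timestamps are subject to:

```python
RTP_CLOCK = 90_000  # 90 kHz clock mandated by RFC 6184 for H.264 over RTP

def rtp_delta_seconds(ts_prev, ts_now):
    """Seconds between two RTP timestamps, handling 32-bit wraparound."""
    delta = (ts_now - ts_prev) & 0xFFFFFFFF
    return delta / RTP_CLOCK

# e.g. a 30 fps camera advances the RTP timestamp by 3000 per frame:
per_frame = rtp_delta_seconds(0, 3000)
```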
[17:07:57 CEST] <titbang> you need to set it to 60Hz
[17:08:15 CEST] <chuckleplant> titbang, why?
[17:08:54 CEST] <chuckleplant> I got that from the RTP payload spec: https://tools.ietf.org/html/rfc6184
[17:25:02 CEST] <iive> chuckleplant: i think he is just trolling you
[17:26:30 CEST] <chuckleplant> iive, thanks... anyways, I'll send my questions to the mailing list
[17:27:19 CEST] <iive> the timebase is usually the inverse of the fps, so 1/60 is ok if you are at 60fps
[17:27:31 CEST] <iive> however 90kHz is what mpeg-ts is using
[17:27:57 CEST] <iive> there is a mode that extends it to 27MHz
[17:28:29 CEST] <acos> Haha trolling on the internet.
[17:42:26 CEST] <Loriker> 90kHz is used for RTP when encapsulating MPEG-TS
[17:42:59 CEST] <Loriker> 27 MHz is used in MPEG-TS
[17:43:31 CEST] <Loriker> no 60 Hz
[17:45:36 CEST] <bencoh> actually depends on what you're referring to, but PCR is 27mhz-based, yes
[17:57:47 CEST] <Loriker> exactly
[18:48:31 CEST] <Ch3ck> furq: I have some quick issues with lame
[18:51:05 CEST] <Ch3ck> When I take a json response body, pass it to the lame writer, and encode it with the required settings, the output I get is white noise
[18:51:40 CEST] <Ch3ck> I can't actually listen to the read sound waves. I don't know if there's something I might be doing wrong
[23:55:50 CEST] <SpeakerToMeat> Hello good people.
[23:56:27 CEST] <SpeakerToMeat> Question, if I have an audio file that has 12 channels, but I want to treat the first 6 as 5.1 for using the "-ac 2" automatic 5.1 -> stereo downmixer, is there any way to do so?
[23:57:40 CEST] <JEEB> if you're going to be remapping the audio anyways you might as well do the to-stereo thing in the same af
[23:58:28 CEST] <SpeakerToMeat> Hmmm yeah I could use mapping, but then I need to specify the value of each channel or can I use the built in mapper in a map chain?
[23:58:42 CEST] <SpeakerToMeat> af chain not map chain
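JEEB's suggestion can be sketched in one `-af` chain: use the `channelmap` filter to pick the first six channels and label them 5.1, then let `-ac 2` do the default downmix. The file names (and the assumption that channels 0-5 are the 5.1 mix) are placeholders, not from the log; the argv is only built here, not executed.

```python
# Take channels 0-5 of a 12-channel input, declare them 5.1, then let
# ffmpeg's default downmixer (-ac 2) fold them to stereo.
# Paths are placeholders, not from the log.
af = "channelmap=map=0|1|2|3|4|5:channel_layout=5.1"
cmd = ["ffmpeg", "-i", "in12ch.wav", "-af", af, "-ac", "2", "out_stereo.wav"]
print(" ".join(cmd))
```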
[00:00:00 CEST] --- Fri Sep 29 2017


More information about the Ffmpeg-devel-irc mailing list