burek021 at gmail.com
Sun Jan 29 03:05:01 EET 2017
[00:01:07 CET] <furq> wtf
[00:01:30 CET] <furq> i can fade the webm into testsrc but i can't fade testsrc into the webm
[00:01:37 CET] <furq> it just starts the fade immediately
[00:01:48 CET] <furq> this is baffling
[00:01:56 CET] <Nune> i...er...sorry :?
[00:02:13 CET] <Nune> source images: https://drive.google.com/open?id=0B3kkDhHHsCMhSGpjLW1MVzFFN2M (150 MB zip)
[00:02:34 CET] <Nune> command to produce dialog_line_cropped.webm: ffmpeg -framerate 60 -i dialog_line_%03d.png -vf cropdetect dialog_line_cropped.webm
[00:02:52 CET] <furq> ok that setpts is wrong but idk how
[00:05:36 CET] <phillipk_> The following command is intended to layer the audio (at specific times) on top of the audio for the first input (video_with_audio.ts) but the result is shifting (by a delay of about 20 seconds) the audio in that first input.
[00:05:37 CET] <phillipk_> http://pastebin.com/uN5nnSCr
[00:06:12 CET] <phillipk_> Do I need to put that input (video_with_audio.ts) into the filter_complex amix?
[00:19:40 CET] <furq> Nune: i'm a massive idiot
[00:19:43 CET] <furq> http://vpaste.net/0hZKB
[00:19:54 CET] <Nune> hush! whats different
[00:20:05 CET] <furq> setpts=PTS+(5/TB)
[00:20:09 CET] <phillipk_> the issue is that I've got an original (with audio) and I want to overlay the additional audios... that works except in the output, the original's audio is out of synch... it's audio is playing too early (or too fast so it drifts)
[00:20:31 CET] <furq> also it's actually overlaying on the black background now, that was broken before
[00:20:52 CET] <furq> that'll teach me to forget about timebases
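furq's fix, `setpts=PTS+(5/TB)`, works because filter expressions operate in timebase units, not seconds; dividing by `TB` converts a 5-second delay into PTS ticks. A minimal sketch of that arithmetic (the 1/90000 timebase used here is the common MPEG-TS default, assumed for illustration):

```python
from fractions import Fraction

def delay_in_pts_ticks(delay_seconds, timebase):
    """Convert a delay in seconds to PTS ticks, as setpts=PTS+(delay/TB) does."""
    return int(Fraction(delay_seconds) / timebase)

# A 5 second delay at the common MPEG-TS timebase of 1/90000:
ticks = delay_in_pts_ticks(5, Fraction(1, 90000))
print(ticks)  # 450000
```

Adding raw seconds to `PTS` without the `/TB` conversion is exactly the mistake that makes a fade "start immediately": the offset is interpreted as ticks and is negligibly small.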
[00:20:54 CET] <phillipk_> yeah, I think the problem is NOT your map help
[00:20:58 CET] <Nune> looks great furq :D
[00:21:04 CET] <Nune> thank you so much
[00:21:04 CET] <phillipk_> oh, maybe you're talking about Nune's issue.
[00:21:27 CET] <furq> that should work with the image sequences as input too
[00:21:29 CET] <Nune> not to see about not double-compressing with webm
[00:21:32 CET] <furq> yeah
[00:21:33 CET] <Nune> yeah ill try to wrap it all together
[00:22:32 CET] <furq> phillipk_: are those audio sources all mono
[00:22:42 CET] <phillipk_> yeah
[00:22:50 CET] <phillipk_> the layered ones
[00:22:51 CET] <furq> well that's me out of ideas
[00:23:44 CET] <phillipk_> I think I found the issue... maybe I should change [a][b][c]amix=4[out] to instead [a][b][c]amix=3[out] because my map is taking ALL of 0 input and all of "out"
[00:23:55 CET] <furq> yeah i was going to say maybe you need 0:a
[00:24:03 CET] <furq> i figured it would just do the right thing though
[00:24:16 CET] <furq> on the basis that it'd probably just throw an error otherwise
[00:25:16 CET] <furq> if you exclude [0:a] from amix then that audio track will just be discarded
[00:25:25 CET] <phillipk_> Oh,
[00:25:54 CET] <phillipk_> so make the last segment of the filter to be: [0:a][a][b][c]amix=4[out]
[00:25:56 CET] <phillipk_> ?
[00:26:06 CET] <furq> yeah, i doubt that'll help though
[00:26:16 CET] <furq> unless input 0 has multiple audio tracks
[00:26:52 CET] <phillipk_> maybe change: -map 0:v -map "[out]" to instead -map 0 -map "[out]"
[00:27:05 CET] <furq> that'll give you multiple audio tracks in the output
[00:27:17 CET] <phillipk_> what if "out" is just audio?
[00:27:41 CET] <furq> -map 0 will map all the streams from the input unfiltered
[00:27:45 CET] <phillipk_> what's weird is the original code "works" ... it just jacks with the audio in the first input
[00:27:52 CET] <furq> so you'll get 0:v, 0:a, and then [out]
[00:28:28 CET] <furq> maybe the ts has a negative offset on one of the streams or something like that
[00:28:43 CET] <phillipk_> when I play the .ts it plays nicely
[00:28:45 CET] <furq> try demuxing the audio to wav (or whatever) and then use that as an input
[00:28:47 CET] <phillipk_> synched etc.
[00:30:47 CET] <phillipk_> you mean have input 0 be video only, input 1 be the audio from that video... then inputs 2,3,4 (audio only)... then use that input 1 at the beginning of the filter_complex?
[00:30:56 CET] <furq> that probably won't work actually
[00:31:08 CET] <phillipk_> splitting out the audio from the input seems odd.
[00:31:14 CET] <furq> i'd guess the ts has an offset on the audio track which is being discarded by amix
[00:31:22 CET] <furq> but demuxing it would also discard that offset, so that's no good
[00:31:36 CET] <furq> maybe ffprobe -show_streams will show the offset if there is one
[00:31:46 CET] <phillipk_> I create the .ts file but not sure how I could put (or remove) the offset--Okay, I'll look
[00:34:56 CET] <Nune> furq: any way to pipe cropdetect result into a crop command?
[00:35:05 CET] <phillipk_> furq
[00:35:18 CET] <phillipk_> http://pasteboard.co/ri9GPr3tl.png
[00:35:20 CET] <furq> not that i know of
[00:35:46 CET] <Nune> so applying the result cropdetect has to be done manually?
[00:36:11 CET] <furq> phillipk_: is the source audio 1.422422 seconds offset
[00:36:19 CET] <phillipk_> ok
[00:36:39 CET] <furq> that was a question, i don't know that's what that means
[00:37:17 CET] <phillipk_> I don't know how that got in there--I'm just producing the .ts file by concating a bunch of .ts files
[00:37:45 CET] <phillipk_> but, 1.4 seconds off doesn't explain the 25 seconds off (at least after 12 minutes)
[00:38:30 CET] <furq> if it's desyncing and not just delayed then i've got no idea
[00:39:02 CET] <furq> hopefully someone who's better than me with audio knows
[00:41:40 CET] <phillipk_> I'll do some tests, but it seems like it's just off by 25 seconds at at least 2 points in the video...12 minutes and 30 minutes
[00:46:48 CET] <Nune> furq this is working great; just trying to figure out how to automate the auto-cropping
[00:46:55 CET] <Nune> thanks for the help getting it seamless
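There is no built-in way to pipe cropdetect into crop, but the manual step Nune wants to automate can be scripted: run a cropdetect pass, grab the last `crop=W:H:X:Y` value ffmpeg printed to stderr, and feed it to a second pass. A hedged sketch of the parsing step (the sample log line below is made up for illustration):

```python
import re

def last_crop(ffmpeg_stderr: str):
    """Return the last crop=W:H:X:Y string from cropdetect output, or None."""
    matches = re.findall(r"crop=\d+:\d+:\d+:\d+", ffmpeg_stderr)
    return matches[-1] if matches else None

# Illustrative stderr fragment from a cropdetect pass:
log = "[Parsed_cropdetect_0] x1:0 x2:639 y1:48 y2:431 w:640 h:384 crop=640:384:0:48"
print(last_crop(log))  # crop=640:384:0:48
```

The returned string can then be passed straight to a second ffmpeg run as `-vf crop=640:384:0:48`.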
[00:49:14 CET] <mosb3rg> anyone around with experience dumping HLS feeds with a cookie? i have all the values i need, but i'm getting a weird tcp:443 error i haven't seen previously.
[00:49:22 CET] <mosb3rg> its on SiriusXM.com
[00:49:36 CET] <mosb3rg> they're live.. i can dump that, but the on-demand just throws this weird error.
[00:49:55 CET] <mosb3rg> [tcp @ 0x1c57f60] Connection to tcp://:443 failed: Connection refused
[00:50:26 CET] <mosb3rg> its not throwing a forbidden.. 403
[00:50:33 CET] <mosb3rg> so clearly it seems something is missing in the processing.
[00:51:53 CET] <mosb3rg> ahh wait.. i think it might be trying to connect outbound to verify a secret hash phrase perhaps i dunno
[00:52:10 CET] <mosb3rg> can anyone perhaps take a look at the account and see if they are able to manage it
[01:34:45 CET] <NapoleonWils0n> hi all
[01:35:12 CET] <NapoleonWils0n> trying to record a video with closed captions, would -c:s copy be the right flag to use
[01:35:50 CET] <NapoleonWils0n> and do you need to map the streams
[01:36:08 CET] <furq> if this is from dvb or something then closed captions aren't a separate stream so that won't work
[01:36:44 CET] <NapoleonWils0n> right trying to help someone in us where tv channels use closed captions, havent looked at the stream yet
[01:37:22 CET] <furq> -f lavfi -i "movie=input.ts[out0+subcc]"
[01:37:30 CET] <furq> apparently that'll give you a subtitle stream
[01:37:47 CET] <NapoleonWils0n> i have seen that command floating around
[01:37:50 CET] <furq> you'll need a container which supports that subtitle format though
[01:37:59 CET] <NapoleonWils0n> using mkv
[01:38:25 CET] <furq> that's as good a bet as any
[01:38:53 CET] <NapoleonWils0n> thats why i picked it, can dump most things into an mkv
[01:39:53 CET] <NapoleonWils0n> you can use wireshark to find and download subtitles files
[01:40:05 CET] <NapoleonWils0n> if the subs are in external file
[03:12:58 CET] <the_k> anyone ever played with motion detection with ffplay/ffmpeg?
[04:08:44 CET] <xeons> Is lavfi missing from 3.2.2? I don't see it on OS X
[04:09:27 CET] <xeons> ffmpeg -filters | grep "lavfi"
[04:10:36 CET] <furq> lavfi is a format, not a filter
[04:11:00 CET] <furq> or a device, even
[04:11:48 CET] <xeons> ffmpeg -f lavfi -i video.mov -> [lavfi @ 0x7fa7aa800000] No such filter: 'video.mov'
[04:12:06 CET] <furq> well yeah
[04:12:07 CET] <xeons> So I guess it is there
[04:12:22 CET] <furq> it'll be in ffmpeg -devices
[04:15:10 CET] <xeons> ok, thanks for clarifying that. I'm trying to use it to create a 2x2 grid (https://trac.ffmpeg.org/wiki/FilteringGuide#multipleinputoverlayin2x2grid)
[04:16:37 CET] <xeons> Perhaps you can explain what I'm doing wrong, as that doc results in the "No such filter" error above no matter how I try to quote the files.
[04:16:49 CET] <xeons> ffmpeg -f lavfi -i 003_003_MVI_1639.mov -filter_complex "[0:v]negate[a]" -map "[a]" -t 5 text.avi
[04:17:50 CET] <furq> just get rid of -f lavfi
[04:20:58 CET] <xeons> yep, works great. Why does the wiki say to specify it? Was that just an old requirement?
[04:27:53 CET] <DHE> "-f lavfi" means a filter generates the video rather than reading from a traditional source. eg: testsrc2
[04:28:22 CET] <DHE> you can't request the video come from a filter and then give it a filename instead
[04:37:48 CET] <xeons> So what was the wiki talking about? I thought the docs said "-f lavfi -i video" was feeding lavfi a video input.
[05:06:40 CET] <DHE> *thunk*
[05:06:50 CET] <DHE> they're using testsrc as the image generator. like I said above
[05:18:12 CET] <the_k> is it possible to use ffmpeg to display a stream as it's saving it to disk?
[05:18:31 CET] <the_k> i only want 1 connection to the source to save bandwidth
[05:19:30 CET] <furq> ffmpeg -i foo -c:v libx264 out.ts -c:v rawvideo -f nut - | mpv -
[05:20:14 CET] <the_k> so you use -c twice
[05:20:34 CET] <furq> you just specify two output filenames
[05:20:36 CET] <furq> one of them is -
[05:20:54 CET] <the_k> hm
[05:21:01 CET] <the_k> how is it then displayed to a window:?
[05:21:10 CET] <furq> you pipe it to mpv
[05:21:16 CET] <furq> or any movie player which accepts input on stdin
[05:21:23 CET] <furq> ffplay would work fine too but ffplay isn't very good
[05:21:29 CET] <the_k> i suspect that command won't work in windows
[05:21:32 CET] <furq> it should do
[05:21:38 CET] <the_k> ok
[05:21:56 CET] <furq> ffmpeg can't display anything by itself, you need to pipe the output somewhere
[05:22:08 CET] <furq> although you should just be able to play back the .ts as it's recording
[05:22:11 CET] <the_k> does that go to an actual secoind file?
[05:22:14 CET] <furq> no
[05:22:19 CET] <the_k> ok
[05:22:21 CET] <furq> - means "pipe to stdout"
[05:22:22 CET] <the_k> yeah i can play the recording
[05:22:26 CET] <furq> mpv - means "pipe from stdin"
[05:22:31 CET] <the_k> but it's lagged, even when i skip to the 'end'
[05:23:32 CET] <furq> actually that command might not be the best idea
[05:23:40 CET] <furq> if you close the player then the recording will drop
[05:25:15 CET] <furq> there are other ways to do it but they all have the same issue afaik
[05:30:52 CET] <the_k> mpv or mpc?
[05:34:22 CET] <the_k> mpv isn't an executable
[05:35:37 CET] <furq> https://mpv.io/
[05:47:01 CET] <the_k> ah i was searching for a binary on https://github.com/mpv-player/mpv#downloads
[05:47:03 CET] <the_k> thanks
[05:47:15 CET] <the_k> do you use this as a main player?
[05:47:22 CET] <the_k> as your main *
[05:47:36 CET] <furq> no
[05:47:48 CET] <furq> i probably should but i'm too lazy to switch from mpc-hc
[05:47:57 CET] <the_k> ah
[05:55:13 CET] <the_k> damn
[05:55:25 CET] <the_k> it works but i'm getting a few errors
[05:55:35 CET] <the_k> and the player window is shaking the video about
[05:55:43 CET] <the_k> up and down by a couple pixels
[05:56:58 CET] <the_k> ah i set the I Frame interval lower on the stream and it's a lot better
[05:57:11 CET] <the_k> though it does still have the jumping thing going on a bit
[05:58:15 CET] <the_k> and the saved to disk stream also picks up the errors
[06:00:14 CET] <the_k> seems the problem is with the extra work it has to do
[13:32:09 CET] <faLUCE> Hello. In a MPEG-ts header is there any info about the codec used?
[13:32:40 CET] <DHE> yes, mpegts does have codec information, including some metadata like language for audio
[13:33:16 CET] <faLUCE> DHE: in which field? http://dvd.sourceforge.net/dvdinfo/mpeghdrs.html <-- here's a list of the fields, but I can't find the corresponding one
[13:33:47 CET] <DHE> that's the mpeg codec, not the mpegts container
[13:33:58 CET] <JEEB> you first get the PAT, then the PMT
[13:34:16 CET] <JEEB> IIRC the PMT lists what PIDs and what those PIDs contain :P
[13:34:45 CET] <DHE> JEEB: that is correct for mpegts
[13:34:54 CET] <DHE> but this document is DVD-specific which I don't think uses mpegts directly
[13:35:14 CET] <JEEB> yea, it uses mpeg-ps
[13:35:23 CET] <DHE> that makes sense
[13:35:30 CET] <faLUCE> DHE: sorry. From wikipedia: "Program Map Tables (PMTs) contain information about programs. For each program, there is one PMT. While the MPEG-2 standard permits more than one PMT section to be transmitted on a single PID (Single Transport stream PID contains PMT information of more than one program), most MPEG-2 "users" such as ATSC and SCTE require each PMT to be transmitted on a separate PID that is not used for any other
[13:35:32 CET] <faLUCE> packets. The PMTs provide information on each program present in the transport stream, including the program_number, and list the elementary streams that comprise the described MPEG-2 program. There are also locations for optional descriptors that describe the entire MPEG-2 program, as well as an optional descriptor for each elementary stream. Each elementary stream is labeled with a stream_type value." <--- It doesn't
[13:35:34 CET] <faLUCE> mention anything about the codec
[13:35:50 CET] <JEEB> > "optional descriptor for each elementary stream"
[13:36:04 CET] <JEEB> there are some listed in the spec
[13:36:08 CET] <DHE> which is the stream metadata.
[13:36:37 CET] <DHE> there's also a mandatory byte that identifies the stream type. which basically comes down to the codec (mpeg2, h264, AAC, AC3, other program metadata)
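DHE's "mandatory byte" is the `stream_type` field in each PMT elementary-stream entry. A few well-known assignments, sketched as a lookup (values from H.222.0's stream type table, plus the ATSC AC-3 value; not exhaustive):

```python
# Common PMT stream_type values (H.222.0; 0x81 is the ATSC AC-3 assignment):
STREAM_TYPES = {
    0x01: "MPEG-1 video",
    0x02: "MPEG-2 video",
    0x03: "MPEG-1 audio (includes mp2)",
    0x04: "MPEG-2 audio",
    0x0F: "AAC (ADTS)",
    0x1B: "H.264",
    0x24: "HEVC",
    0x81: "AC-3 (ATSC)",
}

def codec_name(stream_type: int) -> str:
    return STREAM_TYPES.get(stream_type, "unknown/private")

print(codec_name(0x1B))  # H.264
```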
[13:36:55 CET] <faLUCE> I see, tnx
[13:37:13 CET] <JEEB> just look for the latest version of H.222 :P
[13:37:19 CET] <JEEB> that should answer your questions faster
[13:37:20 CET] <DHE> here's what I suggest you do. install wireshark, make (or download) a .ts file and open it with wireshark's File, Open... dialog
[13:38:51 CET] <faLUCE> then, a mpegts stream doesn't need to have a header at the beginning of the stream, right?
[13:39:10 CET] <JEEB> MPEG-TS is just a stream of 188 byte packets
[13:39:42 CET] <DHE> mpegts is semi-headerless. it's designed to be received in a stream that may be joined at any point, even a point where the byte alignment of the stream is unknown
[13:39:52 CET] <DHE> mpegts is the format used for ATSC over-the-air TV broadcasts
[13:40:04 CET] <JEEB> but that also of course means that you have to send out the PAT/PMT information every now and then
[13:40:18 CET] <JEEB> otherwise receivers wouldn't be able to know what streams there are and which PIDs are what
[13:40:28 CET] <faLUCE> JEEB: yes
[13:40:29 CET] <JEEB> and of course the video stream will have to have parameter sets around
[13:40:31 CET] <DHE> right. so to join a stream you 1) find and align the sync bytes 2) find and parse the PAT 3) find and parse the PMT 4) find a keyframe
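Step 1 of DHE's list can be sketched directly: every TS packet starts with sync byte 0x47 and is 188 bytes long, so a receiver looks for an offset where 0x47 repeats at a 188-byte stride. Minimal illustration:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def find_sync_offset(buf: bytes, checks: int = 4) -> int:
    """Return the first offset where `checks` consecutive packets start with 0x47, or -1."""
    for off in range(min(len(buf), TS_PACKET_SIZE)):
        if all(
            off + i * TS_PACKET_SIZE < len(buf)
            and buf[off + i * TS_PACKET_SIZE] == SYNC_BYTE
            for i in range(checks)
        ):
            return off
    return -1

# Fake stream: 3 junk bytes, then four dummy 188-byte packets (illustrative):
stream = b"\x00\x01\x02" + (bytes([SYNC_BYTE]) + b"\xff" * 187) * 4
print(find_sync_offset(stream))  # 3
```

Checking several consecutive packets (rather than a single byte) guards against a stray 0x47 appearing inside payload data.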
[13:47:52 CET] <DHE> faLUCE: did you do the wireshark thing?
[13:51:19 CET] <faLUCE> DHE: I'll do that, because I will need to go deeper into the subject. But currently I have to understand how the PCR works. I created with libav a MPEGts stream with both audio (mp2) and video (h264). I computed video timestamps with av_gettime(), and then rescaled them to the mpegts time_base (1/90000). For audio I computed them based on the sample frequency, and then rescaled them too to the ts time_base. Now, given
[13:51:21 CET] <faLUCE> that I sent both audio and video timestamps, why do I need a periodical PCR ?
[13:52:10 CET] <DHE> ffmpeg will take care of PCR handling
[13:52:34 CET] <faLUCE> DHE: yes, I know that it's automatically created. But why is it needed?
[13:52:37 CET] <DHE> its mainly intended for use by streamers which need to send data at a fixed rate to know at what point each packet should be "in the air"
[13:53:15 CET] <DHE> a lot of the time the transmission rate will be fixed and the transmission site needs to fill in unused time with NULL packets (pid 0x1fff). the PCR helps it get the timing right
[13:57:02 CET] <faLUCE> DHE: if I send audio only, the bitrate is fixed (in my case, I send 32kb/s with 1/16000 sample rate). But given that these packets have incremental timestamps based 1) on the samples frequency + 2) on the mpegts timestamp (1/90000), why a PCR is needed?
[13:57:38 CET] <faLUCE> the receiver has all the infos it needs.... without PCR
[13:57:48 CET] <DHE> it's a requirement of the spec. mpegts is used in over-the-air and cable broadcasts where a single mpegts stream may carry multiple live TV channels, on-demand video, and maybe more
[13:58:25 CET] <DHE> so the transmission needs to manage that all cleanly without risking buffer underrun in the players
[14:00:24 CET] <faLUCE> DHE: I see, but it still doesn't make sense to me... if I have monotonic timestamps, and a time_base, why would I care about a PCR? the PCR should be the time_base itself....
[14:01:27 CET] <JEEB> if you find something to be not useful for yourself you can start looking at alternatives
[14:01:40 CET] <JEEB> if the MPEG-TS spec says you need to have PCR then you need to have PCR
[14:02:07 CET] <DHE> just because you don't find it useful for your particular use case doesn't mean it's useless
[14:03:00 CET] <faLUCE> JEEB: DHE, I'm not saying that it's useless. I'm saying that I don't understand in which cases it's useful :-). Some say that it's useful for CBR, but why?
[14:04:10 CET] <JEEB> to know when to fill the stream with null packets to create a CBR mux
[14:04:59 CET] <DHE> CBR isn't the codec setting. it's the stream transmission setting
[14:05:29 CET] <DHE> over the air broadcasts run at 18.8 (??) megabits per second. that's a requirement. if you're streaming a black image, you still have to fill 18.8 megabits of airtime
[14:05:49 CET] <JEEB> exact numbers of course depend on your broadcast specs
[14:10:00 CET] <faLUCE> ok, if I understand correctly, I have two rates: the codec's rate and the ts rate
[14:10:19 CET] <faLUCE> the ts rate is higher than the codec's one
[14:10:51 CET] <faLUCE> and it's fixed.
[14:14:40 CET] <faLUCE> then the demuxer demuxes packets at a fixed rate, and these packets have a variable part with useful data, and zeroes for the remaining part?
[14:15:09 CET] <faLUCE> (null packets, not zeroes)
[14:15:27 CET] <DHE> not demuxer, but muxer
[14:15:55 CET] <faLUCE> DHE: but also the demuxer receives these packets
[14:16:06 CET] <DHE> in my 18.8 megabit estimate, that's 12,500 cells/second of the 188 byte cells (using "cell" in homage to the ATM connectivity term)
[14:16:28 CET] <DHE> but if you only need 6,000 cells/second, then yes about 6500 cells are actually NULLs
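DHE's cell arithmetic, sketched: at a fixed mux rate the packet-per-second budget is rate/(188*8), and whatever the programs don't fill is sent as NULL packets (PID 0x1fff). Using his illustrative 18.8 Mbit/s figure:

```python
TS_PACKET_BITS = 188 * 8  # one transport packet is 188 bytes

def null_packets_per_second(mux_rate_bps: int, payload_pps: int) -> int:
    """How many NULL packets per second a CBR mux must insert as padding."""
    budget = mux_rate_bps // TS_PACKET_BITS
    return max(budget - payload_pps, 0)

budget = 18_800_000 // TS_PACKET_BITS
print(budget)  # 12500 packets/second
print(null_packets_per_second(18_800_000, 6000))  # 6500
```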
[14:17:11 CET] <Franciman> hi, how can I extract keyframes from a video?
[14:17:23 CET] <DHE> Franciman: and do what?
[14:17:28 CET] <faLUCE> then, if some bytes are LOST in the air, a PCR is useful to RESYNC
[14:17:30 CET] <faLUCE> right?
[14:17:36 CET] <DHE> ... no
[14:18:00 CET] <Franciman> DHE, I need the corresponding AVFrame's
[14:19:07 CET] <DHE> Franciman: the AVFrame contains a key_frame field you can check to see if the input frame was a keyframe
[14:19:52 CET] <Franciman> oh... sorry for the dumb question, didn't notice it. Thanks a lot
[14:20:19 CET] <DHE> eh, it happens
[14:20:27 CET] <JEEB> the "keyframe" naming of it really is old though
[14:21:04 CET] <JEEB> since nowadays it means a "random access picture", since not all keyframes are necessarily that
[14:21:13 CET] <Franciman> oh!
[14:21:33 CET] <JEEB> I think with all modern formats the keyframe flag *should* be exported only on IRAPs
[14:21:43 CET] <JEEB> if not, that is a bug
[14:26:58 CET] <faLUCE> Then I don't understand yet. JEEB said that PCR is useful to know when to fill the stream with null packets. But given that the mpegts timebase is fixed to 90000, the muxer already knows how many null packets it has to provide for creating the CBR. So, why is a PCR needed?
[14:28:27 CET] <DHE> a PCR is program-wide and the timestamps on the frames are intended for players which buffer and do out-of-order processing. the PCR is intended for the muxer/transmitter which has stricter deadlines
[14:28:31 CET] <DHE> which is why the PCR runs at 27 MHz
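The 27 MHz PCR DHE mentions is carried as a 33-bit base in 90 kHz units plus a 9-bit extension in 27 MHz units (27 MHz = 300 × 90 kHz), so combining them looks like:

```python
PCR_EXT_PER_BASE = 300  # 27 MHz = 300 * 90 kHz

def pcr_value(base_90khz: int, extension: int) -> int:
    """Combine the 33-bit PCR base and 9-bit extension into 27 MHz units."""
    assert 0 <= extension < PCR_EXT_PER_BASE
    return base_90khz * PCR_EXT_PER_BASE + extension

def pcr_seconds(pcr: int) -> float:
    return pcr / 27_000_000

one_second = pcr_value(90_000, 0)
print(one_second)               # 27000000
print(pcr_seconds(one_second))  # 1.0
```

The 90 kHz base shares its resolution with PTS/DTS, which is why the two clocks are easy to compare even though PCR runs 300× finer.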
[14:30:39 CET] <faLUCE> DHE: then are you saying that PCR is not necessary when you have timestamps?
[14:30:48 CET] <DHE> I give up
[14:30:55 CET] Action: JEEB pats DHE
[14:31:27 CET] <faLUCE> :-(
[14:48:05 CET] <faLUCE> ok, now (maybe) I'm starting to understand. With PCR the muxer creates a stream with a CONSTANT MUXRATE (which is the bitrate of the muxer). So it sends: PCR - (pkt with data) - (pkt with data) - (pkt with data) .... (NULL packets) - PCR - etc. Where the null packets are (more or less) before the next PCR is sent
[14:49:12 CET] <faLUCE> with codecs' bitrate < muxrate
[14:50:34 CET] <faLUCE> then, the demuxer knows the quantity of real data to demux based on the interval between two PCRs
[14:51:12 CET] <faLUCE> the quantity of data per second
[14:53:11 CET] <DHE> the demuxer doesn't care about the PCRs
[14:54:37 CET] <Franciman> can I always rely on the fact that AVFormatContext's pb is not NULL if I create the context with avformat_open_input?
[14:55:09 CET] <Franciman> or is there any case in which it is closed?
[14:55:26 CET] <Franciman> (and NULL'd)
[14:56:23 CET] <faLUCE> DHE: then the demuxer simply discards null packets?
[14:57:00 CET] <DHE> yeah. demuxer doesn't need the padding
[14:59:02 CET] <faLUCE> DHE: then, why do some say that PCR is used for syncing audio and video??
[15:01:51 CET] <faLUCE> wait: the answer is "because their contents have different codec's rate". Then, it gives a common base (with higher rate) for both
[15:04:19 CET] <DHE> PCR is only attached to one stream. I think video is most common. it's indicated in the PMT
[15:04:33 CET] <DHE> the audio and video sync is maintained by the PTS/DTS of the individual streams
[15:12:49 CET] <faLUCE> DHE: thanks for all the infos. Then PCR is only used for MUXING. It has nothing to do with decoding (well... demuxers are often called "decoders", today). For displaying the decoded pkts in the correct time order, the decoder only uses the timestamps (which are provided by the muxer too)
[15:14:10 CET] <faLUCE> lot of sites say that PCR is used for syncing audio and video. This confused me a lot. PCR is only used for creating a constant muxrate stream
[15:18:00 CET] <DHE> I disagree. Factually only one of the audio or video will have a PCR attached which gives it little use in synchronization
[15:54:40 CET] <faLUCE> DHE: even if attached to both, it wouldn't help with sync, given that there are timestamps
[15:55:42 CET] <faLUCE> DHE: anyway, if timestamps are not provided, it could be useful for sync
[15:57:39 CET] <faLUCE> DHE: anyway, there's one thing that I don't understand. You said that PCR is used only by the muxer. Then, it's nonsense to send it, if the demuxer will not use it
[16:00:32 CET] <faLUCE> I'm seeing that a mpegts demuxer should have a parser, which checks for pcr discontinuities
[16:02:48 CET] <faLUCE> https://github.com/hepek/MPEG-TS
[16:05:54 CET] <DHE> that's a software package for demuxing mpegts, sure. the sample application that uses the library detects that as a proof of concept, not because it's strictly necessary
[16:06:15 CET] <DHE> mpegts format has built-in (albeit weak) discontinuity detection
[16:08:46 CET] <faLUCE> but it's nonsense to send this packet if it's not going to be used by demuxers
[16:08:56 CET] <DHE> head
[16:08:57 CET] <DHE> hit
[16:08:58 CET] <DHE> keyboard
[17:48:34 CET] <wget> Hello everyone. I'm trying to get the content of the RT video streamer of my ISP (as the latter does not support linux) in order to be able to watch the content I paid for. The content is HLS using AES encryption provided with a FAXKS server.
[17:48:34 CET] <wget> When I try to download the M3U8 the latter contains a EXT-X-FAXS-CM tag containing garbage data (looks like base64, but this is actually AES content);
[17:48:34 CET] <wget> When I use ffmpeg on that link, ffmpeg succeeds to detect a key file and points to an URL which answers by a 404. (do not know why, as I force my cookies to be resent)
[17:48:34 CET] <wget> How can ffmpeg find that key server URL? Is it using the ERE? I thought the latter was only available for HTML5 HLS, not HLS sent via Adobe Flash.
[17:49:17 CET] <wget> I can provide the output in private if you want.
[17:50:31 CET] <BtbN> Sounds like a proprietary protocol extension
[17:50:43 CET] <BtbN> so, it probably can't.
[17:53:42 CET] <wget> BtbN: It doesn't seem like
[17:53:54 CET] <wget> Here is the output I have
[17:53:55 CET] <wget> https://gist.github.com/wget/d447a6120d6bc871ed70f572cbcd7e06
[18:25:50 CET] <wget> EXT-X-FAXS-CM was not containing AES content but PKCS7 encoded as base64; I was able to recover a full list of certificate chain from it
[18:26:13 CET] <wget> But still no link to any https://license.<URL>
[18:34:14 CET] <faLUCE> DHE: finally I got the meaning of the PCR. Sorry to say that it's not what you say (but I'm not polemic, I'm just discussing about it). It's sent by the muxer and received by the demuxer in order to keep in sync the muxer's and demuxer's clocks: http://www.ce.unipr.it/~petrolin/livestreamer/PCRs.gif
[18:35:01 CET] <faLUCE> obviously, it can be used by the DEMUXER in order to set the latency of the received stream
[18:37:29 CET] <faLUCE> from wikipedia: "Presentation time stamps have a resolution of 90kHz, suitable for the presentation synchronization task. The PCR or SCR has a resolution of 27MHz which is suitable for synchronization of a decoder's overall clock with that of the usual remote encoder, including driving TV signals such as frame and line sync timing, colour sub carrier, etc."
[18:40:17 CET] <faLUCE> it's also a way to determine the receiver buffer size
[18:41:07 CET] <DHE> right, but it's only any good if the transmitter is sending the PCRs on time which is a major component of what it does
[18:43:58 CET] <faLUCE> DHE: I don't understand your last answer ("on time which is a major component of what it does") what do you mean?
[18:45:01 CET] <DHE> in order to synchronize clocks, it's necessary that the transmission of the PCR arrive properly
[18:45:08 CET] <kerio> do you guys know if there's some VNC server/client combination that uses h264 for compression
[18:45:10 CET] <DHE> I mean, a 27 MHz PCR written into a file on disk does nobody much good
[18:45:42 CET] <faLUCE> DHE: then, for HTTP it's pretty overkill
[18:47:05 CET] <furq> kerio: i don't think so
[18:47:34 CET] <DHE> faLUCE: like I said, fixed rate live streaming
[18:47:47 CET] <faLUCE> DHE: so I have to remove this PCR from the ts header, when streaming HTTP mpegts
[18:48:07 CET] <DHE> oh my god...
[18:48:12 CET] <DHE> just leave it as-is. it's fine.
[18:48:58 CET] <faLUCE> DHE: the issue is that when I stream video+audio it's fine. But when I stream audio only, the vlc receiver takes care about the received PCR, and does a mess
[18:49:29 CET] <faLUCE> DHE: in fact, the video ts packets don't have the adaptation field, and NO pcr
[18:49:57 CET] <faLUCE> so, vlc ignores the PCRs, when receiving audio+video
[18:50:50 CET] <faLUCE> but when it receives audio only, it sees this adaptation field, and it messes up the pkts' latencies
[18:51:32 CET] <faLUCE> well: the best solution should be that VLC ignores PCR when it decodes HTTP mpegts
[18:51:55 CET] <faLUCE> but given that it doesn't, I have to remove the adaptation field from the mpegts audio packets
[18:52:06 CET] <faLUCE> (with mplayer there's the same problem)
[19:07:06 CET] <faLUCE> well, I just saw that my question was really NOT trivial: https://ffmpeg.org/pipermail/ffmpeg-devel/2016-February/189713.html
[19:20:27 CET] <faLUCE> at this point I wonder if there is an option for setting VBR in audio mpegts-muxing, on libav. CBR (and adaptation field with PCR) is automatically set for mp2.
[19:22:07 CET] <faLUCE> otherwise, I have to change the muxer format... what could I use instead of mpegts for HTTP audio/video streaming?
[19:26:22 CET] <furq> do you have to use mpeg2
[19:26:25 CET] <furq> er, mp2
[19:26:55 CET] <faLUCE> furq: not necessarily
[19:27:03 CET] <furq> you could try vbr aac
[19:27:27 CET] <faLUCE> furq: does mpegts include this codec?
[19:27:30 CET] <furq> yes
[19:27:38 CET] <faLUCE> furq: thnks
[19:27:39 CET] <furq> that's what hls uses
[19:28:45 CET] <faLUCE> furq: now I understand why, at this point :-)))
[19:29:18 CET] <furq> well i imagine that has more to do with the fact that apple spent a lot of money making the best aac encoder, and paying for an aac license
[19:29:28 CET] <furq> but maybe the pcr thing was a factor too
[19:30:24 CET] <JEEB> I'm pretty sure Apple licenses its AAC encoder?
[19:30:28 CET] <faLUCE> furq: I thought it was open source
[19:30:29 CET] <JEEB> from dolby or so?
[19:31:17 CET] <faLUCE> is there anything more open than AAC that can be used for audio?
[19:31:17 CET] <faLUCE> maybe vorbis?
[19:31:31 CET] <JEEB> AAC is fully open as a format
[19:31:36 CET] <JEEB> opus is the latest addition
[19:31:47 CET] <JEEB> in the audio formats, and officially supported in MPEG-TS
[19:31:53 CET] <JEEB> although not many clients support it there
[19:31:53 CET] <JEEB> yet
[19:31:58 CET] <faLUCE> I understand.
[19:32:09 CET] <faLUCE> then apple's implementation != ffmpeg implementation?
[19:32:24 CET] <JEEB> apple uses whatever is used in QT, which is I think licensed from somewhere
[19:32:37 CET] <JEEB> and FFmpeg can use various encoders through lavc
[19:34:43 CET] <furq> i don't work for apple but i've read they developed it themselves
[19:35:13 CET] <furq> http://wiki.hydrogenaud.io/index.php?title=Apple_AAC
[19:35:19 CET] <furq> wouldn't surprise me if dolby were involved in some way though
[19:41:31 CET] <thebombzen> what is this HLS thing that is all the hype of kids these days
[19:42:19 CET] <furq> https://en.wikipedia.org/wiki/HTTP_Live_Streaming
[19:42:50 CET] <furq> i wouldn't call it hyped, it's just the least shitty option in the septic tank that is the web
[19:42:51 CET] <thebombzen> I read that, but Wikipedia is pretty bad about giving explanations to someone who isn't already familiar
[19:43:12 CET] <furq> it serves mpegts fragments over http and your browser can play it without too much fucking around
[19:43:27 CET] <__jack__> HLS is just an easy way of stream video over an existing efficient & common infrastructure
[19:43:33 CET] <thebombzen> Well I got that, but it doesn't say a lot of things, like why is it Apple-oriented
[19:43:33 CET] <furq> without the need for flash or whatever the fuck you even need to do to get mpeg-dash to work
[19:43:42 CET] <furq> because apple made it for streaming to iOS
[19:43:51 CET] <furq> don't ask me why
[19:44:01 CET] <thebombzen> I mean sure Apple made it but Wikipedia doesn't explain why Apple's thing is the "best one"
[19:44:05 CET] <__jack__> because nothing sane exists ? :)
[19:44:15 CET] <furq> the only alternative that's widely supported by browsers is mpeg-dash
[19:44:33 CET] <furq> and that's not and will never be supported by iOS, for mysterious reasons no mortal can comprehend
[19:44:36 CET] <furq> so you're stuck with HLS
[19:44:43 CET] <furq> which thankfully is at least fairly simple
[19:44:51 CET] <thebombzen> Well "will never be supported by iOS" is extremely common
[19:44:54 CET] <thebombzen> and really never a surprise
[19:44:55 CET] <__jack__> owh, that's easy to understand : mpeg-dash is insanity
[19:44:57 CET] <furq> yeah
[19:45:00 CET] <c_14> Apple is special
[19:45:01 CET] <__jack__> crazy stuff, that's it
[19:45:05 CET] <thebombzen> examples: Ogg, Vorbis, FLAC, Opus
[19:45:28 CET] <furq> __jack__: that would be a better reason if apple didn't secretly support mpeg-dash for netflix.com
[19:45:33 CET] <thebombzen> You still can't use vorbis or opus music in iTunes synced to iOS
[19:45:34 CET] <__jack__> I mean: look at the HLS format, and the dash format. Try to find some docs about both. You'll soon understand why HLS rulz dash :)
[19:45:38 CET] <furq> but they do. they have a working implementation of it
[19:45:52 CET] <furq> they just won't let you use it unless you're on the whitelist, and the whitelist consists of one huge company who told them to get fucked
[19:45:59 CET] <thebombzen> I don't understand why you can't use FLAC, Vorbis, or Opus in iTunes/iOS
[19:46:05 CET] <__jack__> furq: yep, that's apple :'(
[19:46:07 CET] <furq> because google use opus
[19:46:09 CET] <thebombzen> it's like Apple said "Fuck Xiph"
[19:46:09 CET] <furq> that's why
[19:46:15 CET] <thebombzen> this predates Opus
[19:46:29 CET] <thebombzen> they refused to use Vorbis/Ogg/flac before Opus existed
[19:46:33 CET] <thebombzen> like 8 years ago
[19:46:38 CET] <furq> no browser supported flac until very recently
[19:46:47 CET] <furq> and vorbis was never widely used
[19:46:51 CET] <thebombzen> Forget browser
[19:47:18 CET] <thebombzen> as far back as 2009 I had lossless CD rips as FLACs and I had to convert them to ALAC to sync them to my iCrap
[19:47:46 CET] <furq> well yeah they have their own competing format
[19:47:54 CET] <furq> remember they didn't let you use mp3s on ipods originally, you had to use aac
[19:48:05 CET] <thebombzen> that makes even less sense
[19:48:11 CET] <furq> that's apple
[19:48:14 CET] <thebombzen> given that AAC and MP3 are both MPEG1/2 standards
[19:48:28 CET] <thebombzen> but even still not supporting flac doesn't really make sense given that libFLAC is BSD-licensed
[19:48:41 CET] <thebombzen> the CLI is GPLed but like so what
[19:49:35 CET] <furq> it's just more vendor lock-in
[19:49:37 CET] <__jack__> I guess it's easier to make working software and good integration with as few technologies as possible
[19:49:54 CET] <furq> most people would probably struggle to convert their alac to flac
[19:50:14 CET] <furq> at least back when alac support was less common outside of apple world
[19:51:04 CET] <furq> in summary, apple are dicks
[19:51:12 CET] <furq> that's usually the answer
[19:51:22 CET] <thebombzen> well converting alac to flac is easy tho
[19:51:27 CET] <thebombzen> ffmpeg -i input.m4a output.flac
[19:51:28 CET] <furq> sure it is
[19:51:30 CET] <thebombzen> it's almost like
[19:51:31 CET] <__jack__> people love dicks
[19:51:33 CET] <thebombzen> there's a program for this
[19:51:39 CET] <__jack__> wait, that explains everything .. !
[19:51:49 CET] <thebombzen> I'm not particularly a fan of dicks
[19:51:54 CET] <furq> i'm sure your average iOS user is a proficient ffmpeg user too
[19:52:06 CET] <thebombzen> lol
[19:52:24 CET] <thebombzen> you don't need to be a proficient ffmpeg user to know how to do "ffmpeg -i input.m4a output.flac"
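The one-liner quoted above is the whole conversion; a hypothetical "converter app" is little more than a loop around it. A sketch, assuming ffmpeg is on the PATH — `flac_command` and `convert_all` are illustrative names, not part of any real tool:

```python
# Sketch of what an ALAC-to-FLAC "converter app" boils down to:
# one ffmpeg invocation per file. Assumes ffmpeg is on the PATH.
import subprocess
from pathlib import Path

def flac_command(m4a_path):
    """Build the ffmpeg argv for one file. ffmpeg picks the codec from
    the output extension, so no explicit -c:a flac is needed."""
    out = Path(m4a_path).with_suffix(".flac")
    return ["ffmpeg", "-i", str(m4a_path), str(out)]

def convert_all(directory):
    """Convert every .m4a in `directory`, one subprocess call each."""
    for f in Path(directory).glob("*.m4a"):
        subprocess.run(flac_command(f), check=True)
```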
[19:52:25 CET] <furq> and they won't end up converting it with "jimmysoft.ru ultimate alac to flac conversion master 2000 FREE EDITION"
[19:52:28 CET] <thebombzen> but then again
[19:52:37 CET] <thebombzen> knowing what a CLI is helps
[19:52:45 CET] <thebombzen> lol furq
[19:53:02 CET] <thebombzen> it's like "YouTube Downloader Audio Ripper Master Converter"
[19:53:20 CET] <thebombzen> when it's really just a gui for "youtube-dl -x"
[19:53:47 CET] <furq> my old flatmate who was reasonably good with computers had a million of those "mp4 to docx" converters which were clearly just sponsored links at the top of google results for "convert x to y"
[19:54:20 CET] <c_14> mp4 to docx? This I want to see.
[19:54:27 CET] <furq> google it
[19:54:30 CET] <furq> jimmysoft.ru has got your back
[19:54:50 CET] <furq> also there's no way any of these tools would use anything as good as youtube-dl
[19:55:46 CET] <thebombzen> well some of these tools use pretty good tools
[19:56:04 CET] <thebombzen> AVS was a big one a few years ago, and I remember that AVSH264Codec.dll was just x264, renamed
[19:56:15 CET] <furq> it's all some bulgarian student's first year university project which he's now charging 200 lev for and bundling with the google toolbar and ILOVEYOU.VBS
[19:56:37 CET] <thebombzen> I think I got that joke
[19:57:13 CET] <thebombzen> I was under the impression though that students who are learning to develop in their first year at uni will never be good programmers
[19:57:20 CET] <furq> well yeah
[19:57:56 CET] <thebombzen> people who learn to program during uni first year are usually the sort of people who write tumblr's website
[19:58:06 CET] <thebombzen> it's shocking how awful my CS peers are at code
[19:58:24 CET] <thebombzen> hell I'm a mathematician and I write better code than most of the CS majors I talk to
[19:58:25 CET] <furq> but a good programmer would just tell you to use ffmpeg instead of trying to charge 100 koruny for a tool they wrote in turbo pascal which converts gif to rmvb
[19:58:49 CET] <thebombzen> hey, I'll let you know that I wrote a graphical program (backed by ffmpeg) that converts video to GIF lmao
[19:59:24 CET] <thebombzen> but it abstracts away all the annoyances, like ending up with a 70 MB GIF file: it does the binary search for you so the output ends up under 2 MB
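The binary search thebombzen describes — shrink an encoding parameter until the GIF fits a size budget — can be sketched like this. The `encode` callable here is a made-up stand-in for the real ffmpeg GIF encode, assumed to produce output whose size grows monotonically with width:

```python
# Sketch of the trick: binary-search an encoding parameter (here,
# output width) until the encoded file lands under a size budget.
# encode() stands in for a real ffmpeg encode; we assume its output
# size is monotonically increasing in width.

def largest_under_budget(encode, lo, hi, budget):
    """Return the largest width in [lo, hi] whose encode fits `budget`
    bytes, or None if even `lo` is too big."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode(mid) <= budget:
            best = mid          # fits: remember it, try something bigger
            lo = mid + 1
        else:
            hi = mid - 1        # too big: shrink
    return best

# Made-up size model: cost grows quadratically with width.
fake_encode = lambda width: width * width * 15
print(largest_under_budget(fake_encode, 100, 1280, 2_000_000))
```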
[19:59:28 CET] <furq> i'm not saying that's an inherently bad thing
[19:59:34 CET] <thebombzen> lol
[19:59:41 CET] <thebombzen> yea but at least I admit mine's backed by ffmpeg
[19:59:42 CET] <furq> just that there are many bad ones, and that's inevitably what people find on google
[19:59:45 CET] <thebombzen> lol
[19:59:52 CET] <furq> yeah you're already disqualified by actually using libavcodec
[20:00:01 CET] <thebombzen> mine's pretty trash lol
[20:00:16 CET] <thebombzen> it's not backed by avcodec, it's backed by ffmpeg.c
[20:00:19 CET] <thebombzen> GUI written in Java
[20:00:24 CET] <thebombzen> piece of shit software amirite
[20:00:42 CET] <thebombzen> this is before I decided I'd abandon that terrible language
[20:00:55 CET] <furq> i tried searching for "mp4 to mkv converter" and ublock won't let me visit any of the links
[20:01:00 CET] <thebombzen> XD
[20:01:51 CET] <thebombzen> but yea here's my thing #shameless self promotion https://thebombzen.github.io/TumblGIFifier/
[20:01:54 CET] <thebombzen> look at my terrible Java code
[20:02:28 CET] <furq> is there any other type of java code
[20:03:04 CET] <c_14> There's the kind that was never written.
[20:03:24 CET] <furq> http://www.aviosoft.com/tips/wp-content/uploads/Aviosoft-video-converter-MP4-to-AVI.jpg
[20:03:36 CET] <furq> i knew google wouldn't let me down
[20:05:25 CET] <thebombzen> this is my favorite part tbh: https://github.com/thebombzen/TumblGIFifier/blob/master/src/thebombzen/tumblgififier/Tuple.java
[20:05:39 CET] <thebombzen> this language is a piece of crap
[20:07:56 CET] <furq> nice comments
[20:09:00 CET] <furq> is there some compelling reason to not just make e and f public
[20:09:00 CET] <JEEB> thebombzen: you should try out kotlin
[20:09:19 CET] <JEEB> it makes java environment feel saner a bit
[20:09:33 CET] <JEEB> also you can move to kotlin part by part
[20:09:42 CET] <JEEB> instead of having to completely switch a language
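The boilerplate furq is poking at — private fields plus getters just to hold a two-element pair — is what Kotlin collapses to a one-liner (`data class Tuple<E, F>(val e: E, val f: F)`), which is roughly what JEEB is hinting at. For comparison, a Python sketch of the same idea, where fields are public and equality comes for free:

```python
# What a hand-written pair class reduces to with a record type:
# public fields, structural equality, and a readable repr, no getters.
from typing import NamedTuple

class Tuple(NamedTuple):
    """Immutable pair; fields are plain attributes."""
    e: object
    f: object

p = Tuple(1, "x")
print(p.e, p.f, p == Tuple(1, "x"))
```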
[20:11:19 CET] <furq> i would make a smug "you should use x" suggestion but if you need a gui your only sensible choice is C++
[20:11:23 CET] <furq> and i can't recommend that
[20:14:22 CET] <JEEB> nowadays the funky way to do a GUI seems to be to bundle chromium and add JS bindings to native code
[20:14:30 CET] <JEEB> and do the GUI in HTML5+JS
[20:16:24 CET] <furq> i don't think james brown would agree with you
[20:17:32 CET] <JEEB> for many purposes as much as I dislike the idea it seems to be a passable way to do it
[20:18:31 CET] <furq> was it spotify which was doing that and it turned out it was doing thousands of disk writes
[20:18:53 CET] <furq> yeah it was
[20:19:10 CET] <furq> cref was constantly running sqlite vaccum and doing ~100GB of disk writes every day
[20:19:39 CET] <furq> vacuum
[20:20:26 CET] <furq> "I just checked, and Task Manager says Spotify has written over 1TB in the past 15 days. I've played maybe 5 songs on this computer during that time."
[20:23:39 CET] <faLUCE> I haven't written a standalone GUI in 5 years
[20:24:03 CET] <JEEB> Qt does the multiplatform thing well enough. although I must say that the barrier of entry is most likely lower with the HTML5+CSS+JS solution
[20:24:21 CET] <JEEB> I'm just saying that I personally don't like it as much, but I must give it what's due
[20:24:52 CET] <JEEB> it's much simpler to find web jockeys to make up a nice thing than UI coders capable of Qt et al
[20:25:34 CET] <faLUCE> JEEB: Qt is multiplatform, but you have to update your GUI many times
[20:26:28 CET] <faLUCE> this is bad
[20:26:31 CET] <JEEB> I've had the exactly same code work on desktops as well as android, so I don't consider it any different from making your HTML5+CSS thing capable of resizing
[20:26:48 CET] <JEEB> whatever the newfangled way of saying that is
[20:27:19 CET] <JEEB> also do note that I'm not fighting for anything here, I have my personal preferences and that's it. I've already noted that the HTML5+CSS+JS way is most likely the thing with the lower barrier of entry
[20:27:51 CET] <faLUCE> JEEB: you have the same code that works for many platforms now, but with js+svg the same code will work for many platforms forever
[20:29:18 CET] <faLUCE> IMHO the absurdity is when you make server side js applications (nodejs)
[20:36:18 CET] <furq> what if you care about your code not using a disgusting amount of memory
[20:38:18 CET] <faLUCE> furq: listen.... today svg+js work perfectly on very cheap mobile phones
[20:38:38 CET] <furq> well yeah they're using the system webview which everything on the device uses
[20:38:55 CET] <faLUCE> if you want to do embedded stuff, then you have to do specific gui with specific libs
[20:38:57 CET] <furq> they're not bundling 80MB of dlls for a hello world app
[20:39:26 CET] <furq> if you're talking about an actual webapp which runs in my browser then sure
[20:39:45 CET] <furq> but if you make it standalone then it's going to suck on desktops
[20:40:56 CET] <faLUCE> furq: I used wxwidgets in the past, with a very very very minimal amount of memory. But today I'm not wasting my time in these things
[20:41:03 CET] <faLUCE> I just don't have time
[20:41:10 CET] <JEEB> well wxwidgets is the one thing that seems nice at first
[20:41:22 CET] <JEEB> but then you drown in "fixing a bug for each OS"
[20:41:29 CET] <faLUCE> JEEB: exactly
[20:41:38 CET] <faLUCE> and with Qt, more or less the same
[20:41:44 CET] <faLUCE> (you will not agree)
[20:42:05 CET] <furq> i also like the way those libs insist on rewriting the entire stdlib, which makes them awful to use from scripting languages
[20:42:08 CET] <JEEB> less but I do agree :P I mean, all of those things will have *some* bugs
[20:42:21 CET] <JEEB> which are OS/arch specific
[20:42:38 CET] <JEEB> and if you go low enough you will be trying to unfuck chromium on embedded
[20:42:47 CET] <furq> everything sucks at some point in the chain
[20:43:13 CET] <furq> i'd like to think i'd take the hit for a better user experience, but the reality is that i just don't write any gui stuff because it's all horrible
[20:43:32 CET] <furq> on which note, i wonder if libui is usable yet
[20:43:38 CET] <JEEB> wx is just the one that promises a lot ("we're using the native thing for each OS!"), but which will end up your crappiest alternative
[20:44:04 CET] <furq> wx is certainly the one i like most as a user
[20:44:04 CET] <JEEB> well, not crappiest but you know what I mean
[20:44:13 CET] <furq> and i've had reasonable luck using it, although i didn't use it for anything major
[20:44:22 CET] <JEEB> you can try talking about wx to Myrsloik
[20:44:29 CET] <JEEB> he probably has PTSD from it
[20:44:30 CET] <furq> and i never tested it on OSX which i bet is always the odd one out
[20:44:36 CET] <JEEB> yup
[20:45:43 CET] <faLUCE> 6 years ago wx was the only choice for embedded stuff
[20:45:55 CET] <furq> the guy who wrote bsnes had a promising libui-style lib but he's taken it offline now
[20:46:06 CET] <furq> or whatever bsnes is called now
[20:46:35 CET] <furq> probably ispeakjapanese.exe
[20:51:31 CET] <faLUCE> " The encoder 'aac' is experimental but experimental codecs are not enabled" . I added avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL but still get the error
[20:56:03 CET] <c_14> shouldn't it be < ?
[20:57:04 CET] <JEEB> faLUCE: well I'd say your first attempt would be to see if your thing still runs with newer FFmpeg :P
[20:57:10 CET] <JEEB> that has the AAC encoder improvements
[20:57:22 CET] <JEEB> otherwise you ain't going to have as much fun
[20:59:09 CET] <faLUCE> JEEB: I had to workaround that with audioEncoderCodecContext->strict_std_compliance = -2;
[20:59:53 CET] <JEEB> that requirement was removed in newer FFmpeg after the improvements to the AAC encoder
[21:00:02 CET] <JEEB> and atomnuker did work on it a lot :P
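For reference, the gate faLUCE was hitting: an experimental encoder refuses to open unless `strict_std_compliance` has been lowered to `FF_COMPLIANCE_EXPERIMENTAL` (-2) or below — the field has to be assigned, not compared (hence c_14's remark). A Python sketch of the constants from avcodec.h and the check the encoder effectively applies:

```python
# FFmpeg's standards-compliance levels (values from libavcodec's
# avcodec.h) and the gate an experimental encoder applies at open time.
FF_COMPLIANCE_VERY_STRICT = 2
FF_COMPLIANCE_STRICT = 1
FF_COMPLIANCE_NORMAL = 0
FF_COMPLIANCE_UNOFFICIAL = -1
FF_COMPLIANCE_EXPERIMENTAL = -2

def experimental_encoder_allowed(strict_std_compliance):
    """The encoder refuses unless compliance is EXPERIMENTAL or below,
    so the fix is: avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;"""
    return strict_std_compliance <= FF_COMPLIANCE_EXPERIMENTAL
```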
[21:05:27 CET] <faLUCE> now. This coded doesn't like AV_SAMPLE_FMT_S16. which format should I use? should I resample too?
[21:05:33 CET] <faLUCE> codec*
[21:06:07 CET] <JEEB> it just takes in float I guess
[21:06:18 CET] <JEEB> you shouldn't have to make other adjustments
[21:07:20 CET] <faLUCE> it doesn't allow opening the codec with this format
[21:07:40 CET] <JEEB> yes, because it requires another one. use avresample or swresample to resample if you don't want to use something else
[21:07:47 CET] <JEEB> that lets you convert between audio formats
[21:09:36 CET] <faLUCE> then resample from AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_FLT
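At its core, the S16-to-FLT conversion discussed here is a rescale of each 16-bit sample into the float range [-1.0, 1.0). In real code you would let swresample (`swr_convert`) do it, but the arithmetic is roughly this:

```python
# The core of an AV_SAMPLE_FMT_S16 -> AV_SAMPLE_FMT_FLT conversion:
# rescale each 16-bit integer sample by 1/32768 into [-1.0, 1.0).
# This is a sketch of what the resampler does for this format pair;
# use libswresample/libavresample in real code, not a hand-rolled loop.

def s16_to_flt(samples):
    """samples: iterable of ints in [-32768, 32767]; returns floats."""
    return [s / 32768.0 for s in samples]

print(s16_to_flt([-32768, 0, 16384, 32767]))
# -32768 maps to -1.0, 0 to 0.0, 16384 to 0.5
```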
[00:00:00 CET] --- Sun Jan 29 2017
More information about the Ffmpeg-devel-irc mailing list