[Ffmpeg-devel-irc] ffmpeg.log.20170116

burek burek021 at gmail.com
Tue Jan 17 03:05:35 EET 2017

[00:00:04 CET] <rkantos> cpu is under 30%..
[00:14:34 CET] <rkantos> JEEB: yeah.. I can get 640x480 working nicely, not increasing the framebuffer, but 720p just increases the framebuffer to the end
[00:14:49 CET] <rkantos>  i.e. it doesn't have enough time to capture
[00:15:22 CET] <rkantos> I just don't get why this happens since both gpu and cpu are not working overtime
[00:16:10 CET] <rkantos> Am I faced not with a cpu but a memory limitation? Surely not
[00:17:36 CET] <rkantos> 13GB/s should be plenty :D
[00:17:46 CET] <JEEB> depends
[00:18:01 CET] <JEEB> I mean, it depends on what sort of buffers etc are in use by libavdevice etc
[00:19:11 CET] <rkantos> I do get "past duration 0.xxxxxx too large" with 640x480 too though..
[00:19:16 CET] <rkantos> which seems to skip frames
[00:24:37 CET] <rkantos> yeah.. 30% cpu usage from ffmpeg, and memory usage just increasing to the rtbufsize.. stream lagging after every 10s or so
[00:34:30 CET] <rkantos> can I do anything with this?
[00:34:31 CET] <rkantos> http://pastebin.com/kGzynLjt
[00:34:43 CET] <rkantos> that's what I get from your command + the 1.01x decoding rate
[00:47:12 CET] <AstralStorm> hello
[00:47:19 CET] <AstralStorm> how can I set protocol whitelist for ffplay?
[00:47:36 CET] <AstralStorm> to avoid locally downloaded playlist from tossing things like: [http @ 0x7fe8f80f53a0] Protocol not on whitelist 'file,crypto'!
[00:47:43 CET] <AstralStorm> (and then failing)
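A sketch of the usual fix for what AstralStorm is hitting (the playlist filename is invented): the default whitelist for a local file is `file,crypto`, so every protocol the playlist's entries actually use has to be listed explicitly.

```shell
# Hypothetical local playlist; extend the whitelist with whatever the
# playlist's segment URLs use (here: http/https over tcp/tls).
ffplay -protocol_whitelist file,crypto,data,http,https,tcp,tls playlist.m3u8
```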
[00:58:10 CET] <rkantos> JEEB: lol omg
[00:58:16 CET] <rkantos> I simply cannot believe it
[00:58:26 CET] <rkantos> I think I will burn every shit USB-cable around :D
[00:58:34 CET] <JEEB> lol
[00:58:34 CET] <rkantos> it was a short one too
[00:58:57 CET] <rkantos> now even OBS works PERFECT
[00:59:08 CET] <rkantos> (still via crappy usb hub though)
[00:59:55 CET] <rkantos> hmmh.. it is slowing down again however
[01:03:07 CET] <rkantos> JEEB: did you find anything in regards to the jpeg decoder? It would still help with battery life...
[01:03:33 CET] <JEEB> it was removed due to JPEG being rather simple to decode with the CPU
[01:03:46 CET] <JEEB> during the QSV re-work
[01:03:51 CET] <rkantos> meh.
[01:04:11 CET] <JEEB> the original maintainer seems to have left, and he was the only one who cared about JPEG (or he just implemented everything he found)
[01:04:27 CET] <rkantos> Well.. It worked for a brief moment here with OBS, but after like 2 minutes it starts slowing down with 720p
[01:20:29 CET] <olgson> Help, Help everybody. Another problem, this time with YT streaming. I've a video clip which I would like to play in a loop for some time. I'm testing right now short clip (10s) which I want to loop. File was already encoded before:
[01:20:29 CET] <olgson>  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 4007 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
[01:20:29 CET] <olgson>  Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, s16p, 320 kb/s (default)
[01:20:29 CET] <olgson> What I would like to do now is to stream it without recompression: ffmpeg -re -stream_loop 1 -i "$SOURCE" -codec copy -f flv "$YOUTUBE_URL/$KEY"
[01:20:41 CET] <olgson> The problem I get :
[01:20:41 CET] <olgson> [flv @ 0x2299e00] Failed to update header with correct duration.ate=1158.8kbits/s speed=   1x
[01:20:41 CET] <olgson> [flv @ 0x2299e00] Failed to update header with correct filesize.
[01:22:23 CET] <olgson> + one more case - streaming of 1st run of clip works fine - I have slight problem after clip finishes and FFMPEG tries to do something between playing it once again (during clip streaming speed=   1x , after clip finishes speed rises to 2x / 3x and it "hangs" for some time before it start to stream it again)
[01:22:51 CET] <olgson> it might be correlated with header updating - my guess
[01:29:18 CET] <olgson> [libmp3lame @ 0x3227be0] Trying to remove 1152 samples, but the queue is empty
[01:29:18 CET] <olgson> [libmp3lame @ 0x3227be0] Trying to remove 1152 more samples than there are in the queue
[01:29:24 CET] <olgson> this is interesting ;)
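A commonly suggested variant of olgson's command, not verified against YouTube here: loop indefinitely on the input side with `-stream_loop -1`, so ffmpeg keeps one continuous FLV mux instead of finishing the file and restarting (which is where the header-update warnings and the speed spike appear). `$SOURCE`, `$YOUTUBE_URL`, and `$KEY` are the variables from the original command.

```shell
# -1 = loop the input forever; -c copy keeps the no-recompression requirement
ffmpeg -re -stream_loop -1 -i "$SOURCE" -c copy -f flv "$YOUTUBE_URL/$KEY"
```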
[02:18:50 CET] <faLUCE> where can I find a simple mux example for libavformat? I'm seeing that muxing.c inside doc/examples is a total mess, and it has changed in several releases of ffmpeg
[02:28:36 CET] <klaxa> faLUCE: maybe remuxing.c is easier i think i took a lot of code from there
[02:33:04 CET] <faLUCE> klaxa: you are right, it's much shorter... but what did they do in muxing.c ?? they made a mess. They used pointless wrapper structs and the variables are a disaster
[07:11:42 CET] <thebombzen> TD-Linux: I meant opus inside of a streaming container with raw udp://
[07:11:43 CET] <thebombzen> but yea
[07:12:21 CET] <TD-Linux> yeah, well RTP is a streaming container just like anything else, so either would work (nesting a container in RTP is of course useless and mostly not possible)
[07:12:44 CET] <thebombzen> although I feel there should be nothing wrong with udp:// if you mux opus audio in something like mpegts or matroska
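A sketch of the udp:// idea thebombzen describes, on loopback so both ends are concrete (input filename invented; assumes an ffmpeg recent enough to mux Opus into mpegts):

```shell
# Sender: mux opus into mpegts and push it over plain UDP
ffmpeg -re -i input.mkv -vn -c:a libopus -f mpegts udp://127.0.0.1:1234
# Receiver: open the same port and play
ffplay udp://127.0.0.1:1234
```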
[07:13:03 CET] <thebombzen> I've had issues getting rtp to work in that it always complains about wanting some text file
[07:13:31 CET] <TD-Linux> some packetizations for rtp require out of band data to initialize the codec
[07:16:30 CET] <thebombzen> but for me RTP always complains that it requires an SDP file
[07:18:19 CET] <thebombzen> I always thought that RTP didn't require an SDP file, but when I try it it always gives me:
[07:18:20 CET] <thebombzen> [rtp @ 0x7f2948000920] Unable to receive RTP payload type 96 without an SDP file describing it
[07:20:41 CET] <TD-Linux> thebombzen, oh, so yeah RTP specifies the format of the data by a number, and there is a number to codec name mapping in the SDP. without a SDP you need to manually specify what you're receiving
[07:20:53 CET] <TD-Linux> I don't know how to do it in ffmpeg. maybe the only way is with a SDP file.
[07:21:18 CET] <thebombzen> interestingly, it appears that if I run: ffmpeg -i input -f rtp rtp://...
[07:21:25 CET] <thebombzen> it spits out an SDP file on standard out
[07:21:38 CET] <thebombzen> I didn't notice that before. But how can I tell the rtp demuxer to read that SDP file?
[07:21:53 CET] <thebombzen> ffmpeg-protocols only mentions SDP for rtsp - not rtp
[07:24:33 CET] <thebombzen> another question - if I'm doing point-to-point between two places I control - should I be using rtp or rtsp?
[07:25:47 CET] <TD-Linux> thebombzen, hmm, I'm not sure (I usually don't use ffmpeg for this). maybe the rtsp mode will "just work" but not sure exactly what ffmpeg does
[07:26:05 CET] <thebombzen> well -f rtsp went to tcp so it autofailed
[07:26:19 CET] <TD-Linux> ah okay so you have to figure out a way to specify it for rtp protocol
[07:26:31 CET] <TD-Linux> maybe it's not possible from cli and only from libavf
[07:26:45 CET] <thebombzen> yea. I looked at rtp in ffmpeg-protocols and ffmpeg-formats at it doesn't say
[07:27:10 CET] <TD-Linux> thebombzen, rtsp adds the useful feature of being able to start and stop the RTP stream from the receiver. but if you control both endpoints it's not necessary.
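Putting the pieces of this exchange together (addresses and filenames invented): the SDP that ffmpeg prints on stdout when muxing to `-f rtp` is exactly what the receiver needs, and reading a local .sdp file requires the protocol whitelist.

```shell
# Sender: -f rtp prints the session's SDP on stdout; capture it to a file
ffmpeg -re -i input.mkv -vn -c:a libopus -f rtp rtp://127.0.0.1:5004 > stream.sdp
# Receiver: the SDP supplies the payload-type 96 -> codec mapping
ffplay -protocol_whitelist file,udp,rtp stream.sdp
```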
[10:46:12 CET] <ZexaronS> hello
[10:46:29 CET] <ZexaronS> I heard someone said that I should be deinterlacing to full-frame rate, 50 FPS
[10:46:40 CET] <ZexaronS> then someone said that double-frame rate is called BOB
[10:47:02 CET] <ZexaronS> but I don't have much experience; I didn't see anything wrong with the 25 FPS I was doing for like 10 files just now
[10:48:56 CET] <ZexaronS> so I'll do a test and see
[10:49:10 CET] <ZexaronS> with more FPS, could there be interpolation ?
[10:49:40 CET] <ZexaronS> cause if this is just a perceived effect of interpolation, I don't want it since it'll make the transcoding longer, these things aren't that important
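For what ZexaronS is asking: with the yadif deinterlacer, mode 0 merges each field pair into one 25 fps frame and mode 1 ("bob") outputs one frame per field, giving 50 fps; neither mode does motion interpolation, so no new in-between motion is invented (filenames are placeholders).

```shell
ffmpeg -i input-25i.mkv -vf yadif=1 -c:v libx264 output-50p.mkv  # field rate, 50 fps
ffmpeg -i input-25i.mkv -vf yadif=0 -c:v libx264 output-25p.mkv  # frame rate, 25 fps
```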
[10:51:38 CET] <hayha> i found a huge bug in VLC and they are not willing to fix it
[10:58:03 CET] <Bradan> hi
[10:58:24 CET] <Bradan> I'm trying to stream my tv card input via rtmp over the network, however the audio part has breaks
[10:58:33 CET] <Bradan> every 2 seconds it pauses for 1 second
[10:59:16 CET] <Bradan> I use this command: ffmpeg -threads 2 -f alsa -ac 2 -i hw:1,0 -f v4l2 -channel 1 -framerate 25 -i /dev/video0 -s 960:540 -vcodec libx264 -acodec aac -aac_coder fast -preset veryfast -maxrate 2048k -f flv rtmp://localhost:1935/live
[11:01:11 CET] <Bradan> I cannot even append a -b:a 96k or similar to change the 32k audio bitrate, because then the video stream will never be recognized by the client
[11:02:08 CET] <kerio> what's the client?
[11:02:19 CET] <Bradan> vlc
[11:02:48 CET] <Bradan> the rtmp server is the nginx rtmp module
[11:03:03 CET] <Bradan> I tried to use ffserver, but the quality was way worse
[11:03:33 CET] <kerio> that sounds super weird
[11:03:36 CET] <kerio> try playing back with mpv?
[11:03:46 CET] <kerio> is it just the audio that stops?
[11:03:57 CET] <Bradan> yes just the audio
[11:04:09 CET] <Bradan> the video is almost constant and never has breaks
[11:04:36 CET] <thebombzen> note that your command is encoding the video at crf 23 with a cap
[11:04:55 CET] <thebombzen> it's possible that the audio breaks if the video bitrate spikes
[11:05:38 CET] <thebombzen> you might want to target a video bitrate in addition to -maxrate
[11:06:42 CET] <BtbN> why would the audio care about the video bitrate?
[11:08:46 CET] <thebombzen> it shouldn't but it's possible that there's a cap on the total bitrate
[11:09:03 CET] <thebombzen> and if the video uses almost all of it there might not be room for 96k
[11:11:43 CET] <Bradan> eww I tried to play the stream with ffplay before lunch, now it still plays the audio ... have to kill pulse I think
[11:13:06 CET] <BtbN> what? If it still plays the stream, it's still running.
[11:18:28 CET] <Bradan> BtbN: no it was just in the audio cache
[11:19:20 CET] <Bradan> well I changed the video bitrate now, it doesn't help.
[11:21:09 CET] <Bradan> with some configurations I get an alsa buffer xrun sometimes, but not with all of them
[11:21:37 CET] <BtbN> is it running at real time speeds?
[11:21:42 CET] <BtbN> so, x1.00 ideally
[11:21:47 CET] <Bradan> yes it is
[11:25:36 CET] <debianuser> Bradan: alsa xrun usually means you don't have enough CPU or bandwidth to read audio from alsa buffer in time, for example, when video encoding is set to some slow profile and you don't have enough CPU to encode that in real time.
[11:31:05 CET] <Bradan> hmm okay, now it works with ffplay
[11:31:19 CET] <Bradan> debianuser: thanks for the explanation.
[11:33:33 CET] <kerio> Bradan: if the other end is something you control, why not vp9/opus?
[11:33:41 CET] <kerio> or h264/opus
[11:34:44 CET] <Bradan> I tried libvpx this morning, but it was much slower than mpeg1video and mp2.
[11:35:14 CET] <BtbN> of course it is, it's over a decade more advanced.
[11:35:43 CET] <Bradan> I wondered myself, maybe it tried to use some gpu functionalities which are not available
[11:36:32 CET] <BtbN> no
[11:36:36 CET] <furq> no it's just slow
[11:36:41 CET] <furq> x264 should be faster than mpeg2 though
[11:36:50 CET] <kerio> and will result in a MASSIVE quality improvement
[11:36:52 CET] <furq> or a hardware h264 encoder if you have a recent-ish cpu or gpu
[11:37:01 CET] <kerio> furq: but those are shit at low bitrates
[11:37:09 CET] <furq> well yeah but so is mpeg2
[11:37:14 CET] <kerio> fair
[11:37:53 CET] <kerio> Bradan: is this over the internet
[11:37:57 CET] <furq> oh nvm you're already using x264
[11:38:02 CET] <Bradan> no, but wireless lan
[11:38:44 CET] <kerio> you can push about one and a half orders of magnitude more data than over the internet, realistically
[11:39:06 CET] <furq> i can't imagine 960x540 h264 is exceeding your wlan speeds
[11:39:16 CET] <kerio> oh this is even standard def video
[11:39:26 CET] <kerio> crank dat bandwidth
[11:40:35 CET] <Bradan> yep I think vlc has quite some problems, and with ffplay and mplayer I got it working now with this configuration on the server:
[11:40:46 CET] <kerio> woo
[11:40:51 CET] <Bradan> ffmpeg -thread_queue_size 4096 -threads 2 -f alsa -ac 2 -i hw:1,0 -f v4l2 -channel 1 -framerate 25 -i /dev/video0 -s 960:540 -vcodec libx264 -acodec aac -b:v 512k -ar 44100 -preset veryfast -maxrate 2048k -f flv rtmp://localhost:1935/live
[11:40:52 CET] <kerio> don't use mplayer tho
[11:40:53 CET] <kerio> use mpv
[11:41:28 CET] <kerio> ...shouldn't you specify the audio rate on input
[11:41:33 CET] <kerio> rather than in the middle
[11:43:30 CET] <Bradan> hmm it works and on the client it says 44100
[11:43:47 CET] <hayha> i found a huge bug in VLC and they are not willing to fix it
[11:43:55 CET] <kerio> Bradan: yeah but why
[11:44:26 CET] <kerio> your options are all over the place ._.
[11:45:01 CET] <Bradan> well its just -ar
[11:45:12 CET] <Bradan> the other options aren't really movable
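A sketch of the reordering kerio is pushing for, using Bradan's own values: input options grouped before their `-i`, output options after the last input. Whether the ALSA device actually accepts `-ar` as an input option needs testing on this hardware (Bradan reports sample-format errors with some input-side audio options).

```shell
ffmpeg -f alsa -ar 44100 -ac 2 -thread_queue_size 4096 -i hw:1,0 \
       -f v4l2 -channel 1 -framerate 25 -i /dev/video0 \
       -c:v libx264 -preset veryfast -b:v 512k -maxrate 2048k \
       -c:a aac -s 960x540 \
       -f flv rtmp://localhost:1935/live
```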
[11:45:22 CET] <olgson> Hello all ;) My yesterday question
[11:45:27 CET] <olgson> <olgson> Help, Help everybody. Another problem, this time with YT streaming. I've a video clip which I would like to play in a loop for some time. I'm testing right now short clip (10s) which I want to loop. File was already encoded before:
[11:45:28 CET] <olgson>  <olgson>  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 4007 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
[11:45:28 CET] <olgson>  <olgson>  Stream #0:1(und): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, stereo, s16p, 320 kb/s (default)
[11:45:28 CET] <olgson>  <olgson> What I would like to do now is to stream it without recompression: ffmpeg -re -stream_loop 1 -i "$SOURCE" -codec copy -f flv "$YOUTUBE_URL/$KEY"
[11:45:29 CET] <Bradan> maybe the acodec
[11:45:30 CET] <olgson>  <olgson> The problem I get :
[11:45:30 CET] <kerio> dear lord
[11:45:32 CET] <olgson>  <olgson> [flv @ 0x2299e00] Failed to update header with correct duration.ate=1158.8kbits/s speed=   1x
[11:45:35 CET] <olgson>  <olgson> [flv @ 0x2299e00] Failed to update header with correct filesize.
[11:45:37 CET] <olgson>  <olgson> + one more case - streaming of 1st run of clip works fine - I have slight problem after clip finishes and FFMPEG tries to do something between playing it once again (during clip streaming speed=   1x , after clip finishes speed rises to 2x / 3x and it "hangs" for some time before it start to stream it again)
[11:45:41 CET] <olgson>  <olgson> it might be correlated with header updating - my guess
[11:45:43 CET] <olgson>  <olgson> [libmp3lame @ 0x3227be0] Trying to remove 1152 samples, but the queue is empty
[11:45:45 CET] <olgson>  <olgson> [libmp3lame @ 0x3227be0] Trying to remove 1152 more samples than there are in the queue
[11:45:54 CET] <olgson> oh come on - 9-10 lines ;(
[11:45:56 CET] <olgson> sorry
[11:46:11 CET] <furq> i don't mind so much if it's posting themselves asking a question on irc
[11:46:12 CET] <kerio> Bradan: you have -c:v -c:a -b:v -ar -preset
[11:46:31 CET] <kerio> also you're setting the size after the video input?
[11:46:33 CET] <kerio> how does that even work
[11:46:46 CET] <kerio> are you actually scaling
[11:46:47 CET] <Bradan> well I am rescaling
[11:46:59 CET] <kerio> but why :<
[11:48:59 CET] <Bradan> because it is delivering the data in 5:4 not 16:9 like how it is supposed to be
[11:49:11 CET] <kerio> that's when you setsar
[11:49:29 CET] <furq> if your source is h264 you can set -aspect to do that without reencoding
[11:49:40 CET] <furq> although i'm guessing whatever you're streaming to is just ignoring that flag
[11:49:41 CET] <kerio> his source is v4l2 apparently
[11:49:46 CET] <furq> oh
[11:49:58 CET] <kerio> idk if that has an encoding
[11:50:01 CET] <kerio> it probably does
[11:50:04 CET] <bencoh> then he can set a different dar/sar/aspect after encoding anyway
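What kerio and bencoh are suggesting instead of `-s`: keep the capture size and just restamp the display aspect on the output. An untested sketch with Bradan's values:

```shell
# -aspect sets the output display aspect ratio without scaling pixels;
# the filter form would be -vf setdar=16/9
ffmpeg -f alsa -ac 2 -i hw:1,0 -f v4l2 -channel 1 -framerate 25 -i /dev/video0 \
       -c:v libx264 -preset veryfast -aspect 16:9 -c:a aac \
       -f flv rtmp://localhost:1935/live
```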
[11:50:50 CET] <Bradan> if I put these audio things beforehand it says sample format 0x15002 is not supported
[11:51:11 CET] <kerio> can't you just tell alsa to give you the correct sample rate
[11:51:46 CET] <Bradan> hmm how?
[11:51:57 CET] <kerio> idk, it's a sound card
[11:52:02 CET] <kerio> they tend to support stuff like that
[11:52:57 CET] <furq> olgson: are those flv warnings actually causing an issue
[11:54:24 CET] <olgson> furq, yes... stream stops :)
[12:09:12 CET] <Bradan> thank you very much guys, you've helped me a lot
[12:33:06 CET] <olgson> furq, any idea how to fix my problem ? :)
[12:48:19 CET] <hayha> i found a huge bug in VLC and they are not willing to fix it
[12:49:00 CET] Last message repeated 1 time(s).
[12:49:20 CET] <BtbN> A bug without any samples is not a valid bug.
[12:49:25 CET] <olgson> :)
[12:54:02 CET] <faLUCE> Hello. Is it possible to make the http server In http_multiclient.c  NON blocking? I found only the avio_accept(server, &client) function, but not a poll function
[12:56:53 CET] <c_14> I think there's avio_poll?
[12:57:42 CET] <c_14> hmm, maybe not
[12:58:09 CET] <faLUCE> c_14: I found AVIO_FLAG_NONBLOCK but the doxy says "Warning: non-blocking protocols is work-in-progress; this flag may be silently ignored."
[12:59:04 CET] <faLUCE> in addition, it's not referenced by avio_accept
[12:59:06 CET] <c_14> probably have to thread it then
[12:59:16 CET] <c_14> That stuff isn't particularly mature
[12:59:21 CET] <faLUCE> c_14: this would be horrible
[12:59:36 CET] <faLUCE> c_14: I hate threading for separating stuff
[13:04:18 CET] <c_14> Like I said, it's not very mature.
[13:04:25 CET] <c_14> More of a proof of concept
[13:06:50 CET] <faLUCE> c_14: I see. Then I wonder how to stream h264 live frames through http. I created a program which grabs frames from a v4l2 device and puts them into a mpegts container. Now, I wonder if the webserver has only to give them in response to a GET request of the client, or does it do more stuff?
[13:08:16 CET] <faLUCE> (my program encodes the frames too, obviously)
[13:09:25 CET] <c_14> Depends on the client, that should work for anything libav*/cURL/wget
[13:09:56 CET] <c_14> won't work for web browsers
[13:10:08 CET] <faLUCE> c_14: I need it to work with a player, like vlc
[13:10:15 CET] <c_14> should
[13:11:02 CET] <faLUCE> c_14: but, should I put a sequence of frames in response of ONE GET  request, or one frame for each GET request?
[13:11:37 CET] <c_14> all frames for one GET
[13:11:51 CET] <faLUCE> c_14: thanks
[13:12:45 CET] <c_14> The client will usually only send 1 GET request
[13:12:46 CET] <faLUCE> c_14: then, basically, I just have to feed the callback function of the webserver with these frames
[13:13:11 CET] <faLUCE> now I wonder which library could I use to add this webserver (but this is out of topic)
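The model c_14 describes, seen from the client side (the URL is invented): one GET, and the body is just an endless mpegts byte stream.

```shell
# ffplay takes "-" for stdin; -f mpegts is just belt-and-braces,
# content probing usually identifies the stream on its own
curl -s http://192.0.2.10:8080/live.ts | ffplay -f mpegts -
```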
[13:13:15 CET] <furq> i've never had to implement this but i'm guessing you send the whole thing as a single chunked response
[13:13:30 CET] <furq> also i take it you're only using http because you want to actually write this yourself
[13:13:43 CET] <furq> otherwise there are better choices
[13:13:55 CET] <faLUCE> furq: I already implemented some rtsp servers
[13:14:01 CET] <faLUCE> but I need a http one
[13:14:23 CET] <faLUCE> furq: what do you mean with "single chunked response" ?
[13:14:28 CET] <c_14> https://en.wikipedia.org/wiki/Chunked_transfer_encoding
[13:14:29 CET] <furq> https://en.wikipedia.org/wiki/Chunked_transfer_encoding
[13:14:32 CET] <furq> snap
[13:15:27 CET] <faLUCE> tnx
[13:16:27 CET] <c_14> More importantly https://tools.ietf.org/html/rfc7230#section-4.1 (in case you have to implement it yourself)
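The RFC 7230 framing itself is tiny; a sketch of what one chunked response looks like on the wire (the payload bytes are placeholders, not real mpegts):

```shell
printf 'HTTP/1.1 200 OK\r\n'
printf 'Content-Type: video/mp2t\r\n'
printf 'Transfer-Encoding: chunked\r\n\r\n'
printf '7\r\nmpegts \r\n'   # one chunk: hex size, CRLF, 7 payload bytes, CRLF
printf '5\r\ndata!\r\n'     # another chunk: 5 more payload bytes
printf '0\r\n\r\n'          # zero-length chunk terminates the response
```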
[13:17:41 CET] <faLUCE> c_14: then I have to find a library which manages chunked transfer responses, right?
[13:19:10 CET] <c_14> You have (at least) 2 choices, either use a standard http server and hook yourself into its cgi (or fcgi, wswhatevercgi) interface, or use a library which you can use to "emulate" a http server
[13:21:02 CET] <faLUCE> c_14: if I use a standard http server, how can I push frames to it? through cgi?
[13:21:12 CET] <furq> i'm not sure how that'll work with multiple clients
[13:21:28 CET] <furq> maybe something like libevent would be better
[13:22:23 CET] <faLUCE> furq: I would use a http server library for that
[13:22:29 CET] <furq> well yeah libevent has one of those
[13:23:12 CET] <faLUCE> furq: something like that: https://github.com/libevent/libevent/blob/master/sample/http-server.c
[13:23:31 CET] <furq> normally i wouldn't use C for this, but you've probably not got much choice if you need to feed it with libav*
[13:23:49 CET] <faLUCE> furq: I could use boost asio
[13:24:01 CET] <faLUCE> what would you use instead of C ?
[13:24:09 CET] <furq> probably go
[13:24:20 CET] <faLUCE> what would you use instead of C ?
[13:24:23 CET] <furq> ^
[13:24:30 CET] <faLUCE> boost asio?
[13:24:33 CET] <furq> go
[13:24:57 CET] <furq> https://golang.org/doc/
[13:24:57 CET] <furq> that
[13:24:59 CET] <faLUCE> furq: sorry but I don't understand you :-)
[13:25:02 CET] <faLUCE> ah!
[13:25:06 CET] <faLUCE> it's a library
[13:25:09 CET] <furq> no it's a language
[13:25:15 CET] <c_14> All these fancy languages and their fancy names
[13:25:33 CET] <faLUCE> no, no, no.... I could write something in python for that
[13:25:55 CET] <faLUCE> but I already coded the grabber and the encoder in C
[13:25:55 CET] <furq> well yeah i wouldn't use it for this because wrapping the C api calls you'll need is probably more trouble than it's worth
[13:26:07 CET] <furq> and i'm not aware of any halfway decent bindings to libavcodec et al
[13:26:20 CET] <furq> certainly not for go
[13:26:55 CET] <JEEB> also go's ffi slows things down unless you have a wrapper layer
[13:26:59 CET] <furq> i guess you could maybe push the stream out of the encoder to a fifo and then have the server read from that
[13:27:04 CET] <furq> or something along those lines
[13:27:29 CET] <furq> not with an out of the box httpd but it would at least save you from having to write an httpd in C
[13:27:42 CET] <furq> which doesn't sound fun
[13:27:44 CET] <faLUCE> furq: yes, I was thinking about a fifo. So the webserver only reads it as a file?
[13:28:16 CET] <c_14> the problem with the fifo is that it'll block if nobody's reading and only allows 1 reader max
[13:28:17 CET] <furq> something like that
[13:28:29 CET] <furq> well yeah you can't have the httpd serve the fifo directly
[13:28:37 CET] <furq> it would have to read it into a buffer and then serve that buffer
[13:28:49 CET] <c_14> preferably a ring buffer
[13:29:05 CET] <faLUCE> then a better thing would be writing into a FILE, not a FIFO
[13:29:29 CET] <c_14> the file will grow until it's infinitely big
[13:29:30 CET] <furq> that seems like a waste of disk space
[13:29:38 CET] <faLUCE> c_14: yes, I see
[13:29:42 CET] <c_14> and any reader will get everything from the beginning
[13:29:47 CET] <furq> yeah that too
[13:30:15 CET] <faLUCE> then what could I use, if I use an external http server ?
[13:30:27 CET] <furq> i have no idea how you'd do this with an out of the box httpd
[13:30:55 CET] <furq> but if you just mean a separate process then i'd have thought a fifo would work
[13:30:56 CET] <faLUCE> furq: then libevent is the simpler choice, at the moment
[13:30:57 CET] <c_14> C libav magic with unix listening socket, server cgi connects to unix socket and passes data
[13:31:20 CET] <furq> yeah a unix socket would work too
[13:31:30 CET] <c_14> But at that point you already have listening code
[13:31:40 CET] <c_14> And if the libevent http api isn't that much harder it's not worth the effort
[13:31:49 CET] <kerio> can't you just use HLS
[13:31:52 CET] <furq> cgi will spawn a process for every client, so it's probably not great for streaming
[13:32:02 CET] <furq> but i guess it depends how many clients you expect to have
[13:33:22 CET] <furq> kerio: i assume the answer to that is no but i didn't ask
[13:33:23 CET] <c_14> FastCGI only has one long-lived process iirc
[13:33:43 CET] <furq> depends if you're using a cgi wrapper or not
[13:33:49 CET] <kerio> faLUCE: any particular reason you don't want to use HLS?
[13:33:54 CET] <faLUCE> kerio: HLS is proprietary
[13:33:56 CET] <furq> i guess there are fastcgi libs out there
[13:33:59 CET] <kerio> ...is it
[13:34:10 CET] <c_14> I think HLS is an open standard?
[13:34:22 CET] <kerio> i mean, mpegts is patented, but they're adding fragmented isobmff to it
[13:34:28 CET] <furq> it requires proprietary codecs and containers
[13:34:39 CET] <furq> but i'd assume those are being used anyway
[13:34:47 CET] <kerio> he mentioned mpegts already
[13:34:49 CET] <furq> yeah
[13:35:05 CET] <c_14> https://tools.ietf.org/html/draft-pantos-http-live-streaming-20
[13:35:06 CET] <kerio> you could use DASH instead
[13:35:14 CET] <kerio> which is basically just a worse version of HLS
[13:35:16 CET] <furq> hls is probably more complicated if he specifically wants to write his own server
[13:35:19 CET] <kerio> :^)
[13:35:27 CET] <kerio> furq: well not really
[13:35:34 CET] <kerio> because then the concerns split quite a lot
[13:35:36 CET] <c_14> Well, ffmpeg outputs hls. Just feed the files to a http server
[13:35:40 CET] <furq> actually yeah
[13:35:42 CET] <furq> what c_14 said
[13:35:42 CET] <kerio> one part has to generate the files, one part has to serve them through http
[13:36:05 CET] <kerio> there's also nginx-rtmp which is a pretty nice rtmp-to-hls converter
[13:36:32 CET] <furq> well yeah that's obviously what i'd use
[13:36:36 CET] <kerio> and most importantly, the part that has to deal with the clients is just a http server that's dealing with static files
[13:36:44 CET] <furq> i assume there are ~reasons~ though
[13:36:50 CET] <kerio> which is mostly a solved problem nowadays, i hope
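A minimal sketch of the split c_14 and kerio describe (paths and segment settings are examples): ffmpeg writes the playlist and segments, and any static-file HTTP server serves them.

```shell
# ffmpeg generates the HLS playlist + rotating segment files
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset veryfast \
       -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/stream/live.m3u8
# ...and serving them is a solved problem, e.g.:
# python3 -m http.server --directory /var/www/stream 8000
```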
[13:37:22 CET] <c_14> HLS does have some downsides though
[13:37:25 CET] <c_14> mainly latency, etc
[13:37:49 CET] <kerio> tru dat
[13:38:12 CET] <kerio> what's the "modern version" of rtmp?
[13:38:16 CET] <furq> rtmp
[13:38:22 CET] <kerio> ._.
[13:38:26 CET] <furq> yes
[13:38:31 CET] <kerio> is flv actually that good of a container
[13:38:37 CET] <furq> i mean it does the job
[13:38:38 CET] <kerio> can it fit vp9/opus
[13:38:41 CET] <furq> nope
[13:38:49 CET] <furq> mpegts over rtsp can i think
[13:38:52 CET] <furq> but rtsp is older than rtmp
[13:38:56 CET] <kerio> press F to pay your respects
[13:39:03 CET] <furq> technology is cool and good
[13:39:18 CET] <furq> i love javascript
[13:39:26 CET] <furq> i love it because i love watching processes getting oom killed
[13:39:33 CET] <kerio> mpeg-dash was a mistake
[13:39:54 CET] <c_14> furq: do you also like C++ with boost?
[13:40:00 CET] <furq> probably!
[13:40:11 CET] <furq> i've managed to live my life without ever writing a line of C++ and i hope that never changes
[13:40:21 CET] <furq> it sounds dreadful
[13:42:04 CET] <furq> my favourite thing about node is that there is an option to put an upper limit on the heap size
[13:42:16 CET] <furq> which very helpfully crashes the process if it exceeds the limit
[13:42:30 CET] <furq> it was very thoughtful of them to put that in
[13:42:34 CET] <c_14> Well, what else should it do?
[13:42:36 CET] <c_14> Return an error?
[13:42:43 CET] <furq> i mean it should just not exist
[13:42:44 CET] <c_14> Then you'd have to handle that.
[13:42:51 CET] <c_14> Much better to just occasionally restart the praocess.
[13:42:54 CET] <c_14> *process
[13:43:14 CET] <furq> i assumed it was a way of setting how aggressive the gc should be
[13:43:57 CET] <c_14> Well, isn't it? It just calls the OS and tells it to collect everything.
[13:44:26 CET] <furq> you are much more wise than i could have ever imagined
[13:48:49 CET] <klaxa> re: http_accept() non-blocking: you can set a timeout on the avio context, if you set it really low it's basically non-blocking i guess?
[13:51:24 CET] <faLUCE> klaxa: more precisely, I need an event-based (select) avio, not a simple "non blocking" avio
[13:51:56 CET] <furq> yeah there's not much point having nonblocking accept if you don't have an equivalent of poll/select
[13:53:07 CET] <faLUCE> c_14: about your idea of an external webserver. In this case I have to configure it so that it receives data on a socket and it redirects to the client this data?
[13:54:12 CET] <furq> something like that
[13:54:26 CET] <furq> read the fifo/unix socket into a buffer, then serve clients from that buffer
[13:54:49 CET] <faLUCE> furq: I wonder if this is a common feature of common webservers
[13:54:53 CET] <furq> it isn't
[13:55:00 CET] <furq> but it's probably trivial to do with python or something
[13:56:28 CET] <faLUCE> furq: why not libevent?
[13:56:37 CET] <furq> that works too if you want to write this in C
[13:56:44 CET] <furq> i'm not sure why you would though
[13:57:18 CET] <faLUCE> furq: I already wrote the grabber+encoder+scaler+muxer in C, using libav
[13:57:33 CET] <faLUCE> now I need the streaming part
[13:57:34 CET] <furq> well yeah but if the server is a separate process then you might as well use whatever's easiest
[13:57:43 CET] <furq> otherwise you might as well have it all happen in the same process
[13:58:31 CET] <faLUCE> furq: then, does libevent easily manage a single loop for multiple requests ?
[13:58:42 CET] <furq> i've never used it, i just know it has an httpd
[13:58:52 CET] <furq> but i'm pretty sure the whole point of it is that it can do that
[13:59:59 CET] <faLUCE> ok, then I have to find an example for creating a webserver and handle clients in a single event-based loop
[14:04:23 CET] <hayha> i was there there was drama between ffmpeg and libav,  is this true?
[14:04:34 CET] Last message repeated 1 time(s).
[14:04:34 CET] <hayha> i was told there was drama between ffmpeg and libav,  is this true?
[14:05:00 CET] <hayha> furq is this true?
[14:05:07 CET] <furq> is what true
[14:05:11 CET] <hayha> i was told there was drama between ffmpeg and libav,  is this true?
[14:05:19 CET] <furq> sorry i didn't quite catch that
[14:05:25 CET] <hayha> i was told there was drama between ffmpeg and libav,  is this true?
[14:05:25 CET] <furq> could you say it four more times
[14:05:32 CET] <hayha> okay sorry, i will stop
[14:05:54 CET] <hayha> furq is this true?
[14:07:13 CET] <iive> yes, there was a drama
[14:07:22 CET] <iive> and probably there still is.
[14:07:26 CET] <hayha> what happened
[14:07:52 CET] <hayha> and which side are you on?
[14:08:16 CET] <hayha> i don't want to assume you are on ffmpeg' side just because you are in #ffmpeg
[14:08:29 CET] <hayha> orphis do you nkow about the drama?
[14:09:37 CET] <iive> i think it was 18 jan 2011, when a group of ffmpeg developers took over the project. the group included the 3 server admins. they changed the way commits are approved and literally took write access from the then project leader michaelni and everybody who didn't support them.
[14:10:37 CET] <iive> after 3 months, michael and the developers who supported him took back the project by setting up their own server, since the trademark and domain were controlled by their supporters
[14:11:01 CET] <hayha> okay
[14:11:06 CET] <hayha> can they just do that ?
[14:11:15 CET] <Orphis> Well, they did do that
[14:11:36 CET] <iive> at that point the take-over developers were forced to change their project name. We call it a fork, but they (probably) still insist they are the "original" and "true" ffmpeg.
[14:12:00 CET] <hayha> iive so would you say ffmpeg got better or worse after  2011 jan 18
[14:12:30 CET] <furq> it got better because ffmpeg takes most of the libav patches while libav ignores or rewrites all the ffmpeg patches
[14:12:35 CET] <iive> well, it did change how things are done.
[14:13:00 CET] <furq> didn't they spend a bunch of money paying someone to write libavresample because they didn't want to use libswresample
[14:13:04 CET] <hayha> furq so you are on ffmpeg side?
[14:13:10 CET] <iive> a lot of things were eased; it got much easier to get code into the project and this resulted in massive growth.
[14:14:48 CET] <iive> and yes, ffmpeg merged almost everything from libav, while libav merged very few things and it also redid a lot of simpler fixes.
[14:15:05 CET] <iive> so libav stagnated.
[14:15:40 CET] <hayha> so which side are you on?
[14:15:43 CET] <furq> it was probably good in the long term that ffmpeg got a bit of a kick up the arse
[14:15:50 CET] <iive> a lot of their supporters left the projects.
[14:15:54 CET] <furq> but libav did pull some stupid bullshit
[14:16:01 CET] <furq> which makes it hard for me to respect them
[14:16:18 CET] <iive> libav should have done proper fork from the start, on their own server
[14:16:20 CET] <furq> spending donations on rewriting libswresample for no reason is pretty bad
[14:16:29 CET] <furq> just because it dropped after the fork
[14:17:00 CET] <iive> it's not donation, it was ffmtech money, some of them are donations but more are money from gpl license settlements
[14:17:39 CET] <furq> that's still pretty bad
[14:21:09 CET] <hayha> iive why can't you give me honest answer
[14:21:14 CET] <iive> hayha: i've always been on ffmpeg side.
[14:21:21 CET] <hayha> ok thanks
[14:21:22 CET] <iive> i thought that's obvious.
[14:21:28 CET] <hayha> didn't want to assume
[14:21:46 CET] <iive> and i might be wrong about some details. e.g. 2 or 3 months... don't remember exactly.
[14:22:11 CET] <hayha> that's fine. date is not important
[14:22:45 CET] <iive> the thing is, all developers tried to work together under the new rules for 2 months and it was a nightmare
[14:22:46 CET] <hayha> i just downloaded libav for first time..  it's not simple as  ffmpeg
[14:23:10 CET] <hayha> ffmpeg is simple  has  3  ff*.exe  files
[14:23:27 CET] <iive> there should be a few av*.exe files.
[14:23:40 CET] <hayha> it has like 15 files
[14:24:01 CET] <hayha> whole bunch of .dll
[14:24:02 CET] <furq> are 12 of them dlls
[14:24:04 CET] <furq> yup
[14:24:09 CET] <furq> i can't imagine what could have happened here
[14:24:27 CET] <hayha> why can't libav  be simple with no dll
[14:24:34 CET] <hayha> like ffmpeg.exe
[14:24:47 CET] <iive> are there .dlls that don't start with libav*.dll ?
[14:24:57 CET] <hayha> yes
[14:25:10 CET] <hayha> av*.dll
[14:25:18 CET] <hayha> swscale.dll
[14:25:19 CET] <furq> im crying
[14:25:32 CET] <iive> it's just a matter of what the person building the source is doing.
[14:25:45 CET] <hayha> how did ffmpeg.exe manage without all those .dll
[14:25:58 CET] <iive> aka, the ffmpeg one used static linking, so each *.exe contains its own copy of the *.dll code.
[14:27:42 CET] <hayha> iive i see, is it harder to do it that way from  dev's point of view
[14:28:09 CET] <iive> no, it just wastes more space (hdd/ram)
[14:28:26 CET] <furq> not on windows
[14:28:40 CET] <hayha> which wastes more space/ram?   static linking or  non static linking
[14:30:00 CET] <hayha> furq  but on non windows it does?
[14:37:03 CET] <juliengeekbcn> Hello all, i'm looking for the right commands to extract a define numbers of screenshots from a video with ffmpeg
[14:37:34 CET] <juliengeekbcn> i searched a lot but never found those commands
[14:38:06 CET] <juliengeekbcn> i have a video.mp4 and i just need the command to extract for example 16 screenshots from this video
[14:38:33 CET] <c_14> ffmpeg -i video.mp4 -f image2 -frames:v 16 out%02d.png
[14:38:43 CET] <c_14> That will output the first 16 frames
[14:39:03 CET] <c_14> If you want specific screenshots, or want them evenly spread out etc. that should be possible but a bit more complicated
[14:39:29 CET] <c_14> https://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video
[14:39:38 CET] <c_14> I knew there was a wiki page somewhere
[14:40:07 CET] <juliengeekbcn> the thing is i need those 16 screenshots to be extracted egual from the total duration of the video
[14:41:09 CET] <c_14> I swear I've written a command that does that for someone before
[14:41:11 CET] <c_14> somewhere
[14:41:13 CET] <c_14> probably
[14:41:17 CET] <furq> me too
[14:41:32 CET] <juliengeekbcn> i've been searching for this command for 2 weeks lol
[14:45:31 CET] <hayha> gnarface> ffmpeg can also use libav fyi    is this statement correct
[14:47:04 CET] <c_14> juliengeekbcn: do you want them spaced exactly throughout the video, or do you want them spread equally throughout the video on "interesting" places (for i.e. thumbnails)
[14:55:06 CET] <hayha> i have this video that works with ffplay.exe but doesn't work with avplay.exe
[14:58:59 CET] <hayha> i have this video that works with ffplay.exe but doesn't work with avplay.exe , what does this mean?
[15:00:44 CET] <c_14> juliengeekbcn: http://vpaste.net/GKtlx, replace $file with $1, and 20 with the amount of images you want, and potentially remove the scale. This will generate x "interesting" images that are evenly spaced around the input. Script written by furq
[15:18:37 CET] <furq> wasn't that broken somehow
[15:19:15 CET] <furq> i use something like that to feed the tile filter and it works, but i remember it not working when you output to image2
[15:20:51 CET] <c_14> I think it might have needed -vsync vfr
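Since the pastebin link may rot, here is a minimal sketch of the "evenly spaced screenshots" idea in Python (my own naming and approach, not furq's script): split the duration into N equal slices and grab one frame at the midpoint of each, using fast input seeking (`-ss` before `-i`). The duration itself would come from ffprobe; only the pure computation is shown.

```python
def shot_times(duration, count):
    """Midpoints of `count` equal slices of a `duration`-second video,
    so shots avoid the very start and very end of the file."""
    return [duration * (i + 0.5) / count for i in range(count)]

def shot_cmd(path, t, index):
    """One ffmpeg invocation: fast-seek to t seconds, grab a single frame.
    Putting -ss before -i makes the seek fast (keyframe-based)."""
    return ["ffmpeg", "-ss", f"{t:.3f}", "-i", path,
            "-frames:v", "1", f"shot{index:02d}.png"]

for i, t in enumerate(shot_times(160.0, 16)):
    print(" ".join(shot_cmd("video.mp4", t, i)))
```

One ffmpeg process per shot is slower than a single filter-graph run, but it sidesteps the `-vsync vfr`/image2 issues mentioned above.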
[15:21:06 CET] <thebombzen> omg
[15:21:29 CET] <thebombzen> are we back to the guy who is like "how do I extract 15 screenshots regularly spaced throughout the file"
[15:21:34 CET] <c_14> That might have only been the one with decimate though?
[15:21:40 CET] <furq> maybe
[15:21:42 CET] <furq> it's been a while
[15:22:11 CET] <thebombzen> who receives teh same answer from three people
[15:22:19 CET] <thebombzen> and then comes back and asks again an hour later
[15:24:43 CET] <thebombzen> c_14: I'm getting a deadlink on that script
[15:25:00 CET] <furq> works for me
[15:25:03 CET] <c_14> works here too
[15:25:03 CET] <thebombzen> nvm my IRC client put the , in the URL
[15:25:08 CET] <furq> oh
[15:25:09 CET] <thebombzen> cause it's weird
[15:26:48 CET] <hayha> i have this video that works with ffplay.exe but doesn't work with avplay.exe , what does this mean?
[15:30:59 CET] <klaxa> that means it's either a bug in avplay or a feature in ffplay
[15:37:48 CET] <hayha> klaxa it works with vlc and ffplay
[15:45:37 CET] <thebombzen> good for you?
[16:56:16 CET] <Ecco> Hey
[16:56:31 CET] <Zypho> I am running ffprobe version N-82786-gc188f35(?) on an .ogg container. Using the -show_streams option my output is returning a r_frame_rate value of 24/1 but the avg_frame_rate is returned as 0/0... It's my understanding I should be using avg_frame_rate as the other is deprecated?
[16:56:35 CET] <Ecco> I'd like to encode a video for easy seeking (H.264 )
[16:56:36 CET] <Zypho> Any insight on this?
[17:11:23 CET] <kepstin> Ecco: not sure what you mean by 'easy seeking'? Do you mean 'fast seeking to arbitrary points on underpowered computers'? If so, use a smaller keyframe interval. Consider using a faster preset - might reduce cpu required to decode.
[17:18:57 CET] <thebombzen> Zypho: that sounds like a bug, because avg_frame_rate should equal r_frame_rate for CFR video
[17:19:42 CET] <thebombzen> sounds like the easiest way to fix that it to just check for avg_frame_rate in your script and if it's bad use r_frame_rate
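thebombzen's fallback is a few lines in practice. A sketch (the field names match ffprobe's `-show_streams` output; the validity test for a "bad" rate is my own):

```python
from fractions import Fraction

def pick_frame_rate(stream):
    """Prefer avg_frame_rate; fall back to r_frame_rate when the average
    is missing or degenerate (e.g. the '0/0' the .ogg above reports)."""
    for key in ("avg_frame_rate", "r_frame_rate"):
        num, _, den = stream.get(key, "0/0").partition("/")
        if num.isdigit() and den.isdigit() and int(num) and int(den):
            return Fraction(int(num), int(den))
    return None  # neither field usable
```

For Zypho's case this returns `Fraction(24, 1)` from r_frame_rate, since avg_frame_rate is 0/0.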
[17:25:33 CET] <durandal_1707> hayha: what file?
[17:25:49 CET] <hayha> durandal_1707 to which question
[17:26:22 CET] <durandal_1707> to avplay not playing certain files
[17:26:25 CET] <hayha> http://www.filedropper.com/handbrakeproblemvideo
[17:26:35 CET] <hayha> it works with ffplay.exe
[17:26:38 CET] <Diag> *handbrake*
[17:26:43 CET] <Diag> *ffmpeg*
[17:28:29 CET] <Diag> *x264*
[17:28:35 CET] <Diag> DONKEH
[17:31:06 CET] <hayha> durandal_1707 does it work?
[17:32:02 CET] <durandal_1707> hayha: what is file about?
[17:32:13 CET] <hayha> just downloaded it
[17:32:31 CET] <durandal_1707> im on phone
[17:32:39 CET] <hayha> oh,  then why did you ask
[17:32:47 CET] <durandal_1707> what codecs?
[17:32:56 CET] <hayha> avc
[17:33:34 CET] <durandal_1707> hmm, why that wouldnt work in avplay?
[17:33:41 CET] <hayha> no idea
[17:33:47 CET] <hayha> it works with ffplay though
[17:56:16 CET] <rkantos> JEEB: do you know if there is any way to check what is constraining ffmpeg? I just don't see cpu usage of over 30% for ffmpeg, while the rtbuffer just keeps filling up...
[18:05:23 CET] <popara> Hello, i have a question regarding ffmpeg license and nonfree. I see that nvidia encoding needs the --enable-nonfree, which means i can't provide the FFmpeg binary i will generate, and i can't use it for commercial purposes. Right? Can i ask how some commercial solutions on the internet provide nvidia encoding and they are based on FFmpeg ?
[18:06:27 CET] <bencoh> nonfree means you cannot redistribute it
[18:06:46 CET] <bencoh> it doesn't imply that you can't use it for commercial purposes
[18:06:55 CET] <DHE> just that you can't sell the resulting binary to others
[18:07:03 CET] <popara> So, assuming that i have one client and he will pay me
[18:07:09 CET] <bencoh> (unless part of the nvidia code / license says otherwise)
[18:07:13 CET] <popara> i will have to manually recompile the binary on him server
[18:07:14 CET] <popara> ?
[18:07:18 CET] <popara> his*
[18:07:22 CET] <bencoh> exactly, you can build on-site
[18:07:51 CET] <popara> so, i have to build an installer, that will auto compile the ffmpeg from scratch
[18:07:53 CET] <popara> each time i want to sell?
[18:08:09 CET] <bencoh> that'd work, yeah
[18:08:10 CET] <popara> and this script will run on customer's server?
[18:08:13 CET] <popara> ah i see
[18:08:17 CET] <bencoh> (I guess)
[18:08:34 CET] <Zypho> thebombzen: yeah I was thinking that, but when I saw r_frame_rate was deprecated because it's estimated, I wasn't sure if I should fall back on it
[18:09:28 CET] <popara> and if i have a generic solution that clients pay me for, but ffmpeg is provided for free along with my solution. Can't i put the ffmpeg on his server, downloaded from my server for free?
[18:10:31 CET] <popara> i wont charge him for the ffmpeg binary, which means he can have it for free, but without compilation, but i charge him for the other solution, that needs ffmpeg
[18:10:43 CET] <furq> i don't know if it's technically a gpl violation for you to send the binary to him, but it's certainly not one anyone will notice
[18:10:52 CET] <furq> but you obviously can't make it publicly available
[18:11:24 CET] <popara> ok so , i dont provide the URL to anyone to download the binary standalone, but i guess i can download it "hidden" in the solution i build
[18:11:45 CET] <furq> if you email it to him or put it behind http auth or something you're probably fine
[18:11:56 CET] <popara> i see what you mean
[18:12:01 CET] <bencoh> furq: huhu :)
[18:12:36 CET] <furq> idk whether that's the gpl lawyers' view, but if it's not publicly visible then they can't see it anyway
[18:12:58 CET] <popara> i want to be 100% ok, thats why i ask
[18:13:12 CET] <furq> well yeah that's ok in practice but idk about in theory
[18:13:25 CET] <popara> and i see some commercial solutions that provides the ffmpeg binary with nonfree, without compilation, they just download it in background i think like i said
[18:13:42 CET] <furq> well yeah a lot of people don't give a fuck about violating the gpl
[18:13:51 CET] <furq> i wouldn't look to them for inspiration if you want to remain 100% compliant
[18:14:36 CET] <furq> practically you're fine if you ensure nobody but your client can get the binary, but whether that meets the letter of the law i'm not qualified to answer
[18:14:52 CET] <popara> ye is complicated
[18:16:37 CET] <furq> also you can charge for the binary
[18:16:48 CET] <furq> you just also have to make the source available
[18:16:58 CET] <popara> from ffmpeg not my solution right?
[18:17:02 CET] <furq> yes
[18:17:04 CET] <furq> https://www.gnu.org/licenses/gpl-faq.en.html#DoesTheGPLAllowDownloadFee
[18:17:09 CET] <popara> ok yes i know that
[18:17:33 CET] <popara> so i can compile the nonfree, and sell it to specific clients that will make sure the binary wont go to public
[18:17:54 CET] <furq> well if they distribute the binary then it's their problem, not yours
[18:18:02 CET] <furq> (afaik, i'm still not a lawyer)
[18:19:32 CET] <Zypho> :q
[18:20:58 CET] <popara> :P
[18:21:08 CET] <popara> actually as you said nobody gives a shit, and gpl layers are probably gone
[18:21:13 CET] <popara> i haven't heard anyone having problems with gpl
[18:22:40 CET] <furq> people do give a shit but they're not going to give a shit about one guy sending binaries to a client
[18:22:42 CET] <jkqxz> If you sell a binary to someone you are distributing it to them, and therefore must give them access to the whole source.  If you can't (for example, because it contains non-free components you don't have or can't distribute the source to), then you can't give them the binary.
[18:23:09 CET] <furq> if you're just one guy and not $bigcorp then you're probably not going to get anything worse than a c&d
[18:23:23 CET] <furq> it's not worth anyone's time taking you to court
[18:25:25 CET] <popara> jkqxz confused now. So, i can't sell the binary after all compiled with non-free, since i can't provide the source for non-free components
[18:25:40 CET] <popara> so nobody can use the non-free for commercial use since they can't provide the source?
[18:25:49 CET] <Zypho> onprem is 2014.. license them your cloud software.
[18:26:20 CET] <jkqxz> Correct.  That does not stop a lot of people from doing it anyway.
[18:26:58 CET] <popara> So, the main question. Whoever provides QSV/NVIDIA transcoding and is based on FFmpeg (and i think they all do)
[18:27:11 CET] <popara> they are violating the rules?
[18:27:19 CET] <furq> whether it's commercial or noncommercial is irrelevant afaik
[18:27:28 CET] <Zypho> Unless they include written permission
[18:27:31 CET] <furq> distributing binaries with nonfree components is always a license violation
[18:27:32 CET] <Zypho> Which I have never looked for
[18:28:03 CET] <furq> but realistically, if you hadn't told us you were doing it, nobody would have ever known
[18:28:10 CET] <popara> ok then wtf, everyone is violating then; whoever says he has nvidia transcoding is violating
[18:28:18 CET] <furq> pretty much
[18:28:41 CET] <furq> there are plenty of binary downloads out there which are definitely violating gpl
[18:28:46 CET] <furq> and that's just stuff which is publicly available
[18:31:12 CET] <popara> Ok, in short, nonfree, means for personal use
[18:31:15 CET] <furq> but yeah "distributing" is a bit of a nebulous term
[18:31:32 CET] <furq> i imagine charging a consulting fee but providing the binary for free and building it on prem is technically not distributing
[18:31:36 CET] <furq> even though it's practically the same thing
[18:33:16 CET] <jkqxz> Both the libmfx and nvenc stuff mostly avoid the problem, with some care taken in the build.
[18:33:58 CET] <jkqxz> You can't distribute a binary with those included directly, but if no non-free components went into the build (including header files, and avoiding dynamic linking against a non-free library) and then you load at run-time the libraries as already found on the user's machine then you are ok.
[18:35:53 CET] <jkqxz> s/are ok/are probably ok (I am not a lawyer and you should consult one if you are concerned about this issue)/
[18:37:22 CET] <popara> jkqxz so in short you mean, if we provide a shared build it will be ok
[18:37:26 CET] <popara> static build is the problem
[18:37:27 CET] <faLUCE> furq, c_14: you told me to use chunked transfer encoding, for http. well, I found an example of a http server with libevent which sends a stream of chunks. But in the case of frames, is each chunk  ==  single video frame+mpegts?  Or do I have to fragment each frame?
[18:38:06 CET] <jkqxz> No, you cannot dynamically link against a non-free library, because that library has then become an input to the build process.
[18:38:08 CET] <furq> i assume each chunk would be one or more mpegts packets
[18:38:22 CET] <furq> i don't think it makes a huge difference with mpegts though
[18:38:31 CET] <klaxa> it shouldn't even matter
[18:38:35 CET] <furq> yeah
[18:38:42 CET] <klaxa> you should be able to use chunked transfer encoding with 1 byte chunks
[18:38:47 CET] <jkqxz> You may, however, be able to write a wrapper which looks at a dynamic library at run time and picks the symbols you want out of it and then calls them.
[18:38:50 CET] <furq> that's pretty wasteful though
[18:38:51 CET] <klaxa> or with 10 mb chunks
[18:38:59 CET] <klaxa> it just changes overhead and latency
[18:39:10 CET] <furq> but yeah afaik players will discard everything before the start of a new packet
[18:39:17 CET] <furq> so you ought to be able to start a chunk anywhere
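A minimal sketch of what that chunking could look like (illustrative only, not a known implementation): HTTP/1.1 chunked framing is just a hex length, CRLF, payload, CRLF, and each chunk can carry any whole number of 188-byte TS packets — alignment with video frames doesn't matter, exactly as klaxa says. The 7-packets-per-chunk figure is my assumption, borrowed from the common TS-over-UDP/RTP packing.

```python
TS_PACKET_SIZE = 188

def http_chunk(payload: bytes) -> bytes:
    """Frame one HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n" % len(payload) + payload + b"\r\n"

def chunk_ts(data: bytes, packets_per_chunk: int = 7):
    """Yield chunks carrying whole 188-byte TS packets. 7 per chunk
    (1316 bytes) keeps each chunk under a typical 1500-byte MTU."""
    step = TS_PACKET_SIZE * packets_per_chunk
    for i in range(0, len(data), step):
        yield http_chunk(data[i:i + step])
```

A terminating zero-length chunk (`b"0\r\n\r\n"`) would end the response; a live stream simply never sends it.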
[18:39:40 CET] <jkqxz> Those things already exist for both libmfx and nvenc, which is why neither of them are marked non-free.
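The wrapper jkqxz describes is `dlopen()`/`dlsym()` in the C sources; as a runnable stand-in for the same pattern, here is run-time symbol resolution via Python's ctypes, using libm's `cos()` in place of a vendor library (libm is just an example — the point is that nothing from the library is an input to the build; it is only looked up when the program runs on the user's machine):

```python
import ctypes
import ctypes.util

# Locate and load the library at run time; CDLL(None) falls back to
# symbols already present in the running process on Linux.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

# Pick the one symbol we want out of it and describe its signature.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```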
[18:40:04 CET] <faLUCE> then, when I write a mpegts packet with:    av_interleaved_write_frame(avFormatOutputContext, &encodedPkt)  ----> is it enough to use this encodedPkt as chunk?
[18:40:16 CET] <furq> probably
[18:40:26 CET] <furq> i don't think anyone in here has ever done this on account of http streaming sucks
[18:40:43 CET] <faLUCE> furq: why?
[18:40:48 CET] <faLUCE> what would you use instead?
[18:41:02 CET] <faLUCE> rtp?
[18:41:04 CET] <popara> "
[18:41:04 CET] <popara> That text is out of date. ffmpeg can be compiled with nvenc support without --enable-nonfree."
[18:41:05 CET] <popara> oh
[18:41:10 CET] <furq> rtsp or rtmp
[18:41:13 CET] <klaxa> my (admittedly somewhat broken) server uses http for streaming https://github.com/klaxa/mkvserver_mk2
[18:41:16 CET] Action: klaxa shrugs
[18:41:21 CET] <faLUCE> furq: rtsp and rtmp are not that portable
[18:41:26 CET] <furq> rtsp normally uses rtp so that works too
[18:41:42 CET] <furq> portable in what sense
[18:41:46 CET] <faLUCE> furq: players
[18:41:56 CET] <furq> if you're not worried about browser support then pretty much any player will work with rtsp/rtmp
[18:42:03 CET] <furq> i'd expect that to be more widely supported than http really
[18:42:28 CET] <faLUCE> furq: really not... the rtsp clients on the players are still not so common
[18:42:51 CET] <furq> granted i've not used any player but mpc-hc/mpv/mplayer/vlc for years
[18:42:52 CET] <faLUCE> furq: for example, on some android devices, the vlc doesn't support rtsp
[18:43:06 CET] <furq> nice
[18:43:22 CET] <faLUCE> furq: I used rtsp for years
[18:43:33 CET] <faLUCE> this is the first time I'm using http
[18:43:45 CET] <faLUCE> and I see it's much simpler
[18:43:49 CET] <furq> i have no idea if this will even work tbh
[18:44:07 CET] <furq> i've seen mpegts streams over http but idk how widely supported they are, or if this is how they work
[18:44:11 CET] <furq> this is just how i imagine they work
[18:44:11 CET] <klaxa> what? streaming over http or his project?
[18:44:24 CET] <faLUCE> klaxa: streaming over http
[18:44:25 CET] <furq> the method i suggested for his project
[18:44:35 CET] <klaxa> ah
[18:44:39 CET] <furq> streaming over http is pretty flaky in my experience, but i was using ffserver iirc
[18:44:43 CET] <furq> so it could have also been that
[18:44:44 CET] <klaxa> heh
[18:44:48 CET] <klaxa> probably that
[18:44:52 CET] <furq> although i've noticed it was flaky as a client as well
[18:45:05 CET] <faLUCE> furq: streaming http works perfectly with vlc, both on server and client side
[18:45:24 CET] <furq> these may not have been perfectly legal streams
[18:45:27 CET] <klaxa> yeah i also have not really any problems with http streaming
[18:45:38 CET] <furq> i'm generally happier with an rtsp or rtmp stream though
[18:46:07 CET] <faLUCE> instead, with rtsp, you have to hope that the client has a good rtsp implementation. Which is not always true
[18:46:08 CET] <furq> if i'm serving something i generally want to be using rtmp and hls
[18:46:29 CET] <furq> if you're using flv-compatible codecs and you don't mind latency then that's the route i'd go down
[18:46:41 CET] <faLUCE> furq: I don't like flv
[18:46:44 CET] <furq> shrug
[18:46:47 CET] <kerio> nobody likes flv
[18:46:52 CET] <furq> it works fine with h264 and aac and that's what i want to use
[18:47:05 CET] <furq> i'd rather be using mpegts but i can live with flv
[18:47:21 CET] <kerio> would you really rather use mpegts
[18:47:25 CET] <kerio> i don't think anyone would want that
[18:47:27 CET] <faLUCE> anyway, the problem is not the muxer, but the protocol
[18:47:29 CET] <furq> what streaming container would you rather be using
[18:47:38 CET] <kerio> nut
[18:47:40 CET] <kerio> :^)
[18:47:40 CET] <klaxa> *sigh* i think i really have to fix my server
[18:47:54 CET] <furq> nut is cool until you remember that only ffplay and mpv will play it
[18:47:58 CET] <faLUCE> http streaming is MUUUCH better than rtsp
[18:48:00 CET] <klaxa> according to the docs, matroska was even designed for http live streaming
[18:48:16 CET] <kerio> furq: to be fair you only need the pipe between source and ingest
[18:48:21 CET] <klaxa> and it works pretty well in my case
[18:48:30 CET] <kerio> and let's face it, the ingest is going to be ffmpeg anyway
[18:48:42 CET] <faLUCE> rtsp was designed to be too complex
[18:48:48 CET] <furq> surely the ingest would be your streaming server
[18:48:52 CET] <furq> which is very much not going to be ffmpeg
[18:48:53 CET] <faLUCE> and clients don't care about those extra infos
[18:49:02 CET] <klaxa> it's just nobody has implemented it so far (or i just haven't found it after years of half-hearted searches)
[18:49:08 CET] <kerio> furq: youtube uses ffmpeg, twitch uses ffmpeg iirc
[18:49:19 CET] <kerio> ffmpeg as in libav at least
[18:49:21 CET] <furq> not as a server
[18:49:27 CET] <kerio> no? :o
[18:49:34 CET] <kerio> nginx-rtmp uses ffmpeg
[18:49:37 CET] <furq> well not as the actual bit which receives the streams
[18:49:42 CET] <furq> and no it doesn't
[18:49:44 CET] <kerio> D:
[18:49:50 CET] <kerio> does it have its own demuxer
[18:49:51 CET] <furq> it can with exec, but it doesn't use it for remuxing
[18:49:54 CET] <furq> or demuxing
[18:49:59 CET] <kerio> madness D:
[18:50:04 CET] <furq> it has its own rtmp/hls/dash implementations
[18:50:14 CET] <furq> granted the dash impl is broken according to people in here
[18:50:19 CET] <furq> but i don't use that anyway
[18:50:34 CET] <faLUCE> rtsp has less overhead than http, but today it's pretty pointless
[18:50:59 CET] <kerio> all of this is kinda meaningful when OBS only supports rtmp anyway
[18:51:01 CET] <kerio> *meaningless
[18:51:15 CET] <furq> my experience with http streaming is that it doesn't recover well from dropped packets
[18:51:20 CET] <furq> whereas rtsp and rtmp fare a lot better
[18:51:26 CET] <furq> that's purely anecdotal though
[18:51:42 CET] <furq> the dedicated streaming protocols generally seem a lot more robust to me
[18:51:56 CET] <furq> but i've not really dug deep into why that is
[18:52:08 CET] <faLUCE> furq: you are wrong
[18:52:20 CET] <faLUCE> furq: rtsp is good for complex things over udp
[18:52:42 CET] <faLUCE> but in cases of firewalls (99,999999% of cases) it's really bad
[18:53:11 CET] <furq> well yeah that's why hls and its ilk are the new way
[18:53:27 CET] <kerio> why did we end up with https/ip ._.
[18:53:28 CET] <rkantos> Can anyone else chime in on dshow capture & rtmp streaming with real-time buffer too full?
[18:53:31 CET] <furq> that and the fact that nobody wants to have to try to get browser makers to implement any new features
[18:54:03 CET] <faLUCE> furq: implementing a rtsp client is really not trivial
[18:54:10 CET] <faLUCE> and there are only libs in C++ for that
[18:54:19 CET] <furq> there are surely C libs or else ffmpeg wouldn't have one
[18:54:35 CET] <faLUCE> furq: ffmpeg rtsp  implementation is bad
[18:54:50 CET] <faLUCE> the only good implementation is live555
[18:55:10 CET] <Zypho> HLS is the new way yes, but DASH is the "newer" way
[18:55:23 CET] <faLUCE> then, all these things considered, the only good method is HTTP with mpegts
[18:56:54 CET] <klaxa> am i the only one who shows love to matroska over http? :(
[18:57:04 CET] <faLUCE> klaxa: I will try matroska too
[18:57:05 CET] <kerio> yes
[18:57:19 CET] <faLUCE> but it's less common than mpegts
[18:58:53 CET] <faLUCE> I only suspect that I have to write a header packet, before sending all the other mpegts packets
[18:59:28 CET] <faLUCE> something that informs which encoder is used in the stream
[19:00:05 CET] <faLUCE> (which replaces the sdp file)
[19:01:59 CET] <kerio> doesn't mpegts already have that info?
[19:02:03 CET] <kerio> :\
[19:03:02 CET] <faLUCE> kerio: if so, thanks, I was wondering exactly that
[19:03:12 CET] <kerio> i mean
[19:03:23 CET] <kerio> you can ffprobe a .ts and get info
[19:03:31 CET] <kerio> i don't know if it's just ffmpeg being magic tho
[19:04:07 CET] <infinito> Im trying to use an m60 2x-gpu card with ffmpeg nvenc, but i cant select the gpu. i can run -gpu list and see both gpus, but ffmpeg always uses the same gpu.
[19:04:53 CET] <Diag> infinito: i dont think you can use two gpus for one encoding job
[19:05:18 CET] <Diag> or are you saying you want to pick a different gpu
[19:05:21 CET] <faLUCE> https://en.wikipedia.org/wiki/MPEG_transport_stream <--- I don't see any "codec" field
[19:06:55 CET] <infinito> Diag im running different ffmpeg instances; gpu1 is full and doesn't accept more processes in parallel. because of this, i want to use the other gpu to run more parallel jobs
[19:07:35 CET] <infinito> i want to pick 1 gpu for some task, and another for more tasks
[19:07:47 CET] <faLUCE> kerio: but wait... I streamed h264 over udp with mpegts (with ffmpeg). Vlc recognized the codec while playing the stream, so there's no need for a header
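kerio is right that mpegts is self-describing, which is why vlc needed no out-of-band SDP: PID 0 carries the PAT, the PAT points at the PMT, and the PMT's stream_type field identifies each elementary stream (e.g. 0x1b for H.264). A sketch of pulling the 13-bit PID out of a packet header (my own helper, just to show where the routing information lives):

```python
TS_SYNC = 0x47  # every 188-byte TS packet starts with this sync byte

def ts_pid(packet: bytes) -> int:
    """PID of a 188-byte TS packet: 13 bits straddling bytes 1-2."""
    if len(packet) != 188 or packet[0] != TS_SYNC:
        raise ValueError("not a TS packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]
```

A demuxer watches for PID 0 (the PAT), learns the PMT's PID from it, and from the PMT learns which PIDs carry which codecs — no external description needed.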
[19:08:32 CET] <Diag> infinito: is there not a flag to specify which gpu?
[19:08:54 CET] <infinito> yes, -gpu with -c:v h264_nvenc but it has no effect
[19:09:16 CET] <infinito> setting -gpu 0 or 1 is the same
[19:09:28 CET] <Diag> infinito: does the card show as two seperate cards?
[19:09:54 CET] <infinito> if i do a -gpu list shows me two gpus to select
[19:10:16 CET] <infinito>  GPU #0 - < Tesla M60 > has Compute SM 5.2 and  GPU #1 - < Tesla M60 > has Compute SM 5.2
[19:11:23 CET] <Diag> huh
[19:11:29 CET] <Diag> likely a driver issue knowing nvidia lol
[19:12:14 CET] <infinito> i think so, 2xgpu are in the same internal pcie switch
[19:14:34 CET] <Diag> is #nvidia a thing
[19:14:49 CET] <Diag> lol it is, ask them?
[19:15:05 CET] <furq> if it is then there won't be anyone authoritative in there
[19:15:30 CET] <furq> trying to find useful official information on nvenc is like trying to find hens' teeth
[19:16:03 CET] <Diag> coughihatenvidiacough
[19:16:12 CET] <furq> shame your alternative is amd really
[19:16:19 CET] <Diag> i love amds gpus
[19:16:26 CET] <Diag> im STILL running my 7970s
[19:16:30 CET] <Diag> from 2012
[19:16:39 CET] <Diag> and they match a 1080s performance
[19:16:48 CET] <furq> it makes you long for the good old days of 3dfx
[19:16:57 CET] <Diag> eh
[19:17:04 CET] <Diag> my first real card was an ati rage 128
[19:17:14 CET] <Diag> my brother started out with a voodoo card
[19:17:22 CET] <infinito> im using azure vm, i think i can just use one gpu for vm, but is more convenient get them into one vm
[19:17:40 CET] <infinito> voodoo cards rules
[19:17:49 CET] <Diag> amd catches way too much shit for "driver issues"
[19:17:51 CET] <infinito> very old stuff
[19:18:08 CET] <furq> shrug
[19:18:10 CET] <furq> they're both pretty shit
[19:18:13 CET] <Diag> eh
[19:18:16 CET] <furq> as are intel
[19:18:16 CET] <Diag> Ive tried both
[19:18:25 CET] <infinito> nvidia quadro works very good, 15 ffmpeg instances in parallell
[19:19:35 CET] <Diag> Ive gone rage128->9550->9800gt->hd3850->gtx550ti->2xgtx550tis->hd6970->hd7970ghz->2xhd7970ghz
[19:19:55 CET] <Diag> and im never buying nvidia again
[19:19:56 CET] <furq> do you not get horrible stuttering in xfire
[19:20:00 CET] <Diag> nope
[19:20:02 CET] <furq> it's been a while since i had dual gpus but it sucked
[19:20:04 CET] <Diag> it works wonderfully
[19:20:09 CET] <Diag> yeah it used to be shit
[19:20:15 CET] <Diag> like circa 2009
[19:20:25 CET] <furq> yeah i had a pair of gtx260s and it was awful
[19:20:29 CET] <Diag> lol
[19:20:35 CET] <Diag> SLI has always sucked more than crossfire
[19:20:52 CET] <Diag> best case was like 85% eff with SLI, was always low to mid 90s with crossfire
[19:20:55 CET] <furq> i like that they called it SLI
[19:21:05 CET] <Diag> it wasnt even theirs though
[19:21:06 CET] <Diag> lol
[19:21:07 CET] <furq> as if it's still scan-line interleave
[19:21:13 CET] <Diag> they took it from 3dfx right
[19:21:16 CET] <furq> yeah
[19:21:19 CET] <Diag> yeah
[19:21:22 CET] <Diag> thats what nvidia does
[19:21:25 CET] <Diag> they buy awesome tech
[19:21:30 CET] <Diag> make it proprietary
[19:21:32 CET] <Diag> and fuck it to death
[19:21:35 CET] <furq> original sli had each card draw alternate scanlines
[19:21:36 CET] <furq> hence the name
[19:21:39 CET] <Diag> sure
[19:21:50 CET] <furq> nvidia sli is just some backronym so they could use an extant acronym
[19:22:00 CET] <furq> i said acronym a lot just there
[19:22:01 CET] <Diag> imagine if physx hadnt gone proprietary
[19:22:03 CET] <kepstin> it currently stands for 'scalable link interface', yeah
[19:22:11 CET] <Diag> wed all have real water physics in games by now
[19:22:25 CET] <kepstin> most games use it in alternate frame rendering mode
[19:22:29 CET] <Diag> but nooooo
[19:22:34 CET] <Diag> nvidia had to make it for them only
[19:22:40 CET] <Diag> and fuck it into the ground with 2 or 3 games
[19:22:52 CET] <Diag> best usage of physx was when it was still ageia or whatever it was
[19:22:56 CET] <Diag> mafia 2 was excellent
[19:23:07 CET] <Diag> smoke, cloth, broken glass, wood chips
[19:23:10 CET] <Diag> tire deformation
[19:23:22 CET] <Diag> nvidia killed something amazing all for a quick buck
[19:23:35 CET] <furq> shrug
[19:23:39 CET] <Diag> thats what they do
[19:23:41 CET] <furq> the only 3d game i regularly play is quakeworld
[19:23:44 CET] <Diag> Look at gsync
[19:23:47 CET] <Diag> Expensive
[19:23:49 CET] <furq> that wouldn't really benefit from cloth physics
[19:23:51 CET] <Diag> requires extra hardware
[19:24:02 CET] <Diag> good guy AMD makes a standard anyone can use
[19:24:16 CET] <Diag> takes little extra hardware
[19:24:22 CET] <furq> i have no doubt they'd do the same shit if they weren't playing catchup
[19:24:32 CET] <Diag> amd has always had better morals
[19:24:39 CET] <Diag> look at HBM
[19:24:50 CET] <Diag> amd was the main reason that became a thing
[19:24:55 CET] <Diag> then nvidia picks it up
[19:24:58 CET] <Diag> *look what we made*
[19:25:07 CET] <Diag> What about vulkan
[19:25:12 CET] <Diag> Amd comes out with mantle
[19:25:23 CET] <Diag> Has actually pretty good perf increases
[19:25:30 CET] <Diag> nearly 30% on low end cpus
[19:25:37 CET] <Diag> microsoft starts announcing dx12
[19:25:44 CET] <Diag> Nvidia calls mantle shit
[19:25:56 CET] <Diag> Amd hands mantle off to third party that renames it vulkan
[19:26:04 CET] <Diag> all the sudden nvidias sucking vulkans dick
[19:26:09 CET] <Diag> and its literally the SAME THING
[19:26:17 CET] <furq> what
[19:26:19 CET] <Diag> yepo
[19:26:25 CET] <Diag> you never heard of mantle/vulkan?
[19:26:41 CET] <furq> AMD donated the Mantle API to the Khronos group, which developed it into the Vulkan API.
[19:26:44 CET] <furq> oh
[19:26:47 CET] <furq> that's news to me
[19:26:51 CET] <Diag> http://www.dsogaming.com/news/nvidia-finally-officially-speaks-about-amds-mantle-will-not-support-it-no-real-benefit-using-it/
[19:26:53 CET] <infinito> nvidia are going for deep learning
[19:26:54 CET] <furq> "third party" is a hell of a way to refer to khronos though
[19:27:05 CET] <Diag> https://developer.nvidia.com/Vulkan
[19:27:06 CET] <Diag> oh look
[19:27:08 CET] <Diag> look at that
[19:27:12 CET] <Diag> when its not "AMD" its not shit
[19:27:17 CET] <Diag> fucking hypocrites
[19:27:21 CET] <Diag> nvidia can suck a damn dick
[19:27:26 CET] <infinito> lol
[19:27:27 CET] <Diag> never getting another dollar from me
[19:27:32 CET] <Diag> they ride off of everyone elses backs
[19:27:44 CET] <infinito> i think is a war
[19:28:04 CET] <infinito> americans vs germans
[19:28:05 CET] <Diag> nvidia sucks man
[19:28:14 CET] <Diag> horrible people
[19:28:18 CET] <Diag> shady business tactics
[19:28:23 CET] <furq> my philosophy is that i will buy whatever is good
[19:28:26 CET] <Diag> willing to screw over people just to take them out
[19:28:31 CET] <infinito> intel nvidia vs amd ati
[19:28:38 CET] <furq> pretty controversial i know
[19:28:42 CET] <Diag> furq: right now the best bang for your buck is the rx480
[19:28:59 CET] <Diag> $229 will get you an overclocked version with 8gb of vram that outperforms the 1060
[19:29:06 CET] <Diag> which is the same cost
[19:29:10 CET] <furq> https://www.phoronix.com/scan.php?page=news_item&px=Quake-1-On-Vulkan
[19:29:11 CET] <furq> oh shit
[19:29:15 CET] <Diag> and we all know amd cards get better with age
[19:29:30 CET] <infinito> yes you can overclock
[19:29:39 CET] <infinito> but nvidia goes for servers solutions
[19:29:40 CET] <furq> oh nvm it's netquake so who cares
[19:29:44 CET] <Diag> lol
[19:29:50 CET] <furq> fucking netquake
[19:29:58 CET] <Diag> nvidia is retarded though
[19:30:05 CET] <infinito> like a lot of money for the same, but never die
[19:30:12 CET] <Diag> never die?
[19:30:15 CET] <Diag> never die?!
[19:30:16 CET] <Diag> HA
[19:30:24 CET] <Diag> Nvidia hardly supports their 700 series cards anymore
[19:30:26 CET] <Diag> theyre on the back burner
[19:30:36 CET] <Diag> meanwhile my 7970 is STILL on the leading edge of updates
[19:30:39 CET] <infinito> the server solutions works very well
[19:30:44 CET] <Diag> and its from the 600 series of their cards
[19:31:05 CET] <infinito> consumer cards are other world
[19:31:17 CET] <Diag> what about the nvidia shield
[19:31:21 CET] <Diag> i bought one of the tablets
[19:31:27 CET] <Diag> guess what, bad battery, was recalled
[19:31:35 CET] <Diag> They sent me a new one and a box to recycle the old one with
[19:31:38 CET] <Diag> I kept both
[19:32:02 CET] <infinito> im from argentina, here we only have a lot of chinese crap
[19:32:47 CET] <Diag> amd has always been on the leading edge of technology, nvidia has always leaned towards higher theoretical speed
[19:33:07 CET] <Diag> i dont think nvidia has ever made anything important to the computer world
[19:33:11 CET] <infinito> i dont kwnow ati datacenters
[19:33:16 CET] <infinito> or amd servers
[19:33:21 CET] <infinito> allways intel, nvidia
[19:33:35 CET] <infinito> for consumers i think is better whatever you like
[19:33:44 CET] <Diag> Hey
[19:33:48 CET] <Diag> i agree intel processors are better
[19:34:11 CET] <Diag> and i know nvidia likes to slap 20 cards in a box and call it a computing machine
[19:34:26 CET] <Diag> amd is doing better things to progress everything as a whole
[19:34:37 CET] <Diag> cough cough, such as pushing opencl while nvidia pushed cuda
[19:35:00 CET] <Diag> oh look, opencls a standard, cuda isnt, and like 5 programs use it
[19:35:16 CET] <infinito> yes, all adobe suit lol
[19:35:31 CET] <infinito> but works very well
[19:35:43 CET] <infinito> and cuvid ofcourse
[19:35:44 CET] <Diag> i meant 5 programs use cuda
[19:36:01 CET] <infinito> after effects, premiere
[19:36:16 CET] <Diag> photoshop is opencl accelerated
[19:36:18 CET] <infinito> works like a charm whit cuda
[19:36:23 CET] <infinito> with
[19:37:33 CET] <Diag> http://puu.sh/touK5/a92896fe3c.jpg
[19:39:36 CET] <infinito> lol
[19:40:40 CET] <Diag> lets not forget AMD has EA on their side
[19:40:45 CET] <Diag> woo woo battlefield
[19:45:26 CET] <infinito> you know how many parallel instances you can run on AMD?
[19:45:35 CET] <Diag> well i have 2 cards
[19:45:42 CET] <Diag> not sure how much opencl shit can run
[19:47:22 CET] <kerio> how much bitrate do i need for perceptual lossless opus again?
[19:47:27 CET] <kerio> 64kbps per channel?
[19:47:40 CET] <kerio> *perceptually
[19:51:52 CET] <kepstin> I dunno if there's been any thorough testing on that? But around 128kbit for a stereo pair seems sufficient. (You'd need more than half that for a mono channel, I think)
[19:52:03 CET] <kepstin> more per-channel for mono.
[19:52:37 CET] <kepstin> opus could really use a "quality" setting that's agnostic to the number of input channels :/
[19:53:21 CET] <Diag> kepstin: are you on crack? more bitrate for a mono channel than stereo?
[19:54:06 CET] <kepstin> a stereo pair isn't really "64kbit per channel', it's "128kbit shared between two channels"
[19:54:23 CET] <kepstin> it can make use of correlations between the two channels to reduce the bits needed
[19:54:39 CET] <kepstin> so a single mono channel would need more bits than 1/2 of a stereo pair.
[19:54:45 CET] <Diag> sounds retarded
[19:55:48 CET] <kepstin> I don't know exactly how much, but you're probably talking in the range of 72kbit mono equivalent to 128kbit stereo, depending on audio content and how efficient the codec is.
[19:56:41 CET] <kepstin> might not even be that much more. I don't think I've seen any tests on it
[19:56:54 CET] <Diag> 72kbit audio being acceptable
[19:56:58 CET] <Diag> what is the world coming to
[19:57:06 CET] <kerio> the xiph wiki suggests 96-128 for stereo, 128-256 for 5.1 surround
[19:57:11 CET] <kerio> for music storage
[19:58:09 CET] <kerio> Diag: 128kbps audio being perceptually lossless is pretty damn new tho
[19:58:15 CET] <kepstin> Diag: opus is a *really good* audio codec. 128kbit stereo on it is overkill for most people :/
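[Editor's note: kepstin's point that a stereo pair "shares" its bits rests on inter-channel correlation. Joint-stereo coding transforms left/right into mid (L+R) and side (L−R); for typical correlated material the side channel carries far less energy, so it needs far fewer bits. A minimal numpy sketch with a synthetic signal — this illustrates the mid/side idea only, not Opus's actual internals:]

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(48000) / 48000.0

# Correlated stereo: the same tone in both channels plus a little
# independent noise per channel (a crude stand-in for real music).
tone = np.sin(2 * np.pi * 440 * t)
left = tone + 0.05 * rng.standard_normal(t.size)
right = tone + 0.05 * rng.standard_normal(t.size)

# Mid/side transform used by joint-stereo coding.
mid = (left + right) / 2
side = (left - right) / 2

# The side channel holds only the uncorrelated residue, so it needs
# far fewer bits than either raw channel would alone.
print(np.var(side) / np.var(mid))  # a tiny ratio for correlated input
```

A mono channel gets no such transform to exploit, which is why it can need more than half of a stereo pair's bitrate.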
[19:58:36 CET] <Diag> https://i.ytimg.com/vi/ruDoaCxaYKM/maxresdefault.jpg
[19:59:00 CET] Action: kepstin uses 144kbit for his music just because he has a little storage to waste and wants to be a little extra sure
[19:59:21 CET] <Diag> yuuuuuuuuuuuuuuuuuck
[19:59:49 CET] <Diag> wave/flac or at least a 320kbit mp3
[19:59:56 CET] <Diag> and you cant tell me
[20:00:04 CET] <kepstin> opus at 144K probably sounds *better* than a 320k mp3 :/
[20:00:06 CET] <Diag> that a 128kbit opus shit sounds the same as 320kbit mps
[20:00:08 CET] <kepstin> mp3 really sucks
[20:00:19 CET] <Diag> what extension is opus
[20:00:22 CET] <kepstin> opus
[20:00:27 CET] <Diag> tf
[20:00:31 CET] <Diag> what about FAT filesystems
[20:00:40 CET] <Diag> goddamn millennials
[20:01:13 CET] <kepstin> fat32/vfat has supported filenames longer than 8.3 for 20+ years now?
[20:01:20 CET] <Diag> did i say fat32?
[20:01:31 CET] <kerio> Diag: yes, opus at 128kbps sounds way better than 320kbps mp3
[20:01:41 CET] <Diag> im not buying it, what plays it
[20:01:45 CET] <kerio> ...ffmpeg?
[20:01:51 CET] Action: kerio points at the channel name
[20:01:54 CET] <Diag> will vlc open the shit?
[20:01:57 CET] <kepstin> rockbox has great opus support, I used it on my ipod classic :)
[20:02:00 CET] <kerio> i assume so, yes
[20:02:07 CET] <Diag> i hate the rockbox android port
[20:02:35 CET] <Diag> someone said it used to work amazingly though
[20:02:56 CET] <furq> why would you use rockbox on android
[20:03:01 CET] <Diag> kepstin: send me something good thats a 128kbit opus
[20:03:02 CET] <kerio> but kepstin
[20:03:07 CET] <kerio> you have an ipod classic
[20:03:08 CET] <Diag> furq: some chinese kid told me to try it
[20:03:15 CET] <kerio> just use alac
[20:03:17 CET] <kepstin> kerio: not any more, i lost it on a bus :(
[20:03:29 CET] <kerio> :(
[20:03:31 CET] <furq> i've still got a sansa fuze somewhere
[20:03:36 CET] <kepstin> kerio: I have 20k tracks, it wouldn't all fit in lossless
[20:03:53 CET] <kepstin> kerio: I actually had aftermarket modded it with 256gb flash storage, was pretty sweet.
[20:03:53 CET] <Diag> SOMEONE
[20:03:57 CET] <Diag> send a fucking opus track
[20:04:00 CET] <Diag> thats 128kbit
[20:04:06 CET] <kerio> DIAG CHILL MAN
[20:04:08 CET] <Diag> lol
[20:04:09 CET] <furq> do you not have any music
[20:04:13 CET] <kerio> also it's probably way faster to convert your own yea
[20:04:13 CET] <furq> just encode something yourself
[20:04:15 CET] <Diag> not in opus?
[20:04:26 CET] <furq> you have ffmpeg
[20:04:27 CET] <Diag> i dont even have ffmpeg
[20:04:28 CET] <kepstin> you have ffmpeg+libopus and some lossless files? make your own
[20:04:29 CET] <Diag> lol
[20:04:32 CET] <furq> the fuck are you doing in here then
[20:04:33 CET] <kerio> ffmpeg -i path/to/music -b:a 128k path/to/music.opus
[20:04:39 CET] <kerio> diag pls
[20:04:42 CET] <Diag> furq: they sent me over from #x265
[20:04:44 CET] <Diag> 4*
[20:04:49 CET] <Diag> no but sersly
[20:04:54 CET] <Diag> can i do it on mah vps
[20:04:54 CET] <kerio> aite if this is just a RIAA trick i swear
[20:04:58 CET] <Diag> riaa?
[20:06:09 CET] <kerio> Diag: i hope you're listening with a good pair of headphones at least
[20:06:15 CET] <kepstin> I mean, I could find some CC licensed music I could send you if I really needed to, but you'd also need the lossless copy and maybe some other formats to do a double-blind test.
[20:06:19 CET] <kerio> otherwise it's all meaningless
[20:06:28 CET] <kerio> oh i can send you the lossless too
[20:06:43 CET] <kerio> in that fruity lossless format
[20:07:15 CET] <Diag> kerio: yeah i have some monitor speakers
[20:07:20 CET] <kerio> i guess that this is a pretty good test case for opus tho, the source is 48kHz/24bps
[20:07:35 CET] <kerio> so there's no resampling
[20:07:51 CET] <kerio> and maybe it can use the extra bits idk
[20:08:13 CET] <Diag> kerio: you probably have 44/16 headphones lol
[20:08:13 CET] <kepstin> internally, opus is floating point iirc
[20:08:22 CET] <kerio> it is
[20:08:23 CET] <Diag> float is retarded for audio
[20:08:29 CET] <kerio> you keep saying that
[20:08:35 CET] <Diag> ok
[20:08:41 CET] <kerio> but clearly it's fucking working
[20:08:53 CET] <Diag> tell me when you can hear the difference between 65535 levels of amplitude and however many million
[20:08:56 CET] <kerio> Diag: but headphones are analog :<
[20:08:58 CET] <Diag> that become less accurate at higher values
[20:08:59 CET] <furq> you should probably tell mp3, aac, vorbis and opus about that opinion
[20:09:08 CET] <Diag> kerio: yeah through a shitty realtek D/A
[20:09:17 CET] <kerio> no way man i have a macbook pro
[20:09:20 CET] <kerio> :^)
[20:09:23 CET] <furq> the dac will be 48khz anyway
[20:09:41 CET] <furq> i'm not commenting on bit depth because i don't have or want any 24bit
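[Editor's note: on the 16-bit vs float argument — 16-bit quantization gives roughly 6.02 dB of signal-to-noise ratio per bit, about 98 dB for a full-scale sine, which is below the noise floor of any consumer playback chain. A quick numpy check of that textbook figure:]

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)   # full-scale test tone, 1 second

# Quantize to 16-bit integer levels and back, CD-style.
q = np.round(x * 32767) / 32767
noise = x - q

snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(snr_db)  # ~98 dB, matching 6.02*16 + 1.76
```

Float is used *inside* codecs like Opus for headroom during processing, not because 16-bit output is audibly limiting.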
[20:09:56 CET] <kerio> "è" is 24 bit
[20:10:16 CET] <Diag> kerio: what the fuck is this
[20:10:28 CET] <kerio> Diag: best song 2016
[20:10:47 CET] <furq> è is 16-bit
[20:10:52 CET] <furq> 0xC3A8
[20:11:00 CET] <kerio> oh :<
[20:11:14 CET] <furq> i want to know what you sent him now
[20:11:20 CET] <furq> i was going to send him some bullshit but i couldn't be bothered
[20:11:46 CET] <kerio> fuck you then have a 25 byte emoji 👩👩👧👦
[20:11:55 CET] <Diag> kerio: this sounds like ass
[20:12:01 CET] <kerio> fite me irl
[20:12:16 CET] <kerio> furq: i sent him radiohead - burn the witch
[20:12:22 CET] <furq> oh dear
[20:12:29 CET] <Diag> do you have any good music?
[20:12:33 CET] <furq> i was going to send him some richard blackwood but yours is much worse
[20:12:42 CET] <kerio> i hate you all
[20:12:55 CET] <Diag> maybe some elton john
[20:13:01 CET] <furq> it's you
[20:13:03 CET] <Diag> or billy joel
[20:13:03 CET] <furq> you're the rocket man
[20:13:08 CET] <kerio> furq: fun fact, burn the witch requires 1.8mbps of alac
[20:13:15 CET] <Diag> maybe even some billy mays
[20:13:16 CET] <furq> "requires"
[20:13:17 CET] <kerio> (48/24)
[20:13:38 CET] <furq> i think it sounds better with 0mbps of -f null
[20:14:20 CET] <kepstin> if anyone ever needs some particularly awful music for some reason, feel free to grab https://www.kepstin.ca/dump/Surprisingly%20O.K.%20Variations%20on%20This%20Shit/
[20:14:40 CET] <kepstin> (also at https://musicbrainzsoundteam.bandcamp.com/releases )
[20:14:40 CET] <kerio> furq: why are you jelly that radiohead is the best band
[20:14:40 CET] <furq> "musicbrainz sound team"
[20:14:43 CET] <Diag> kerio: this is identical to aplay dev/urandom/
[20:14:55 CET] <furq> they should spend more time making their database less empty
[20:14:55 CET] <kerio> Diag: ye but maybe focus on the sound quality
[20:15:08 CET] <Diag> kerio: dude how can i do that
[20:15:15 CET] <Diag> i have monitor speakers
[20:15:16 CET] <kepstin> furq: I spend a lot of time on that, but I mostly add japanese pop, so... :)
[20:15:23 CET] <kerio> hold on i'm downloading the filthy frank album
[20:15:29 CET] <furq> yeah i decided to use it as my authoritative tagging source
[20:15:30 CET] <kerio> so i can properly tell you to kill yourself
[20:15:37 CET] <furq> which may have been a mistake
[20:15:49 CET] <furq> i downloaded a bunch of netlabel stuff
[20:16:02 CET] <furq> at least one label i found has >100 releases on discogs and didn't exist on mb until i added it
[20:16:07 CET] <kepstin> well, it's entirely crowdsourced data, so we rely on people using the data to add missing stuff.
[20:16:13 CET] <kepstin> and we're rather weak in netlabels, yeah
[20:16:19 CET] <furq> fortunately there's a discogs import script so it's not too painful
[20:16:29 CET] <Audiofile> Hey guys, I have this record LP disk https://www.youtube.com/watch?v=U9iTsyrnjSA&t  does anyone know what the sample rate and bit depth is of an LP disk?
[20:16:37 CET] <furq> i picked mb because it has picard and beets, which are both very good
[20:16:40 CET] <kerio> aw yea it's available in flac
[20:16:42 CET] <furq> discogs doesn't have anything comparable
[20:17:00 CET] <furq> there's a tagger for fb2k but it's a nightmare to use
[20:17:21 CET] <furq> Audiofile: neither of those things make sense for analogue
[20:17:29 CET] <Audiofile> I'm trying to convert my LP library to digital
[20:17:56 CET] <furq> rip it at whatever the highest numbers your equipment supports
[20:18:03 CET] <kepstin> Audiofile: if you're gonna put it on a cd, use 16bit, 44.1kHz. Otherwise use 48kHz if you prefer.
[20:18:06 CET] <furq> i doubt you'll notice any difference above 24/48 though
[20:18:10 CET] <kerio> i bet it was available on what.cd
[20:18:12 CET] <kerio> before they killed it
[20:18:14 CET] <kerio> ;-;
[20:18:21 CET] <furq> rip
[20:18:34 CET] <furq> maybe if they'd filled my requests they wouldn't have died
[20:18:37 CET] <kerio> :D
[20:18:45 CET] <kerio> Audiofile: that album is available in cd apparently
[20:18:47 CET] <Audiofile> The guy I bought the record from said it was "digitally remastered"
[20:18:50 CET] <kerio> so just... acquire that?
[20:19:04 CET] <kerio> like let's not pretend vinyl is remotely good in terms of quality
[20:19:06 CET] <Diag> kerio: you have the new track yet?
[20:19:12 CET] <kerio> CHILL
[20:19:19 CET] <kepstin> Audiofile: then you should go back and find the digital copy that the lp was mastered from and buy that instead ;)
[20:19:39 CET] <Audiofile> I want to get the vinyl on my PC because I like the clicks and pops, they make me feel young again
[20:19:51 CET] <kerio> Audiofile: you can add those in post!
[20:20:08 CET] <Audiofile> I suppose
[20:20:21 CET] <kerio> https://mynoise.net/NoiseMachines/dustyScratchedVinylNoiseGenerator.php
[20:20:28 CET] <Diag> >php
[20:20:33 CET] <Diag> >noise generator
[20:21:13 CET] <kepstin> hah, https://www.izotope.com/en/products/create-and-design/vinyl.html someone even sells that as a product
[20:21:13 CET] <Audiofile> Woah, I just navigated right back to my childhood
[20:21:33 CET] <kerio> kepstin: but it says get it free :<
[20:21:47 CET] <kerio> Diag: dude this torrent has like no seeds
[20:21:54 CET] <Audiofile> I can't believe there is a tape hiss plugin for protools
[20:22:01 CET] <kerio> are you sure you dont want some more radiohead instead
[20:22:16 CET] <Diag> I got versioned lol
[20:22:36 CET] <kepstin> kerio: the price is your personal information (an email address)
[20:22:47 CET] <kerio> 10minutemail
[20:22:59 CET] <Diag> kerio: does ffmpeg support a streaming record?
[20:23:17 CET] <Diag> I dont like the way AMD relive audio sounds
[20:24:39 CET] <kerio> Diag: ok what about a david bowie
[20:25:59 CET] <Diag> kerio: that works
[20:30:58 CET] <furq> here you go
[20:31:25 CET] <furq> i think this will provide an excellent listening experience on all levels
[20:31:36 CET] <kerio> should i buy pink guy's album
[20:31:43 CET] <Diag> furq: sorry file transfer is fucky because its open on my other computer
[20:32:02 CET] <DocMAX> is there a solution to stram with ffmpeg to airplay or dlna devices?
[20:32:07 CET] <DocMAX> stream
[20:32:54 CET] <Audiofile> So I set up my camcorder pointed at my turntable and now I'm trying to use noise reduction but my PC is taking 20 minutes to finish a 3 minute track
[20:33:29 CET] <furq> fidelity
[20:33:35 CET] <Audiofile> how fast of a PC do I need? my nephew built me one that was supposed to be fast
[20:34:06 CET] <Audiofile> I  found a library called libffmpeg-nr but the output seems to be JUST the noise
[20:34:26 CET] <kerio> >camcorder pointed at my turntable
[20:34:28 CET] <kerio> u wot m8
[20:35:21 CET] <Audiofile> I want to share the video with the music with my friends at the local VFW
[21:27:03 CET] <thebombzen> out of curiosity
[21:27:08 CET] <thebombzen> why does opus not support 44.1
[21:27:31 CET] <Diag> it doesnt?
[21:27:33 CET] <thebombzen> to me it seems really silly not to support 44.1 kHz but they clearly thought of that
[21:27:35 CET] <thebombzen> and no it doesn't
[21:27:37 CET] <Diag> theyre retarded
[21:27:42 CET] <thebombzen> well no they're not.
[21:27:51 CET] <Diag> gee
[21:27:56 CET] <Diag> lets let everything get resampled
[21:27:59 CET] <Diag> yes, they are
[21:28:10 CET] <thebombzen> That's the thing. The people who made opus are the experts. Clearly they thought of the issues of resampling cd audio.
[21:28:22 CET] <Diag> dunno
[21:28:31 CET] <Diag> resampling suuuuuuuuuuuuuucks
[21:28:35 CET] <thebombzen> It takes about 4 seconds to come up with that thought. But they decided not to support 44.1 kHz anyway, which means they know something about audio coding we don't.
[21:28:38 CET] <furq> it's going to be resampled by your dac anyway
[21:28:51 CET] <Diag> furq: some dacs dont
[21:29:07 CET] <iH2O> I use this to convert a color video to black and white. it's a bit slow, like 3 seconds to process 1M: ffmpeg -i input.mp4 -vf hue=0:0 output.mp4
[21:29:08 CET] <Diag> furq: my audigy 2 zs platinum pro doesnt iirc
[21:29:13 CET] <iH2O> is there a way to speed it up?
[21:29:27 CET] <furq> https://wiki.xiph.org/OpusFAQ#But_won.27t_the_resampler_hurt_the_quality.3F_Isn.27t_it_better_to_use_44.1_kHz_directly.3F
[21:29:27 CET] <Diag> iH2O: how many frames?
[21:29:35 CET] <iH2O> ?
[21:29:36 CET] <thebombzen> Although Diag resampling from 44.1 kHz to 48 kHz is a mathematically perfect operation
[21:29:41 CET] <Diag> iH2O: is the video?
[21:29:45 CET] <iH2O> just a standard youtube video
[21:29:49 CET] <Diag> yeah
[21:29:53 CET] <Diag> how many frames is it?
[21:29:58 CET] <thebombzen> iH2O: try using -vf format=gray8
[21:30:02 CET] <thebombzen> that'll change the pixel format
[21:30:12 CET] <iH2O> ok, trying it
[21:30:15 CET] <furq> x264 doesn't support gray8
[21:30:17 CET] <Diag> thebombzen: how is that a perfect operation
[21:30:18 CET] <thebombzen> oh well
[21:30:26 CET] <Diag> its not
[21:30:34 CET] <thebombzen> Diag: it is in the context of human hearing
[21:30:40 CET] <Diag> lolol
[21:30:53 CET] <Diag> well
[21:30:55 CET] <thebombzen> https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
[21:30:57 CET] <furq> but yeah as that article points out, opus is lossy
[21:31:00 CET] <Diag> put it on a scope and youll see youre dead wrong
[21:31:06 CET] <thebombzen> it's called the Nyquist-Shannon sampling theorem
[21:31:07 CET] <furq> if you don't want to lose quality then don't use a lossy codec
[21:31:25 CET] <thebombzen> a continuous band-restricted signal can be fully reconstructed from a sufficiently large number of samples
[21:32:04 CET] <thebombzen> the range of human hearing is between 0 and 20 kHz, and 44.1 kHz is actually a sufficiently dense sampling to fully reconstruct the 0 - 20 kHz band-restricted signal.
[21:32:07 CET] <iH2O> thebombzen: im afraid its even slower lol
[21:32:12 CET] <Diag> thebombzen: ok so resample a 22.5khz sine wave from 44.1 to 48
[21:32:35 CET] <iH2O> Diag: how can i determine the number of frames?
[21:32:41 CET] <thebombzen> Diag: 22.5 kHz is outside of the range of human hearing
[21:32:42 CET] <furq> iH2O: how long is the video
[21:32:45 CET] <thebombzen> so they're both zero.
[21:32:47 CET] <thebombzen> done.
[21:32:51 CET] <iH2O> 50 minutes
[21:33:13 CET] <thebombzen> Diag: it sounds like the easiest way to turn it to B&W is to strip off the chroma and just encode the luma. zero it out or something.
[21:33:19 CET] <thebombzen> sorry iH20
[21:33:22 CET] <thebombzen> wrong ping
[21:33:30 CET] <Diag> ok so
[21:33:34 CET] <furq> "easiest"
[21:33:42 CET] <thebombzen> "easiest" is wrong
[21:33:50 CET] <thebombzen> but fastest and easiest algorithmically
[21:33:52 CET] <Diag> thebombzen: resample 11025 at 441 to 48
[21:34:13 CET] <furq> but yeah you can try -preset veryfast if you want it to run faster
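[Editor's note: the "strip the chroma, keep the luma" suggestion is conceptually what a grayscale conversion does. For RGB sources the luma is a fixed weighted sum of the channels; a rough numpy sketch using the BT.601 weights, purely to illustrate what ffmpeg's filters compute internally (for video that is already YUV, the chroma planes can simply be dropped or zeroed, with no per-pixel math):]

```python
import numpy as np

# A tiny fake 2x2 RGB frame: red, green, blue, white (0..255).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)

# BT.601 luma weights: Y = 0.299 R + 0.587 G + 0.114 B.
weights = np.array([0.299, 0.587, 0.114])
luma = rgb @ weights

print(luma.round().astype(int))  # white stays 255; colours map to their luma
```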
[21:34:17 CET] <thebombzen> Diag are you asking me to do a bunch of arithmetic in my head
[21:34:23 CET] <Diag> no
[21:35:11 CET] <Diag> thebombzen: 11025 in 441 http://puu.sh/toCpA/641a3a7a51.png
[21:35:22 CET] <thebombzen> okay.
[21:35:37 CET] <thebombzen> you haven't told me anything that disproves the Nyquist-shannon sampling theorem yet.
[21:35:46 CET] <Diag> http://puu.sh/toCrY/865cff87b4.png
[21:35:51 CET] <Diag> 11025 resampled to 48
[21:36:00 CET] <Diag> see how
[21:36:03 CET] <Diag> the fricken shits
[21:36:04 CET] <Diag> dont line upo
[21:36:06 CET] <Diag> up*
[21:36:12 CET] <thebombzen> well if you're doing a super primitive resampling then yes it will screw up
[21:36:14 CET] <Diag> youre gonna have loss
[21:36:17 CET] <Diag> no
[21:36:17 CET] <thebombzen> that's because you're doing it wrong.
[21:36:28 CET] <Diag> its impossible to perfectly recreate that at a non multiple sample rate
[21:36:32 CET] <Diag> its literally impossible
[21:36:39 CET] <thebombzen> it is if you consider the band restriction.
[21:36:41 CET] <Diag> because 11025 is an exact division of 441
[21:36:45 CET] <thebombzen> that you don't have to perfectly recreate it.
[21:36:54 CET] <thebombzen> You just need to recreate the frequencies between 0 and 20 kHz
[21:36:58 CET] <Diag> thats dumb
[21:37:01 CET] <thebombzen> why is it dumb
[21:37:07 CET] <Diag> that is between 0 and 20kh
[21:37:23 CET] <Diag> thats 11.025khz
[21:37:34 CET] <Diag> smack dab in the middle
[21:37:38 CET] <thebombzen> well the sine wave you provided is the unique band-restricted signal that could provide those sampling results
[21:37:57 CET] <thebombzen> which means that from the 44.1 kHz sampling we can uniquely reconstruct the continuous signal.
[21:38:05 CET] <thebombzen> so I'm not sure why there's an issue again.
[21:38:45 CET] <Diag> resampling will always equal loss
[21:39:00 CET] <thebombzen> no, it doesn't.
[21:39:06 CET] <Diag> yeah it will lol
[21:39:16 CET] <Diag> go adjust an image by 5% then back
[21:39:18 CET] <Diag> and get back to me
[21:39:27 CET] <thebombzen> why are we mentioning an image
[21:39:35 CET] <Diag> because its resampling
[21:39:49 CET] <thebombzen> yea but we're specifically talking about audio signals band-restricted between 0 and 20 kHz
[21:39:51 CET] <furq> tbf even that opus article says that resampling from 44.1 to 48 will cause quality loss
[21:40:05 CET] <thebombzen> furq: that's because the algorithm might be bad
[21:40:07 CET] <furq> but it won't be audible so who cares
[21:40:09 CET] <thebombzen> in theory you can do it perfectly
[21:40:14 CET] <Diag> maybe you wont care
[21:40:29 CET] <Diag> its just a dumb thing
[21:40:31 CET] <Diag> why
[21:40:33 CET] <Diag> just why
[21:40:35 CET] <furq> because it's inaudible
[21:40:39 CET] <Diag> no
[21:40:42 CET] <Diag> youre going from 441
[21:40:45 CET] <Diag> to 48
[21:40:47 CET] <furq> so?
[21:40:50 CET] <Diag> probably back to 41 for windows output
[21:40:53 CET] <Diag> 441*
[21:41:01 CET] <furq> you're also going from 1411kbps to 128kbps
[21:41:04 CET] <Diag> then the dac resamples to 48 because its an easy divider
[21:41:09 CET] <furq> i expect that'll have a bigger effect
[21:41:11 CET] <Diag> then it gets played out
[21:41:17 CET] <thebombzen> Diag the only difference would be in frequencies > 20 kHz
[21:41:23 CET] <Diag> no lol
[21:41:36 CET] <thebombzen> well why don't you fucking read the mathematical literature and come back to me
[21:41:46 CET] <furq> the only important thing is that opus is the best lossy codec as verified by listening tests
[21:41:50 CET] <thebombzen> rather than ignore what I'm talking about and just be like "ur wrong you don't know what you're talking about"
[21:41:59 CET] <Diag> i did read it
[21:42:08 CET] <furq> if it could achieve that while only supporting 8khz then i'd still use it
[21:42:14 CET] <thebombzen> you did?
[21:42:20 CET] <Diag> but youre also the one who says 128kbit audio sounds good
[21:42:21 CET] <Diag> lol
[21:42:30 CET] <thebombzen> this has nothing to do with the bitrate
[21:42:32 CET] <furq> well yeah it's not 2003 any more
[21:42:36 CET] <thebombzen> and everything to do with the fourier transform
[21:42:44 CET] <Diag> thebombzen: oh god youre a furry too?
[21:42:50 CET] <thebombzen> ?
[21:42:54 CET] <Diag> nvm missed joke
[21:43:02 CET] <Diag> I understand where youre coming from
[21:43:11 CET] <Diag> at arbitrary frequencies the quality loss is going to be minimal
[21:43:14 CET] <Diag> but what im saying is
[21:43:16 CET] <furq> we're not talking about 128k cbr fhg mp3s off napster
[21:43:19 CET] <Diag> it could be avoided al together
[21:43:20 CET] <furq> 128k opus is more than enough
[21:43:28 CET] <thebombzen> Diag not "minimal" loss
[21:43:29 CET] <Diag> all*
[21:43:39 CET] <thebombzen> the loss is restricted to frequencies > 20 kHz
[21:43:41 CET] <thebombzen> which is zero
[21:43:43 CET] <Diag> its not no loss
[21:43:45 CET] <thebombzen> because humans can't hear that.
[21:43:51 CET] <Diag> there will be /some/ loss
[21:44:14 CET] <thebombzen> I'm not sure how many times I have to say "loss outside 0 to 20 kHz" is fucking irrelevant
[21:44:15 CET] <Diag> wheres kerio
[21:44:21 CET] <Diag> dude
[21:44:21 CET] <thebombzen> because humans can't hear those signals
[21:44:26 CET] <Diag> im not saying above 20khz
[21:44:31 CET] <Diag> im saying in the audible range
[21:44:32 CET] <thebombzen> well then you're wrong.
[21:44:37 CET] <Diag> no lol im not
[21:44:50 CET] <Diag> you will have loss
[21:44:53 CET] <kerio> whatt
[21:45:00 CET] <Diag> kerio: send me something that i can blast
[21:45:01 CET] <thebombzen> well if you understand where I'm coming from then you should know that I'm citing mathematics here
[21:45:02 CET] <Diag> like
[21:45:06 CET] <Diag> not blackstar
[21:45:15 CET] <thebombzen> and Diag you CAN choose to ignore mathematical theorems
[21:45:16 CET] <Diag> thebombzen: and im citing real world results
[21:45:28 CET] <Diag> kerio: like something life on mars
[21:45:30 CET] <thebombzen> well a bad algorithm for implementing a theorem doesn't disprove a theorem
[21:45:49 CET] <Diag> thebombzen: tell that to my analog scope (not the screenshots of cool edit)
[21:46:03 CET] <thebombzen> well then your analog scope has a bad algorithm
[21:46:09 CET] <Diag> also furq ill have you know, cool edit is from 2003 and it fuckin ricks
[21:46:10 CET] <Diag> lol
[21:46:11 CET] <Diag> ricks
[21:46:17 CET] <thebombzen> telling me that your particular software and hardware can't do something doesn't mean that it can't be done
[21:46:21 CET] <thebombzen> it just means the tool you're using can't do it.
[21:46:25 CET] <Diag> uhhuh
[21:46:48 CET] <Diag> so the scope thats capable of reading like ~10mhz easily
[21:46:58 CET] <Diag> cant show issues with a 10khz waveform
[21:47:01 CET] <Diag> i getcha
[21:47:05 CET] <thebombzen> no you don't.
[21:47:09 CET] <thebombzen> you don't get what I'm saying at all
[21:47:11 CET] <furq> what does that have to do with resampling
[21:47:27 CET] <Diag> furq: waveform before and after resampling audio
[21:47:42 CET] <furq> what does the quality of your oscilloscope have to do with resampling theory
[21:47:48 CET] <kerio> Diag: 1) resampling upwards won't cause any loss of information, clearly
[21:47:55 CET] <furq> that's like saying that you can't deinterlace video because you watched a badly deinterlaced video on a 4k tv
[21:47:57 CET] <thebombzen> I'm saying that the Sampling Theorem says that a 44.1 kHz sampled signal is enough information to uniquely reconstruct the continuous signal, band restricted from 0 to 20 kHz
[21:48:10 CET] <kerio> 2) resampling to and from 44.1 and 48 will not cause any appreciable artifacts, especially compared to a lossy compression
[21:48:23 CET] <furq> that might be the case but you've not proved it by having a good oscilloscope
[21:48:32 CET] <furq> but yeah mostly the thing kerio just said
[21:48:41 CET] <furq> if you don't want to lose quality then don't use a lossy codec
[21:48:49 CET] <thebombzen> kerio: well sampling upward can actually cause a change but only > 20 kHz
[21:48:49 CET] <kerio> 3) i don't have life on mars lossless so it wouldn't work
[21:48:52 CET] <furq> the clue's in the name
[21:49:07 CET] <Diag> sorry power cycled
[21:49:14 CET] <Diag> last thing i saw was <furq> what does that have to do with resampling
[21:49:33 CET] <furq> what shitty client have you got that doesn't disconnect over a power cycle but doesn't keep logs
[21:49:58 CET] <Diag> furq: im connected on another computer you dumb fuck
[21:50:14 CET] <Diag> am i really going to go ftp into my shit
[21:50:16 CET] <Diag> and check the logs
[21:50:20 CET] <furq> yes
[21:50:24 CET] <Diag> no
[21:50:30 CET] <Diag> i expect people to act like fucking adults
[21:50:40 CET] <thebombzen> Diag if you're connected on another computer why does it not have the logs
[21:50:47 CET] <furq> yeah i don't understand that either
[21:50:53 CET] <kerio> anyway Diag can you stop being dumb
[21:51:03 CET] <Diag> thebombzen: that computer is at home
[21:51:05 CET] <Diag> i am not at home
[21:51:08 CET] <Diag> jesus christ
[21:51:22 CET] <Diag> it doesnt playback
[21:51:27 CET] <Diag> because it assumes i never lost connection
[21:51:27 CET] <kerio> and accept that maybe in 2017 we have the technology for perceptually lossless audio at the same bitrate you downloaded shit mp3s from kazaa at
[21:51:40 CET] <Diag> wtf is kazaa
[21:51:56 CET] <thebombzen> Diag: here
[21:51:57 CET] <kerio> winmx then
[21:51:58 CET] <thebombzen> I did it for you
[21:51:58 CET] <thebombzen> http://sprunge.us/EeJH
[21:52:17 CET] <furq> why would you use an irc client that isn't on a headless server in the netherlands
[21:52:19 CET] <Diag> kerio: no i went to barnes and noble most of the time or local shops to pick up cds
[21:52:20 CET] <furq> you people are weird
[21:53:06 CET] <Diag> ah nice thanks thebombzen
[21:53:20 CET] <Diag> sure its enough to accurately reconstruct it
[21:53:24 CET] <kerio> i have a bouncer in germany
[21:53:26 CET] <kerio> does that count
[21:53:28 CET] <Diag> but there will be frequencies that will not be exactly the same
[21:53:34 CET] <Diag> you probably wont hear it
[21:53:37 CET] <thebombzen> yea but they'll be > 20 kHz
[21:53:38 CET] <Diag> but its not "lossless"
[21:53:42 CET] <thebombzen> so you literally can't hear it.
[21:53:43 CET] <Diag> its audibly lossless
[21:53:49 CET] <Diag> not mathematically lossless
[21:54:03 CET] <thebombzen> I'm not sure how many times i have to say "within the range of 0 to 20 kHz"
[21:54:09 CET] <thebombzen> for you to learn that that's what I'm talking about
[21:54:21 CET] <thebombzen> because the first four or five times hasn't done the job
[21:54:24 CET] <furq> i don't understand bouncers
[21:54:27 CET] <Diag> thebombzen: jesus christ tell me how you will represent 11025hz the same way on 441 and 48?!@$5
[21:54:33 CET] <Diag> YOU CANT
[21:54:41 CET] <thebombzen> Well of course you can't represent them the same way
[21:54:45 CET] <thebombzen> cause one of them is at 48 kHz
[21:54:46 CET] <Diag> there we go
[21:54:59 CET] <Diag> Its not the same waveform on the other end
[21:55:03 CET] <kepstin> but the ways you represent them are in fact *mathematically identical* modulo quantization
[21:55:04 CET] <kerio> yes, it's the same waveform
[21:55:12 CET] <thebombzen> no it totally is
[21:55:17 CET] <Diag> how
[21:55:29 CET] <thebombzen> because the resampling is also sufficient information to reconstruct the original continuous signal
[21:55:34 CET] <thebombzen> (within 0 to 20 kHz)
[21:55:41 CET] <Diag> so in the screenshot i sent
[21:55:50 CET] <Diag> where cool edit was showing the 11khz wave
[21:55:58 CET] <kerio> Diag: you should probably read a bit on how the fourier transform works
[21:56:03 CET] <Diag> where the waves looked the same though the samples were in different spots
[21:56:14 CET] <furq> i'm really happy that this contentious argument involves the words "cool edit"
[21:56:22 CET] <Diag> furq: have you ever used cool edit
[21:56:25 CET] <Diag> its now known as
[21:56:27 CET] <Diag> adobe audition
[21:56:38 CET] <furq> why would they change the name
[21:56:40 CET] <furq> i don't understand
[21:56:43 CET] <Diag> because adobe bought it
[21:56:47 CET] <Diag> and wanted to not be cool
[21:56:49 CET] <furq> they should have changed it to adobe cool edit
[21:56:59 CET] <kepstin> Diag: maybe you should go watch monty's video on digital sampling: https://xiph.org/video/vid2.shtml
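kepstin's point above — that the two representations are mathematically identical modulo quantization, because a band-limited signal can be reconstructed exactly from its samples at either rate — can be checked numerically. The sketch below uses only NumPy: it samples an 11.025 kHz tone at 44.1 kHz, then re-evaluates it on a 48 kHz grid via truncated Whittaker–Shannon (sinc) interpolation. The rates, tone frequency, and tap count are illustrative choices, not anything specified in the discussion.

```python
import numpy as np

fs_in, fs_out = 44100, 48000   # two sample rates under discussion
f = 11025.0                    # test tone, well below both Nyquist limits
n_in = 4096

# 11.025 kHz sine sampled at 44.1 kHz
x = np.sin(2 * np.pi * f * np.arange(n_in) / fs_in)

def sinc_reconstruct(x, fs_in, t, taps=200):
    """Whittaker-Shannon reconstruction of x (sampled at fs_in) at time t,
    using a truncated sinc kernel of +/- taps input samples."""
    pos = fs_in * t                          # position in input-sample units
    k = int(round(pos))
    lo, hi = max(0, k - taps), min(len(x), k + taps)
    m = np.arange(lo, hi)
    # np.sinc is the normalized sinc, sin(pi u)/(pi u), as the theorem needs
    return float(np.dot(x[m], np.sinc(pos - m)))

# Re-evaluate the continuous signal on a 48 kHz grid (interior samples only,
# away from the edges of the buffer) and compare against the ideal tone.
err = max(abs(sinc_reconstruct(x, fs_in, n / fs_out)
              - np.sin(2 * np.pi * f * n / fs_out))
          for n in range(1000, 1200))
print(err)   # small; bounded by the truncated sinc tail, not by the rate change
```

The residual is purely the truncation error of the finite kernel; a longer kernel (or a properly windowed design, as real resamplers use) drives it down further. Within the band, quantization is the only fundamental loss — exactly the "modulo quantization" caveat.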
[21:57:01 CET] <thebombzen> Audition? that's what people use before they discover that Audacity works and does just as much nothing, right?
[21:57:10 CET] <Diag> lol
[21:57:12 CET] <Diag> audacity works
[21:57:19 CET] <Diag> but it aint shit compared to audition or cool edit
[21:57:29 CET] <thebombzen> Adobe Cool Edit
[21:57:34 CET] <thebombzen> honestly that's a new one
[21:57:49 CET] <thebombzen> and furq
[21:58:12 CET] <thebombzen> [15:56:14] <furq> i'm really happy that this contentious argument involves the words "cool edit"
[21:58:16 CET] <thebombzen> this might go in a signature somewhere
[21:58:21 CET] <thebombzen> this is how happy this makes me
[21:58:24 CET] <Diag> thebombzen: audacity actually copied a lot of wording and layout from cool edit
[21:58:30 CET] <thebombzen> so
[21:58:35 CET] <thebombzen> this is not a problem
[21:58:53 CET] <Diag> when did audacity come out
[21:59:01 CET] <thebombzen> I don't know but I bet wikipedia does
[21:59:07 CET] <Diag> may 28 2000
[21:59:37 CET] <furq> https://upload.wikimedia.org/wikipedia/commons/1/18/Audacity_2-1-2_running_on_Windows10.png
[21:59:43 CET] <furq> how did they get the waveform to have those clocks in it
[21:59:46 CET] <Diag> i know cool edit was around as early as 2005
[21:59:48 CET] <furq> is this some aphex twin shit
[21:59:48 CET] <Diag> jesus
[21:59:49 CET] <Diag> 1996
[21:59:50 CET] <durandal_1707> stop off topic nonsense
[21:59:51 CET] <Diag> 95*
[21:59:53 CET] <Diag> cant type
[22:00:06 CET] <Diag> furq: what the hell lol
[22:00:16 CET] <Diag> eyyy itsa him, italiano
[22:00:17 CET] <thebombzen> the clocks are there because it's a cool edit
[22:00:24 CET] <Diag> o snap
[22:00:29 CET] <furq> damn
[22:00:45 CET] <Diag> thebombzen: furq cool edit looks like so http://puu.sh/toE57/4ca14c2f28.png
[22:01:01 CET] <Diag> way zoomed in of course
[22:02:33 CET] <kepstin> wow, it's an audio editor that actually shows waves correctly rather than using stairsteps.
[22:02:42 CET] <Diag> lol
[22:02:47 CET] <Diag> this is cool edit 95
[22:02:48 CET] <Diag> http://puu.sh/toEdI/1d6bceb25f.png
[22:03:01 CET] <thebombzen> https://i.imgur.com/xMQOuqS.png
[22:03:05 CET] <Diag> 60hz wave because it didnt zoom in further
[22:03:23 CET] <Diag> i actually laughed out loud
[22:03:44 CET] <Diag> thebombzen: http://puu.sh/toEhf/cd584c5a25.png
[22:03:55 CET] <thebombzen> I know why
[22:03:56 CET] <Diag> OH best part
[22:03:58 CET] <thebombzen> it's length 0
[22:04:00 CET] <Diag> I can click and drag the samples
[22:04:07 CET] <thebombzen> I can too
[22:04:09 CET] <Diag> can audacity do that?
[22:04:19 CET] <thebombzen> I can also click and drag the samples in audacity
[22:04:27 CET] <thebombzen> I'm just not sure if they'll move
[22:04:29 CET] <Diag> Lets see
[22:04:35 CET] <thebombzen> but honestly
[22:04:39 CET] <thebombzen> I don't really need that feature
[22:04:54 CET] <furq> this is the lamest software fight i've ever seen
[22:04:59 CET] <Diag> lol
[22:05:08 CET] <furq> at least argue over vim vs emacs or something
[22:05:10 CET] <thebombzen> Audacity vs Cool Edit: Fight
[22:05:10 CET] <Diag> audacity is shit compared to cool edit or audition
[22:05:22 CET] <Diag> its like saying gimp or paint.net is better than photoshop
[22:05:27 CET] <Diag> sure they have the same features
[22:05:31 CET] <Diag> not sure about paint.net anymore
[22:05:34 CET] <Diag> havent used it in forever
[22:05:38 CET] <thebombzen> paint.net
[22:05:39 CET] <Diag> but photoshop kicks so much ass
[22:05:39 CET] <thebombzen> is that like
[22:05:51 CET] <thebombzen> some html5 webapp where you can draw
[22:05:56 CET] <thebombzen> cause that's what it sounds like
[22:05:59 CET] <Diag> http://www.getpaint.net/index.html
[22:06:08 CET] <furq> it's an image editor written in C#, obviously
[22:06:13 CET] <thebombzen> well I knew that
[22:06:26 CET] <thebombzen> but either way
[22:06:36 CET] <thebombzen> I wasn't seriously trying to argue that Audacity was better than Adobe Audition
[22:06:47 CET] <Diag> howd we get here
[22:06:48 CET] <thebombzen> so all these arguments you're giving me on why I'm wrong are both correct and pointless
[22:07:59 CET] <thebombzen> hey durandal_1707 isn't ffmpeg at 3.2.2
[22:08:04 CET] <thebombzen> shouldn't we change the title
[22:33:16 CET] <solrize> i'm doing a big mp4 to webm conversion and i notice ffmpeg is slowly eating memory, now using 4.6GB after about 2 hours of operation.  is that normal?
[22:38:44 CET] <solrize> i think there's a memory leak... build is from master from a few weeks ago.  anything known about this?  it's up to 4.8gb now.  it's a 32gb machine so it should be able to finish, but this doesn't seem right.
[22:42:09 CET] <thebombzen> does sound like a memory leak, but it's possible it's a memory leak in libvpx
[22:42:18 CET] <thebombzen> I'd try isolating that one out first
[22:42:41 CET] <solrize> hmm ok
[22:42:49 CET] <solrize> any advice?  run under valgrind or something?
[22:43:37 CET] <solrize> i'll pull current head and see if it still happens first, after this run finishes in an hour or two
[22:43:39 CET] <solrize> thanks
[22:47:21 CET] <thebombzen> I just meant use vpxenc (the reference encoder)
[22:47:36 CET] <thebombzen> cause that allows you to check whether it's a libvpx issue or an issue with ffmpeg.c or avformat or whatever
[22:48:29 CET] <thebombzen> ffmpeg is a huge pipeline and using vpxenc and seeing if you get the same results allows you to figure out whether or not it's a leak in ffmpeg.c/avformat or in libvpx
[22:53:29 CET] <solrize> oh hmm ok i'll try that.  i don't have vpxenc, i guess it's on webmproject.org?
[22:54:56 CET] <solrize> do i have to convert the mp4 into something to put into vpxenc?
[23:11:59 CET] <kepstin> solrize: you'll probably have to use ffmpeg to decode the video, then pipe raw video into vpxenc.
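One way to wire that up — a sketch, not a tested recipe: the file names are placeholders, and it assumes ffmpeg and vpxenc are on PATH and that the vpxenc build accepts y4m on stdin — is to have ffmpeg emit yuv4mpegpipe and feed it straight into vpxenc. Because decode and encode then run in separate processes, watching them in top tells you which side is leaking.

```python
import shutil
import subprocess

# Placeholder file names -- substitute your own.
src, dst = "input.mp4", "out.webm"

# ffmpeg only decodes here; vpxenc does the VP9 encode in its own process,
# so memory growth can be attributed to one side or the other.
decode_cmd = ["ffmpeg", "-i", src, "-f", "yuv4mpegpipe", "-"]
encode_cmd = ["vpxenc", "--codec=vp9", "-o", dst, "-"]   # "-" reads y4m from stdin

if shutil.which("ffmpeg") and shutil.which("vpxenc"):
    dec = subprocess.Popen(decode_cmd, stdout=subprocess.PIPE)
    enc = subprocess.Popen(encode_cmd, stdin=dec.stdout)
    dec.stdout.close()   # so vpxenc sees EOF when ffmpeg finishes
    enc.wait()
    dec.wait()
```

The same pipeline works as a plain shell one-liner, of course; the point is only that if memory grows in the vpxenc process, the leak is in libvpx, and if it grows in ffmpeg during a libvpx-free decode, it is in ffmpeg.c/avformat.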
[23:12:10 CET] <solrize> aha ok thanks
[23:22:38 CET] <popara> Can ffmpeg be compiled along with nvenc statically, with no shared linking? I managed to make it, and the compilation succeeded, but when i run ffmpeg i get Cannot load libcuda.so.1
[23:22:46 CET] <popara> If i don't do it with static, all is fine
[23:25:29 CET] <BtbN> There is no static nvenc.
[23:25:54 CET] <BtbN> The libraries it loads at runtime are part of the nvidia driver.
[23:27:33 CET] <faLUCE> hello. Given an AVPacket with encoded data, how can I add a mpegts container to it? I tried  av_interleaved_write_frame(avFormatOutputContext, &encodedPkt) but it flushes the pkt after muxing it and I can't reuse it....
[23:30:34 CET] <BtbN> it unrefs it after it's done with it. ref it yourself if you want to keep it.
[23:32:17 CET] <faLUCE> BtbN: and what about av_write_frame? It doesn't take ownership but.... does it have the same behaviour when muxing the packet?
[23:32:28 CET] <DHE> faLUCE: you can use av_packet_ref on another AVPacket to duplicate it
[23:33:18 CET] <faLUCE> ok thanks
[23:33:39 CET] <faLUCE> but what about av_write_frame, non interleaved?
[23:35:05 CET] <DHE> it puts an onus on you to send the various streams in globally ascending dts order
[23:35:21 CET] <DHE> it doesn't have to be perfect, but the closer the better
[23:35:58 CET] <popara> Anyone has any idea  how much transcodings into H265 a quadro k4200 for example is able to do?
[23:39:29 CET] <DHE> popara: a couple of 1080p30 videos in realtime. the specs are roughly constant for the same basic chipset (Pascal, etc) regardless of card, so anything you find for a similar-generation card should be good
[23:40:14 CET] <BtbN> Non Quadro cards are limited to two sessions at a time though.
[23:40:28 CET] <BtbN> And I'm not sure if just being a Quadro card is enough to get rid of that limit.
[23:40:52 CET] <popara> I read that quadro cards do not have a limit
[23:40:57 CET] <popara> the other cards have limit of 2 sessions
[23:41:03 CET] <BtbN> Yes, the real, crazy expensive ones do
[23:41:12 CET] <BtbN> But like, the simpler Quadro NVS don't.
[23:41:31 CET] <DHE> BtbN: the cheap/workstation GPUs still limit 2. I have one and it limits me
[23:41:50 CET] <BtbN> Yeah, that's what I expected.
[23:42:16 CET] <popara> Quadro products (Kepler or Maxwell) can encode more than 2 video streams.
[23:42:20 CET] <popara> this is what i read
[23:42:36 CET] <DHE> the chip is capable of it. nvidia drivers screw you if the card isn't on a whitelist though
[23:42:54 CET] <popara> yes because i run 2 streams in HEVC, and only 3% load on GPU
[23:42:55 CET] <popara> i said wtf
[23:42:59 CET] <BtbN> yes, the chips don't give a damn. It's all driver side.
[23:43:11 CET] <BtbN> The GPU is not involved with video de/encoding.
[23:43:14 CET] <BtbN> GPUs are bad at that.
[23:43:41 CET] <DHE> popara: there's a distinct "GPU" and "video" usage report. encoding will use the other
[23:44:17 CET] <DHE> there is something of a workaround for the 2-stream limit. that's only 2 real-time streams, but it's also 2 as-fast-as-possible streams for bulk work
[23:46:13 CET] <BtbN> yeah, the driver just limits how many encoder contexts you can create at a time.
[23:46:29 CET] <popara> tomorrow i will get one quadro k4200 to see how good it will perform
[23:46:37 CET] <DHE> but there's no speed limit on each context, which is about the only saving grace.
[23:46:37 CET] <popara> i'm curious
[23:46:38 CET] <BtbN> To force you to buy their expensive cards if you want to build a large transcode rig or something.
[23:47:03 CET] <popara> i tried both qsv and nvenc. In the nvenc i got a lot better stream quality , and in both the same command applied
[23:49:15 CET] <kerio> real men encode on cpu
[23:50:02 CET] <DHE> yes, real CPU will provide far better quality
[23:50:34 CET] <popara> Yes but on a 40-core (80-thread) CPU, i was able to transcode only a few streams in HEVC
[23:50:38 CET] <popara> and i think it costs more
[23:50:47 CET] <popara> if the k4200 does not have limits it costs $800
[23:51:23 CET] <DHE> it has limits. I'd need to look them up, but it will have a max fps
[23:51:28 CET] <solrize> where does the limit come from?
[23:52:02 CET] <popara> DHE yes on 30fps
[23:52:09 CET] <DHE> at what resolution?
[23:52:13 CET] <popara> didnt see what
[23:52:14 CET] <popara> that*
[23:53:45 CET] <DHE> K4200 has a Kepler chip
[23:54:28 CET] <solrize> are you saying nvidia purposely throttles their cheaper gpus?!
[23:54:49 CET] <popara> Yes so it is supposed not to have any limits (on sessions).
[23:54:58 CET] <solrize> wow
[23:55:10 CET] <solrize> is it the driver blob doing that, or the board itself?
[23:55:25 CET] <DHE> the information I have only goes up to Pascal. but at 1080p you get between ~151 and ~390 fps, depending on settings. that is almost linearly divided between multiple simultaneous sessions
[23:55:30 CET] <DHE> (in H265 mode)
[23:55:48 CET] <DHE> Kepler should be better than that, but I wouldn't expect leaps and bounds
[23:56:15 CET] <solrize> yeah fps is the most important thing but if there's an artificial limit on sessions that's tacky
[23:56:28 CET] <DHE> solrize: yes the driver caps you at 2 sessions. even if it can do 10 streams in real-time, that means you can do 2 streams at 5x realtime
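DHE's arithmetic generalizes: the driver's session cap limits concurrency, not aggregate throughput, so for offline batch work the chip's fixed capacity is simply split across the allowed sessions. A toy illustration with made-up figures:

```python
# Made-up figures: an NVENC block with aggregate capacity equivalent to
# 10 realtime 1080p30 streams, driver-capped at 2 concurrent sessions.
aggregate_capacity = 10.0  # total encode speed, in multiples of realtime
session_cap = 2            # driver-imposed concurrent-session limit

per_session_speed = aggregate_capacity / session_cap
print(per_session_speed)   # -> 5.0: two streams, each at 5x realtime
```

For live transcoding the cap bites hard (at most 2 realtime streams), while for bulk conversion the same hardware still chews through the queue at full speed, just 2 files at a time.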
[00:00:00 CET] --- Tue Jan 17 2017
