[Ffmpeg-devel-irc] ffmpeg.log.20170221

burek burek021 at gmail.com
Wed Feb 22 03:05:01 EET 2017


[00:24:23 CET] <darrel12> Thanks furq, I'm brand new here. In that case, if anybody can help, here's a pastebin with the CLI commands that I'm having a problem with, http://pastebin.com/3ACKVUtM. My issue is that I'm SSHing into an EC2 image of Ubuntu 16.04 and trying to open an audio stream to my local computer to intercept with ffplay. The first set of commands is how I was initially SSHing into the image and trying to stream to my local machine, and then I 
[00:25:07 CET] <furq> that got cut off after "and then I"
[00:26:06 CET] <darrel12> and then I came across a post on StackOverflow that talks about using port forwarding in the SSH connection, so I could just run the ffmpeg command on the image's localhost and run ffplay on my local machine to intercept it maybe? But I know RTP runs over UDP and SSH connections are TCP, so I feel like that wouldn't work without more work. That is the second set of commands. This all works when I do it completely locally, but not over
[00:26:27 CET] <furq> "but not over"
[00:26:39 CET] <darrel12> but not over the internet. Also, I temporarily have all types of connection allowed both incoming and outgoing for the AMI, so if port restriction or something like this is related it would be on my local machine. If I'm not on here or I don't reply quickly, you can email me at darrelholt12 at yahoo.com
[00:26:46 CET] <furq> we made it
[00:26:56 CET] <darrel12> thanks lol
[00:29:58 CET] <Hello71> if you have only one client a pipe is probably better
[00:30:17 CET] <Hello71> hint: man ssh, /-R
[00:34:06 CET] <darrel12> I saw something about pipes when I was looking around. I can deal with pipes in C++. When I look at the -R flag for ssh none of the entries mention pipes. Got another hint?
[00:35:13 CET] <darrel12> and does this require doing that? is there a method to open a unidirectional pipe without using the -R flag with ssh?
[00:38:57 CET] <darrel12> do you mean the | pipe? and just pipe the ffmpeg command into the ssh command or something?
[00:54:29 CET] <Hello71> -R opens a port on the *remote* machine.
[00:57:45 CET] <darrel12> So that's why when I run `netstat -atun` on the image it shows what I specified with -R, but not on my local machine. But can this still work if I'm trying to use RTP when ssh makes a TCP connection?
[01:16:59 CET] <DHE> the server side listens. when a connection comes into it, your client will open an outgoing connection
[01:17:42 CET] <darrel12> Happy, when you mentioned pipes, do you mean transferring the mp3 file over SSH? Because that's what I'm getting from my searches. I need to stream with ffmpeg and simulate real-time streaming with the -re flag
[01:18:54 CET] <darrel12> DHE, by server side do you mean the machine that is initiating the ssh connection, or the machine being connected to?
[01:20:26 CET] <DHE> the machine listening on port 22 running sshd
[01:22:44 CET] <darrel12> so that would be the machine initiating the ssh connection? I don't even know if this helps with my problem though. I need to stream with ffmpeg; ssh is just how I'm logging into the remote machine.
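An editorial sketch of the -R forwarding hinted at above (the hostname is hypothetical). One caveat worth stating plainly: plain ssh -R tunnels TCP only, which is exactly why an RTP-over-UDP stream won't traverse it directly.

```shell
# Run on the machine that initiates the ssh connection. sshd on the EC2
# host then listens on port 1234 and tunnels any TCP connection made to
# it back to localhost:1234 on this machine.
remote=ubuntu@ec2-host.example.com        # assumption: your EC2 login
fwd="-R 1234:localhost:1234"
echo "ssh $fwd $remote"                   # command shown, not executed here
```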
[02:04:18 CET] <vans163> is anyone familiar with a wav sound binary chunk? I am recording audio and i notice the format is 2 channel, s16, 44100 frequency.  The audio is coming in chunks of 4096 bytes.  Is there a way to determine the sample frame count?
[02:14:46 CET] <NeedHelp12345> I want to build a simple audio player with ffmpeg, but I don't know where to start. Is there any mp3 player example?
[02:15:17 CET] <NeedHelp12345> At the beginning I want just to read and play a mp3 file.
[02:16:57 CET] <NeedHelp12345> programming language: c++
[02:18:43 CET] <vans163> it seems the formula may be 4096 / max_chans(2) / sizeof(s16)(2)
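Editorial aside: that formula is right. A sample frame is one sample per channel, so dividing the chunk size by the channel count and the bytes per sample gives the frame count. A quick sketch:

```shell
# s16 = signed 16-bit = 2 bytes per sample; stereo = 2 channels.
chunk_bytes=4096
channels=2
bytes_per_sample=2
frames=$((chunk_bytes / channels / bytes_per_sample))
echo "$frames"    # 1024 sample frames per 4096-byte chunk
```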
[02:20:07 CET] <darrel12> NeedHelp12345, there are a lot of guides on how to play audio with ffmpeg. Start with https://trac.ffmpeg.org/wiki/StreamingGuide. You probably have to learn how to use Qt as well, assuming you want your audio player to be graphical.
[02:22:03 CET] <faLUCE> hey, I have some questions about flv mux:   1) given that it's a proprietary format, how has it been coded? with reverse engineering?   2) it seems that youtube updates flash very often. Then: what does ffmpeg do? does it sync its flv muxer with adobe's changes?
[02:22:32 CET] <llogan> NeedHelp12345: see ffplay.c and http://dranger.com/ffmpeg/ and maybe https://github.com/mpenkov/ffmpeg-tutorial
[02:23:00 CET] <NeedHelp12345> Thx
[02:23:53 CET] <NeedHelp12345> Is there any documentation about the whole functions offered by the ffmpeg library?
[02:24:41 CET] <c_14> https://ffmpeg.org/doxygen/trunk/index.html
[02:24:41 CET] <llogan> http://ffmpeg.org/doxygen/trunk/index.html
[02:24:50 CET] <faLUCE> NeedHelp12345: you can read the doxygen API. but it's better if you start with doc/examples
[02:24:50 CET] <llogan> stereo
[02:25:09 CET] <darrel12> http://ffmpeg.org/documentation.html
[02:26:35 CET] <NeedHelp12345> Oh, I thought that this documentation is not complete. Thx.
[02:27:50 CET] <faLUCE> NeedHelp12345: for making a player, you need a graphical lib too
[02:28:26 CET] <NeedHelp12345> I know. But my first goal is to create a simple console audio output.
[02:28:40 CET] <llogan> faLUCE: there's the flv container format (specs http://download.macromedia.com/f4v/video_file_format_spec_v10_1.pdf), the "flv1" video format. by "updates" do you mean the flash player program?
[02:28:42 CET] <NeedHelp12345> Could you recommend a graphical lib? (I don't want to use Qt)
[02:29:13 CET] <darrel12> Qt is the easiest for C++ I've come across
[02:29:25 CET] <faLUCE> llogan: yes
[02:29:38 CET] <NeedHelp12345> Is there no other good lib?
[02:29:45 CET] <faLUCE> NeedHelp12345: why don't you want to use Qt?
[02:29:54 CET] <faLUCE> there are many libs (gtk for example)
[02:30:14 CET] <llogan> the player is a separate thing from the actual flv format
[02:31:02 CET] <NeedHelp12345> Because Qt has also 'special' licenses for companies etc... I would prefer to use a complete free base for my applications.
[02:31:48 CET] <darrel12> So you're developing an audio player for a company? I'm pretty sure it's free for personal use
[02:32:01 CET] <faLUCE> llogan: I know, but I don't understand why they update it so often
[02:32:02 CET] <NeedHelp12345> No, it's only for personal use.
[02:32:08 CET] <Hello71> better not use linux then, they use bsd network stack
[02:32:24 CET] <faLUCE> I thought they changed/improved some format's specs from time to time
[02:32:58 CET] <faLUCE> NeedHelp12345: then you have no problems if it's for personal use
[02:33:17 CET] <NeedHelp12345> I just don't want to go the easy way. I want to learn things on my own without using Qt.
[02:33:41 CET] <llogan> looks like the specification hasn't changed since 2010.
[02:33:52 CET] <NeedHelp12345> I want to improve my programming skills.
[02:34:46 CET] <llogan> i'm not sure why the player is updated so much. crappy buggy crap? i don't think i even have it installed.
[02:35:30 CET] <darrel12> I have a question: lets say I start up ffserver with the default values in /etc/ffserver.conf on an Ubuntu 16.04 machine. What's up with the .ffm file? Where is that supposed to be on my machine?
[02:36:01 CET] <llogan> i doubt anyone here uses, or knows how to use, ffserver
[02:36:32 CET] <darrel12> NeedHelp: The reason we have things like Qt to build the GUI is because it's a huge pain in the ass to write up a GUI from scratch. If you wanna do that I would recommend you work with the swing library in Java.
[02:36:56 CET] <darrel12> why llogan?
[02:37:33 CET] <faLUCE> NeedHelp12345: if you want to improve your programming skill in C++ then I suggest you learn C++11
[02:37:43 CET] <llogan> it's basically unmaintained
[02:38:01 CET] <darrel12> What would you use to multicast then?
[02:38:03 CET] <faLUCE> llogan: I see thanks
[02:38:41 CET] <faLUCE> darrel12: if he uses java then he has to wrap libav
[02:40:50 CET] <darrel12> FaLUCE: My point was to deter him from concentrating on writing a GUI from scratch in C++ when there are a thousand more important things he could be learning if he wants to improve his programming skills
[02:41:12 CET] <llogan> darrel12: i'm not sure. not an area i'm interested in or experienced with. if you can get ffserver to work, then go for it, but i think you'll be on your own figuring it out.
[02:41:15 CET] <faLUCE> darrel12: of course
[02:41:21 CET] <shindroid> write your gui in C#
[02:41:23 CET] <vans163> writing a GUI in C++ is like rowing a boat with a pitchfork
[02:41:23 CET] <shindroid> on windows
[02:41:31 CET] <shindroid> ^
[02:41:34 CET] <shindroid> unless you have opengl
[02:41:38 CET] <shindroid> then its worth the row
[02:41:39 CET] <shindroid> LOL
[02:41:49 CET] <vans163> then yea of course, but you still use opengl just for the engine
[02:41:54 CET] <shindroid> but...
[02:41:55 CET] <vans163> then use C#/lua/etc to do the GUI logic
[02:42:01 CET] <shindroid> going through opengl is like rowing with a spoon
[02:43:05 CET] <vans163> im actually working with opengl things now hehe
[02:43:13 CET] <darrel12> llogan: thanks for the heads up. Do you or anybody else here have a lot of experience in streaming rtp over the web? I've only gotten cryptic answers so far when it comes to that...
[02:43:22 CET] <shindroid> im working with shite gdi
[02:43:36 CET] <shindroid> and draw image from ffmpegs junk
[02:43:45 CET] <shindroid> C -> C++11 -> CLI managed -> C#
[02:43:47 CET] <shindroid> hue hueh euehue
[02:43:52 CET] <shindroid> all in vs 2015
[02:44:24 CET] <darrel12> I stopped using vs years ago. Way too much going on there. Isn't it like 15 gigs now?
[02:44:25 CET] <vans163> yea similar except c/C++1x in nacl
[02:44:40 CET] <llogan> darrel12: i've seen this mentioned here recently: https://github.com/arut/nginx-rtmp-module
[02:45:02 CET] <vans163> a bit of JS for the managed stuff but its purely light
[02:45:12 CET] <vans163> most of it has to be in C/C++ for the realtime properties
[02:45:45 CET] <vans163> darrel12: im mostly stuck on vs2010, it works great for ansi c
[02:46:18 CET] <darrel12> llogan: thanks man, I heard some stuff about using nginx but I don't have a lot of experience with it so I didn't delve into that method
[02:46:28 CET] <vans163> you dont need nginx
[02:46:38 CET] <vans163> that seems like it would add complexity to your stack
[02:47:05 CET] <darrel12> wht do I need then? I just want to get point-to-point streaming across the web working before I do any multicasting
[02:47:59 CET] <darrel12> I pasted this earlier: http://pastebin.com/3ACKVUtM It's what I've been trying but I can't get the ffplay interception working
[02:48:10 CET] <vans163> i was thinking just a simple rtp server with python, like SimpleHTTPServer
[02:48:18 CET] <vans163> to just get that PoC out
[02:49:30 CET] <vans163> what errors did you get using that?
[02:49:30 CET] <darrel12> Well, my end goal is going to be using Python to manage a multicast to many clients, so I like the direction you're going with this. what is PoC though? point of conversation?
[02:49:40 CET] <vans163> proof of concept
[02:49:56 CET] <darrel12> no errors, the ffplay just sits on nan, not receiving any of the stream
[02:50:08 CET] <vans163> like.. okay it works using this super inefficient webserver that's using 100% of my cpu, let me spend 3 days to figure out how to config nginx properly now.
[02:50:23 CET] <vans163> vs spending 3 days to configure nginx to realize you don't need it
[02:50:35 CET] <vans163> tho nginx isn't that hard to config :p maybe 3 hours
[02:51:01 CET] <vans163> darrel12: can you get a wireshark capture?
[02:51:04 CET] <darrel12> would it be more efficient with the speed of an AWS? because that's what my server is
[02:51:08 CET] <vans163> that's weird, it seems that should work like that
[02:51:20 CET] <vans163> it would not be cost efficient IMO
[02:51:41 CET] <darrel12> I don't have a box that I can use as a server, and I
[02:51:46 CET] <vans163> AWS charges 1600$ for 35tb out which is 100mbps unmetered
[02:52:07 CET] <darrel12> I
[02:52:13 CET] <vans163> normal dedicated servers charge 500-1000$ for 1gbps unmetered aka 350tb out/in
[02:52:19 CET] <darrel12> I'm not gonna run it that hard
[02:53:50 CET] <vans163> I'm not sure what can be wrong, I've never used ffplay and ffmpeg like that. But could it just be that the AWS instance has no sound card?
[02:53:51 CET] <darrel12> I'm projecting that it won't have more than a gig of data transfer a month for the next few months to a year.
[02:54:00 CET] <vans163> AWS should be great then yea
[02:54:13 CET] <vans163> any VPS really too
[02:54:16 CET] <darrel12> vans163: it's actually the other way around. I'm streaming from the server to my pc for that very reason
[02:54:38 CET] <darrel12> I'll break down the objective of my program real quick
[02:54:57 CET] <vans163> firewall issue? did you check that udp port 1234 is open?
[02:55:06 CET] <vans163> out and in? on both sides? can you get a packet across?
[02:56:26 CET] <darrel12> users will have a client that they create a local playlist with. when they press play to listen to it, it's gonna broadcast to the server which then will multicast to others that want to intercept the stream and listen to the same audio as him/her. I will do the wireshark cap right now, I don't have a firewall on this ubuntu machine, and I'll start the ffplay right now and see if netstat says it's listening
[02:57:42 CET] <vans163> darrel12: from what i remember AWS has complex security groups, I think tho for simplicity you could set it to all open.  These are groups outside the instance's firewall itself which must also allow port 1234 udp in/out.  And pretty neat, I loved spotify for that reason
[02:57:43 CET] <darrel12> Also, on the AMI I have all connections allowed in the security group, so I know it's not the server side, before I set the security up I couldn't even ping the server
[02:57:55 CET] <vans163> except spotify was outdated, I'd much rather listen to a playlist live
[02:57:59 CET] <vans163> that i know others are listening to as well
[02:59:05 CET] <darrel12> Yeah, that's the point actually, and we're gonna have data analytics too; so you can see the most popular "DJ" and such.
[02:59:17 CET] <vans163> i dont have much time to look for new music these days, if I can just tune into different genres that I like and see if I can pick out some songs to listen to for the next few weeks that would be good
[03:01:13 CET] <darrel12> lol, that's exactly the point of this program. We're making this project with scrum, so when we have something working I'll come here and share it with you and you can be an original user
[03:02:01 CET] <vans163> darrel12: ping me yea
[03:02:31 CET] <llogan> just use ffmpeg to make music: http://trac.ffmpeg.org/wiki/FancyFilteringExamples#aevalsrc
[03:04:11 CET] <darrel12> I have so many damn connections open on this laptop the wireshark hits are numerous, and I have only used it to learn how DNS queries work. How do I filter this crap by UDP/RTP?
[03:05:11 CET] <darrel12> I found a packet
[03:05:30 CET] <darrel12> nvm, it's just the ssh connection
[03:06:02 CET] <darrel12> it looks like I'm not getting any packets from the server on UDP or RTP
[03:07:26 CET] <darrel12> vans163: what did you mean by ping you?
[03:17:03 CET] <vans163> darrel12:  i mean PM me once you get the platform out.  and it should be as simple as putting   udp.port == 1234   in as the filter
[03:17:58 CET] <vans163> tcpdump -i eth0 -w wireshark.pcap  (if no gui on the server)
[03:18:15 CET] <darrel12> vans163: ok thanks
[03:18:32 CET] <vans163> then scp SERVER:/tmp/wireshark.pcap . && wireshark wireshark.pcap from a place you got a gui
[03:18:56 CET] <darrel12> ok, so capture on the server and port it to my machine to look at?
[03:19:10 CET] <vans163> yea i usually do that since I don't know the tcpdump cmdline that well nor linux cmdline greps
[03:20:13 CET] <vans163> replace -i eth0 with the main interface on your AWS instance of course
[03:20:43 CET] <vans163> usually that's the one that has the public ip
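Consolidating vans163's capture recipe into one place as an editorial sketch (interface name, server host, and port are assumptions for illustration):

```shell
# On the headless server: capture only the stream's UDP port so the pcap
# stays small, rather than filtering afterwards in Wireshark.
iface=eth0     # replace with your instance's main (public-IP) interface
cap="sudo tcpdump -i $iface -w /tmp/wireshark.pcap udp port 1234"
# From a machine with a GUI: pull the capture over and open it.
fetch="scp SERVER:/tmp/wireshark.pcap . && wireshark wireshark.pcap"
echo "$cap"; echo "$fetch"        # commands shown, not executed here
# In Wireshark's display filter bar:  udp.port == 1234
```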
[03:26:48 CET] <darrel12> K, it's been a while since I used scp but I got the pcap and I'm looking at it now
[03:27:26 CET] <darrel12> so, I'm looking at the packets that are outbound from the server to my machine
[03:28:17 CET] <darrel12> but I don't know what I'm looking for since udp doesn't require a handshake it won't tell me if the data got through
[03:29:46 CET] <darrel12> vans163: also, I don't know if it's relevant but the pcap says the source was the private ip of the AMI
[03:37:57 CET] <stevenliu> Hello
[03:38:00 CET] <darrel12> hi
[03:43:17 CET] <darrel12> vans163: should I use iptables or something to keep the port open for UDP? I thought having ffplay listening to the port was enough?
[03:51:21 CET] <vans163> darrel12: on the server ffplay is (should be) listening on port 1234.   check with  netstat -ulpn
[03:51:47 CET] <vans163> then on the server if you apply the wireshark filter  udp.port==1234 you should see some packets not an empty capture
[03:52:26 CET] <darrel12> But it's the other way around. I'm running ffmpeg on the server and ffplay on my local machine because I have speakers on this computer
[03:52:47 CET] <vans163> replace what I said above ffplay with ffmpeg
[03:52:52 CET] <darrel12> so on the server I run ffmpeg -re -i audio.mp3 -acodec libmp3lame -f rtp rtp://<local_machine_ip>:1234
[03:53:10 CET] <darrel12> and on my local machine i run  ffplay rtp://localhost:1234
[03:53:12 CET] <vans163> on the server when you run netstat -ulpn  do you see an entry like    udp        0      0 0.0.0.0:1234  ?
[03:53:15 CET] <darrel12> does that sound about right?
[03:53:17 CET] <vans163> oh no
[03:53:19 CET] <vans163> thats wrong
[03:53:21 CET] <darrel12> damn
[03:53:54 CET] <vans163> localhost is your local loopback interface. you need to connect to the remote server.   "run  ffplay rtp://<AWS_MACHINE_IP>:1234"
[03:54:03 CET] <darrel12> ok let me try that
[03:54:19 CET] <vans163> but you need to pick up some basics first, i thought we were beyond that
[03:54:33 CET] <darrel12> I've tried all of the permutations of this
[03:54:44 CET] <darrel12> I've only gotten it to work on my local computer doing both
[03:54:50 CET] <vans163> okay np
[03:57:16 CET] <darrel12> k, so on my local machine I run ffplay rtp://<AMI_ip>:1234 and on the server I run  ffmpeg -re -i valse_sentimentale.mp3 -acodec libmp3lame -f rtp rtp://<local_ip>:1234
[03:57:16 CET] <darrel12> Additionally, earlier just to make sure that I have the port open on my local machine I ran sudo ufw allow 1234/udp after enabling the firewall with sudo ufw enable
[03:57:56 CET] <darrel12> still I don't intercept the stream. So now this time I should run the capture while I do the stream? on which machine? both?
[04:00:14 CET] <vans163> on the server just in case run 0.0.0.0:1234   (so replace <local_ip> with 0.0.0.0)  this means to listen on all interfaces
[04:01:47 CET] <darrel12> But if I put 0.0.0.0:1234 into the ffmpeg command on the AMI wouldn't that be a problem? nvm, I'll just do it and see what happens
[04:01:49 CET] <vans163> darrel12: capture on the server will tell you if ffplay can communicate with the server BEFORE iptables/firewall can block things,  capture on the client will tell you if ffplay can send traffic out (most likely yes), capture on the client can tell you if the servers replies are coming to the client.
[04:02:11 CET] <vans163> darrel12: 0.0.0.0 is special in that regard
[04:03:23 CET] <darrel12> I just thought the <local_ip> is supposed to tell ffmpeg where the data is being sent, and if I make it 0.0.0.0 it will presumably send it to every ip?
[04:04:15 CET] <vans163> darrel12: yea so im not sure on the AWS instance but it might have a few interfaces (private net, public, loopback, etc) this way you can connect to ffmpeg using any of their ip ranges
[04:04:40 CET] <vans163> something basic to try first is can you ping the AWS_PUBLIC_IP ?
[04:04:43 CET] <vans163> from the client
[04:05:10 CET] <darrel12> yup
[04:10:48 CET] <darrel12> just to clarify, because I either read your post at 7:00:14 wrong or you said something incorrect. On my local computer I run: "ffplay rtp://<AWS_PUBLIC_IP>:1234" On the AWS I run: "ffmpeg -re -i audio.mp3 -acodec libmp3lame -f rtp rtp://0.0.0.0:1234" However, what I think you meant for me to run is: "ffplay rtp://0.0.0.0:1234" on my local machine and "ffmpeg -re -i audio.mp3 -acodec libmp3lame -f rtp rtp://<LOCAL_IP>:1234"
[04:11:03 CET] <vans163> if 0.0.0.0 fails can you try if UDP packets get through.    on the server run  (this is netcat)    nc -u -l 9123    on the client:  nc -u AWS_PUBLIC_IP 9123  (now type something into the console)
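vans163's UDP reachability check, spelled out as an editorial sketch (the address is hypothetical). If text typed on the client shows up on the server, raw UDP is getting through, which narrows the problem down to the ffmpeg/ffplay setup itself:

```shell
# On the EC2 server: listen for UDP datagrams on port 9123.
server_cmd="nc -u -l 9123"
# On the local client: connect and type; each line should appear server-side.
client_cmd="nc -u 203.0.113.10 9123"    # 203.0.113.10 = hypothetical public IP
echo "$server_cmd"; echo "$client_cmd"  # commands shown, not executed here
```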
[04:12:34 CET] <vans163> darrel12: read this first and then think about that  https://en.wikipedia.org/wiki/0.0.0.0
[04:12:38 CET] <darrel12> Either way, both of the commands I just said don't work. That netcat stuff works though
[04:13:30 CET] <darrel12> vans163: thanks for the link.
[04:14:23 CET] <vans163> if netcat on the server sees what you are typing in the client that means UDP packets are getting through.  can you show the  results of netstat -ulpn on the server?
[04:14:38 CET] <vans163> careful to not publicly paste your public ip
[04:15:38 CET] <darrel12> I had to sudo it: Active Internet connections (only servers)
[04:15:39 CET] <darrel12> Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
[04:15:39 CET] <darrel12> udp        0      0 0.0.0.0:68              0.0.0.0:*                           981/dhclient
[04:16:21 CET] <darrel12> was I supposed to leave the netcat session open when I ran that?
[04:16:33 CET] <vans163> naw but have ffmpeg running
[04:16:48 CET] <darrel12> ok hold up, I'll do it again with ffmpeg running
[04:16:54 CET] <vans163> you can.. use byobu, it's the simplest one I know.  apt-get install byobu to multitask
[04:17:14 CET] <darrel12> you mean in 1 ssh connection?
[04:17:17 CET] <vans163> yea
[04:17:20 CET] <darrel12> ok
[04:17:56 CET] <darrel12> shit
[04:17:59 CET] <vans163> its pretty simple compared to tmux or others.  you just need to know 3 buttons: F2 F3 F4
[04:18:03 CET] <darrel12> i ran it and ctrl+c doesn't stop it
[04:18:08 CET] <darrel12> nvm, exit does
[04:18:14 CET] <vans163> type exit or Ctrl A+D to detach
[04:18:24 CET] <vans163> k 3 buttons and 1 combo
[04:18:40 CET] <darrel12> brb. woman wants me to kill spider
[04:18:48 CET] <vans163> godspeed
[04:20:28 CET] <darrel12> ok, so i run byobu (whatever it's supposed to mean), and I run ffmpeg, what do i press to start a new task
[04:21:46 CET] <darrel12> Also, I have a tangential question. How do I manage to log this chat if my computer is off? do I have to have it perpetually running on a box somewhere and just pull the logs when I wanna see what I've missed?
[04:23:03 CET] <vans163> darrel12: there is ZNC for logging in that way (that i know of) or you can use a sweet, unfortunately windows-only IRC client like HydraIRC which can log if you set it to.  in fact most IRC clients can log if you set them to
[04:23:17 CET] <vans163> ZNC you can run in the cloud, so you dont miss anything
[04:23:32 CET] <vans163> byobu press F2 to make a new tab, F3 F4 to cycle
[04:23:34 CET] <darrel12> That's what I'll have to do then
[04:24:39 CET] <darrel12> Active Internet connections (only servers)
[04:24:39 CET] <darrel12> Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
[04:24:39 CET] <darrel12> udp        0      0 0.0.0.0:49320           0.0.0.0:*                           4410/ffmpeg
[04:24:39 CET] <darrel12> udp        0      0 0.0.0.0:49321           0.0.0.0:*                           4410/ffmpeg
[04:24:39 CET] <darrel12> udp        0      0 0.0.0.0:68              0.0.0.0:*                           981/dhclient
[04:25:20 CET] <vans163> so ffmpeg is listening on ports 49320 and 49321,  why isn't it listening on 1234 x.x
[04:25:23 CET] <darrel12> that ffmpeg is sending it to <AWS_IP> not 0.0.0.0 was it supposed to be 0.0.0.0
[04:25:30 CET] <vans163> no
[04:25:38 CET] <vans163> ffmpeg 0.0.0.0 yes.   ffplay no
[04:25:59 CET] <darrel12> just a sec...
[04:26:44 CET] <vans163> darrel12: https://en.wikipedia.org/wiki/By%C5%8Dbu
[04:27:04 CET] <darrel12> ahh, that makes sense now.
[04:27:27 CET] <vans163> i initially thought it meant bring your own booze u, to reflect on the life sysadmins often have
[04:27:44 CET] <vans163> u meaning the 1u datacenter rack unit
[04:30:30 CET] <darrel12> lol, I like it but I'd probably understand more if I were a network guy. k, so I changed the port to 9876 because I think earlier I had an ssh instance freeze up without closing 1234. I'm running "ffplay rtp://<AWS_IP>:9876" locally and "ffmpeg -re -i valse_sentimentale.mp3 -acodec libmp3lame -f rtp rtp://0.0.0.0:9876" on the AWS
[04:32:09 CET] <darrel12> I still get: Active Internet connections (only servers)
[04:32:09 CET] <darrel12> Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
[04:32:10 CET] <darrel12> udp        0      0 0.0.0.0:40359           0.0.0.0:*                           11270/ffmpeg
[04:32:10 CET] <darrel12> udp        0      0 0.0.0.0:40360           0.0.0.0:*                           11270/ffmpeg
[04:32:10 CET] <darrel12> udp        0      0 0.0.0.0:68              0.0.0.0:*                           981/dhclient
[04:32:17 CET] <darrel12> no 9876 port
[04:32:45 CET] <darrel12> I'm supposed to run the netstat on the server right?
[04:32:54 CET] <vans163> darrel12: what does netstat -tlpn return?   yes server
[04:33:16 CET] <darrel12> tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1161/sshd
[04:33:16 CET] <darrel12> tcp6       0      0 :::22                   :::*                    LISTEN      1161/sshd
[04:33:41 CET] <darrel12> if it matters, I'll post how I ssh into the box real quick
[04:33:46 CET] <vans163> naw it doesnt
[04:33:49 CET] <darrel12> ok nvm
[04:34:13 CET] <vans163> so the problem seems to be,  the ffmpeg command is not listening on the port, perhaps the command is not proper?
[04:34:38 CET] <darrel12> it works if I run both it and ffplay locally
[04:35:37 CET] <vans163> can you try running it with sudo and change the port to 35000 ?
[04:35:45 CET] <vans163> then check netstat -tlpn and -ulpn
[04:35:47 CET] <darrel12> ok
[04:37:13 CET] <darrel12> tlpn: tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1161/sshd
[04:37:13 CET] <darrel12> tcp6       0      0 :::22                   :::*                    LISTEN      1161/sshd
[04:37:23 CET] <darrel12> ulpn: udp        0      0 0.0.0.0:55520           0.0.0.0:*                           15856/ffmpeg
[04:37:24 CET] <darrel12> udp        0      0 0.0.0.0:55521           0.0.0.0:*                           15856/ffmpeg
[04:37:24 CET] <darrel12> udp        0      0 0.0.0.0:68              0.0.0.0:*                           981/dhclient
[04:37:36 CET] <darrel12> oh wait
[04:38:05 CET] <darrel12> tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1161/sshd
[04:38:05 CET] <darrel12> tcp6       0      0 :::22                   :::*                    LISTEN      1161/sshd
[04:38:10 CET] <darrel12> udp        0      0 0.0.0.0:48915           0.0.0.0:*                           16976/ffmpeg
[04:38:10 CET] <darrel12> udp        0      0 0.0.0.0:48916           0.0.0.0:*                           16976/ffmpeg
[04:38:10 CET] <darrel12> udp        0      0 0.0.0.0:68              0.0.0.0:*                           981/dhclient
[04:38:18 CET] <darrel12> those are after running ffmpeg with sudo
[04:38:35 CET] <vans163> 0.0.0.0:48915   <<  48915  is the port,  one of those ports should be 9876
[04:38:38 CET] <darrel12> above "oh wait" is without
[04:38:46 CET] <darrel12> well, 35000 now
[04:38:49 CET] <vans163> or yea
[04:38:50 CET] <vans163> that
[04:39:07 CET] <darrel12> if you want we can rdp if it's easier
[04:39:12 CET] <vans163> some distros block listening on ports under a certain number, often it's 1024.  i said try 35000 just in case
[04:39:20 CET] <darrel12> oic
[04:39:48 CET] <vans163> no need for rdp I think we both have an idea with what the error is
[04:40:01 CET] <vans163> ffmpeg is failing to listen on the port you tell it to, now its time to figure why
[04:40:18 CET] <vans163> I'm not familiar with using ffmpeg the way you are using it
[04:40:26 CET] <darrel12> so it's ffmpeg not broadcasting properly?
[04:40:31 CET] <darrel12> what do you usually use it for?
[04:41:12 CET] <vans163> ffmpeg needs to start a rtp server (to accept peers), you are telling ffmpeg to start an rtp server on all interfaces (0.0.0.0) on the port 9876.  Ffmpeg is not doing this.
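An editorial correction at this point in the exchange: with "-f rtp", ffmpeg is a sender, not a server, so it was never going to show up listening on 1234. The ephemeral UDP ports netstat kept showing (49320/49321 and so on) are ffmpeg's local source ports, which is expected behavior. It is the receiver (ffplay) that binds the destination port, and ffplay generally needs the SDP description ffmpeg emits in order to decode the stream. A hedged sketch, with a hypothetical client IP:

```shell
# On the EC2 server: send RTP *to* the client's public IP, and have ffmpeg
# write the SDP description it generates to a file for the receiver.
send="ffmpeg -re -i audio.mp3 -acodec libmp3lame -f rtp rtp://198.51.100.7:1234 -sdp_file stream.sdp"
# On the local machine: play from the SDP (copied over), which tells ffplay
# which local port to bind and how to decode the payload.
recv="ffplay -protocol_whitelist file,udp,rtp stream.sdp"
echo "$send"; echo "$recv"    # commands shown, not executed here
```

Note also that the client must be reachable on UDP 1234 from the internet; a NAT or home router in front of the client will silently drop the inbound stream, which matches the symptoms described here.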
[04:41:24 CET] <vans163> I just used it here and there for encoding h264 video from raw pixels
[04:41:59 CET] <darrel12> oh, just hobby based then?
[04:42:38 CET] <vans163> well im working on a cloud gpu accelerated office and gaming platform
[04:43:02 CET] <vans163> so you can use AutoCAD or play No Mans Sky on a chromebook or macbook air in 1080p 60fps from your browser
[04:43:25 CET] <vans163> well you won't get 1080p on chromebook or macbook air unless you connect it to a bigger display :p
[04:43:59 CET] <darrel12> Wow. so the cloud based server does all of the hard work and just sends the video to the chromebook so it doesn't have to grind away at the gpu to try to crank out all of the graphics?
[04:44:25 CET] <vans163> yea exactly. as long as the client has a decent hardware h264 decoder, most intel cpus after 2011 have those
[04:44:54 CET] <vans163> but software decoding is not bad either, quite usable
[04:45:07 CET] <darrel12> that's pretty damn cool. let me know when you get it finished.
[04:45:27 CET] <vans163> darrel12: heh if you're still idle here within the next 6 months I'll ping you :P
[04:46:16 CET] <darrel12> I plan to be. This is my first time using irc since like 2009 but I like it now that I have somewhere to talk about something I do.
[04:47:34 CET] <darrel12> So, where's the starting point to figuring out why ffmpeg won't start the rtp server? Do you have an idea or should I start googling?
[04:47:51 CET] <vans163> darrel12: yea sometimes i save a crapload of time through dialog on irc, other times i waste a crapload of time chatting :p
[04:48:28 CET] <vans163> darrel12: i would check tomorrow during working hours in this channel, maybe someone knows what can be wrong with the command. and I've no idea where to start
[04:48:41 CET] <darrel12> vans163: then it must at least balance the wasted time :P
[04:49:24 CET] <vans163> darrel12: https://cdn.meme.am/cache/instances/folder977/50499977.jpg
[04:49:57 CET] <darrel12> Well, thanks for the help thus far. I got the most cryptic answers of my life about 5 hours ago, made me more confused than it helped.
[04:50:28 CET] <vans163> darrel12: no worries, time for me to get back to streaming the audio to go along with the h264 video
[04:51:14 CET] <darrel12> vans163: lol squid pro row. What I meant was the time you've wasted was made up by the time you've saved. enjoy it. ttyl
[04:51:37 CET] <vans163> darrel12: heh yea I think I used the wrong term there :P
[04:51:40 CET] <vans163> ttyl
[05:20:43 CET] <thebombzen> what exactly is the number of bframes? -c libx264 -preset veryslow has "8 B-frames" but it's eight B-frames in what context? clearly not for the whole file
[05:24:55 CET] <c3r1c3-Win> per GOP
[05:25:34 CET] <thebombzen> GOP?
[05:25:46 CET] <thebombzen> Eight B-frames per republican?
[05:26:16 CET] <Diag> Here's the Democrats
[05:26:20 CET] <c3r1c3-Win> Group of Pictures.
[05:26:30 CET] <Diag> Anyone? Anyone?
[05:26:35 CET] <thebombzen> what is a "Group of Pictures"
[05:26:47 CET] <c3r1c3-Win> Diag: Sorry. No idea where that one goes.
[05:27:09 CET] <thebombzen> yea sorry Diag I don't know what I'm supposed to follow that up with
[05:27:10 CET] <Diag> Because _corrupt joined
[05:27:16 CET] <Diag> Ffs
[05:27:19 CET] <Diag> Nvm
[05:27:22 CET] <darrel12> I got it diag
[05:27:30 CET] <darrel12> I just got back, It was a good one
[05:27:37 CET] <thebombzen> oh I thought it was a reference to "Eight B-frames per republican"
[05:27:44 CET] <thebombzen> which I said about 15 seconds earlier
[05:27:50 CET] <Diag> Kinda lol
[05:27:57 CET] <Diag> Just because political parties
[05:27:58 CET] <c3r1c3-Win> thebombzen: http://bfy.tw/AD3a
[05:28:47 CET] <thebombzen> c3r1c3-Win: no need to be obnoxious
[05:28:58 CET] <thebombzen> because 1. I have no idea how long a GOP actually is
[05:29:07 CET] <thebombzen> so the wikipedia article on it isn't helpful
[05:29:25 CET] <c3r1c3-Win> It's as long as it's set to be.
[05:29:26 CET] <_corrupt> i will eat your fuckin heart
[05:29:36 CET] <thebombzen> which is how long?
[05:29:45 CET] <c3r1c3-Win> So if your keyint is 250, then usually your GOP is 250
[05:30:02 CET] <thebombzen> ah so a GOP is the interval between kframes
[05:30:10 CET] <thebombzen> but 8 bframes doesn't seem like a lot out of 250
[05:30:29 CET] <c3r1c3-Win> that's because b-frames are used in between p-frames.
[05:31:22 CET] <c3r1c3-Win> i.e Start at the beginning.
[05:31:31 CET] <c3r1c3-Win> What's a keyframe? (Looks like you know that).
[05:31:36 CET] <thebombzen> I do know what an Iframe is
[05:31:41 CET] <c3r1c3-Win> What's an I-frame?
[05:31:43 CET] <c3r1c3-Win> Very good.
[05:31:47 CET] <c3r1c3-Win> What's a p-frame?
[05:31:51 CET] <thebombzen> I also know what those are
[05:31:58 CET] <thebombzen> I'm just thinking that eight doesn't seem like a lot out of 250
[05:32:18 CET] <darrel12> Sorry to interrupt guys, let's say I just set up ZNC on an Ubuntu machine to log this channel when I'm away. Where does it save the logs? Or do I have to stop ZNC for it to save the log or something? Because there is no log directory ~/.znc/moddata/log/
[05:32:18 CET] <c3r1c3-Win> Well b-frames (or bi-directional frames) are used in between p and/or i frames, and several can be stacked together.
[05:32:20 CET] <thebombzen> given that one of those is an I-frame, it means that 241 are Pframes and eight are Bframes
[05:32:38 CET] <thebombzen> you don't need 241 P-frames to nest between eight B-frames
[05:33:17 CET] <c3r1c3-Win> The more you have together, the more memory is used on reconstruction of the stream (among other things)
[05:33:30 CET] <thebombzen> sure, but even if they're not consecutive
[05:33:45 CET] <thebombzen> you only need 16 Pframes to ensure that you never have consecutive Bframes
[05:33:52 CET] <thebombzen> if you only have eight Bframes
[05:33:59 CET] <c3r1c3-Win> thebombzen: You have it backwards.
[05:34:04 CET] <thebombzen> well technically you only need eight or nine but I'm allowing a padding of two P-frames between B-frames
[05:34:17 CET] <c3r1c3-Win> B-frames are used in groups in between P and I frames.
[05:34:32 CET] <thebombzen> If they're used in groups, then why are there only eight in 250?
[05:34:34 CET] <c3r1c3-Win> So it would go IPBBBPBBBPBBB
[05:34:42 CET] <thebombzen> that looks like nine of them
[05:34:42 CET] <c3r1c3-Win> Not IBPPPBPPPBPPP
[05:34:51 CET] <c3r1c3-Win> Nope, that would be 3
[05:34:54 CET] <thebombzen> okay sure, but that still looks like significantly more than eight bframes in 250
[05:35:08 CET] <thebombzen> so it's NOT 8 B-frames per GOP
[05:35:23 CET] <thebombzen> it's max 8 *consecutive* Bframes, NOT per GOP
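The distinction being settled here can be sketched in a few lines of Python: x264's bframes setting bounds the longest *consecutive run* of B-frames, not a per-GOP total. (The frame-type strings below are hypothetical, purely for illustration.)

```python
def max_consecutive_bframes(frame_types):
    """Return the longest run of consecutive 'B' frames in a frame-type string."""
    longest = run = 0
    for t in frame_types:
        run = run + 1 if t == "B" else 0
        longest = max(longest, run)
    return longest

gop = "I" + "PBBB" * 10  # hypothetical GOP: many B-frames total, runs of length 3
print(max_consecutive_bframes(gop))  # 3
```

So a GOP of keyint 250 can contain far more than 8 B-frames in total; the cap applies only to each uninterrupted run.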
[05:36:35 CET] <c3r1c3-Win> Sorry, yes. NOT per GOP, but rather per... I guess you'd call it a sequence (I'm actually trying to recall what the proper term is)
[05:36:44 CET] <thebombzen> the term is "consecutive"
[05:37:07 CET] <thebombzen> how do consecutive B-frames even work, if B-frames reference the immediate predecessor and immediate successor?
[05:37:33 CET] <c3r1c3-Win> So you have an I frame
[05:37:50 CET] <c3r1c3-Win> then you have the P frame, and then you have a b frame.
[05:38:13 CET] <thebombzen> sure, but how does one decode the sequence IBBI if each B-frame requires you to decode the other one first
[05:38:50 CET] <c3r1c3-Win> Why not have 2+ b-frames? When re-constructing the image you would have to apply the b-frames' effect to the displayed (reconstructed)P or I frame. It just takes more work/memory to do so.
[05:39:02 CET] <thebombzen> no, I'm asking how it's mathematically possible
[05:39:07 CET] <thebombzen> this is not an issue of memory
[05:39:25 CET] <thebombzen> if you have the sequence IBBI, how do you decode the B-frames if decoding each one requires you to decode the other one first?
[05:40:15 CET] <c3r1c3-Win> Because it doesn't work like that.
[05:40:40 CET] <c3r1c3-Win> It's a vector, you do the math and put it together.
[05:40:51 CET] <thebombzen> "You do the math" does not actually tell me anything
[05:41:05 CET] <thebombzen> if "it doesn't work like that" then how does it work
[05:41:08 CET] <c3r1c3-Win> Well if you really want to know you can look at the source code. It does the math.
[05:41:21 CET] <c3r1c3-Win> So it'll tell you how it does it.
[05:41:50 CET] <thebombzen> So do you not know? because looking at an encoder source code isn't really helpful here
[05:42:10 CET] <c3r1c3-Win> I mean the Wikipedia page that explains GOPs actually explains a bit of what you're asking. You might want to go re-read it.
[05:43:02 CET] <thebombzen> I did read it, and I get that in H.264, the B-frames don't have to reference immediate neighbors
[05:43:08 CET] <thebombzen> but in MPEG2, for example, they do.
[05:43:20 CET] <thebombzen> So does that mean MPEG2 is limited to one consecutive B-frame, or is there something I'm missing?
[05:44:13 CET] <c3r1c3-Win> Actually the Wikipedia article addresses that as well: In older designs such as MPEG-1 and H.262/MPEG-2, each B picture can only reference two pictures, the one which precedes the B picture in display order and the one which follows, and all referenced pictures must be I or P pictures.
[05:44:37 CET] <thebombzen> I know that.
[05:44:38 CET] <thebombzen> I just read that.
[05:44:44 CET] <c3r1c3-Win> So you can have 2 b-frames next to each other in MPEG2
[05:45:02 CET] <c3r1c3-Win> i.e. IPBBPBBPBB
[05:45:06 CET] <thebombzen> But if the B-frame has to reference the previous and then the next frame
[05:45:17 CET] <thebombzen> how could you have two B-frames next to each other
[05:45:24 CET] <thebombzen> given that they can only reference P-frames
[05:45:48 CET] <c3r1c3-Win> Because it isn't a 'frame' per se. It's a set of vectors and by using (some very cool math) you reconstruct the picture.
[05:46:02 CET] <thebombzen> "each B picture can only reference two pictures, the one which precedes the B picture in display order and the one which follows"
[05:46:06 CET] <thebombzen> that sounds awfully lot like a frame
[05:46:34 CET] <c3r1c3-Win> *sigh*, yes it is a frame in the fully proper sense of the word.
[05:47:06 CET] <thebombzen> if a B picture can only reference its immediate neighbors, and it can only reference I or P-frames, then how can it have B-frames that are also immediate neighbors?
[05:47:09 CET] <c3r1c3-Win> But that frame isn't anywhere the same a JPEG image. It's something completely different.
[05:47:21 CET] <c3r1c3-Win> *same as JPEG image (like an I-frame would be)
[05:47:36 CET] <thebombzen> I'm really not sure why JPEG is relevant. Again, "if a B picture can only reference its immediate neighbors, and it can only reference I or P-frames, then how can it have B-frames that are also immediate neighbors?"
[05:48:04 CET] <c3r1c3-Win> Because it's not a picture, it's a set of motion vectors.
[05:48:16 CET] <thebombzen> what is "it"
[05:48:27 CET] <c3r1c3-Win> B-frames is 'it'.
[05:48:41 CET] <thebombzen> sure, but that still doesn't really explain anything.
[05:48:46 CET] <c3r1c3-Win> So.. Because a b-frame isn't a picture, it's a set of motion vectors.
[05:49:07 CET] <c3r1c3-Win> It explains it pretty well.
[05:49:30 CET] <thebombzen> Okay, a B-frame isn't a picture, but rather it's an array of vectors. And B-frames can only reference the immediate neighboring "pictures."
[05:49:43 CET] <c3r1c3-Win> Yes.
[05:49:57 CET] <thebombzen> so you're saying specifically, B-frames can only reference the closest I-frame or P-frame
[05:50:06 CET] <thebombzen> not necessarily "just the closest frame"
[05:50:28 CET] <thebombzen> because if so, why didn't you just say that
[05:50:29 CET] <c3r1c3-Win> In MPEG2, that's the case. In MPEG4 they expanded on what it can do at the cost of increased processing pwoer needs and memory.
[05:50:43 CET] <c3r1c3-Win> *power needed and memory.
[05:51:14 CET] <thebombzen> Sure, but if your answer is "B-frames don't reference the immediately adjacent frames, they reference the closest I/P frames on either side"
[05:51:18 CET] <thebombzen> Why didn't you just say that
[05:51:19 CET] <Kirito> https://usercontent.irccloud-cdn.com/file/39qfZSJH/error.png
[05:51:26 CET] <Kirito> I'm at a bit of a complete loss here. D:\ has 100GB of space free to start with, but... ..why is it complaining that there's not enough space on the INPUT files drive?
[05:51:28 CET] <thebombzen> instead of saying "cool vector math" or something extremely unhelpful
[05:51:41 CET] <Kirito> (Powershell is a horrendous pain to copy and paste from, sorry)
[05:52:22 CET] <c3r1c3-Win> thebombzen: I didn't say that because it's not ture.
[05:52:23 CET] <thebombzen> Kirito: perhaps because you're putting it in R: and not D:?
[05:52:25 CET] <c3r1c3-Win> *true
[05:52:31 CET] <thebombzen> c3r1c3-Win: then what are you trying to say?
[05:52:39 CET] <thebombzen> if that's not true, then what is the truth?
[05:52:51 CET] <Kirito> thebombzen: yes, the output disk is R:\, which is a clean 100GB disk
[05:52:54 CET] <thebombzen> because you're doing a very poor job at explaining it
[05:52:55 CET] <c3r1c3-Win> thebombzen: What standard are you looking at? MPEG2 or MPEG4? They are different.
[05:53:03 CET] <Kirito> D:\ contains the input frames, and also has 100GB of disk space free
[05:53:06 CET] <Kirito> either way.. what?
[05:53:40 CET] <thebombzen> Kirito: you're probably out of memory
[05:54:00 CET] <thebombzen> c3r1c3-Win: it doesn't matter right now
[05:54:02 CET] <Kirito> <_< so I have to get more memory to encode it?
[05:54:04 CET] <thebombzen> I'm asking how it works mathematically
[05:54:53 CET] <c3r1c3-Win> thebombzen: And that's why I said you should look at the code. It tells you that in all the gory detail. For me to state it mathematically, I would just be copying the code and pasting it into chat.
[05:54:57 CET] <thebombzen> well a 3840x2160 rgb48 png file is 50 MB of memory per frame
[05:55:12 CET] <thebombzen> c3r1c3-Win: no it doesn't, because I don't know C
[05:55:30 CET] <thebombzen> and if you can't explain it without copy/pasting C code then you must either not know how it works or you're just not articulate enough to explain it
[05:55:44 CET] <Kirito> How many frames is it trying to load into memory at once? Not all of them, surely?
[05:55:46 CET] <thebombzen> in which case, don't chastise me for not asking
[05:56:06 CET] <thebombzen> Kirito: either way, you should update your FFmpeg
[05:56:14 CET] <thebombzen> you're using a 2 year old build
[05:56:27 CET] <thebombzen> try with an updated version. it's possible you're triggering a bug that has since been fixed
[05:56:46 CET] <Kirito> LOL really? I think I'll just nuke this and install Linux for the encoding then >_>
[05:56:55 CET] <thebombzen> well keep the file around lol
[05:57:06 CET] <thebombzen> but the copyright date on the FFmpeg build doesn't like
[05:57:10 CET] <thebombzen> doesn't lie*
[05:57:32 CET] <Kirito> The changelog for the ffmpeg build I downloaded does, however >_>
[05:57:42 CET] <Kirito> showed it was updated in the last few months
[05:57:50 CET] <thebombzen> well your screenshot says 2015
[05:57:55 CET] <thebombzen> so...
[05:58:08 CET] <Kirito> yeah, oh well, thanks
[05:58:11 CET] <thebombzen> c3r1c3-Win: if you think that "mathematically explain" can't be done more concisely than "optimized C code" then you probably have not seen a mathematical explanation
[05:58:49 CET] <thebombzen> for example, I could explain how JPEG works without pointing to the reference encoder
[06:00:52 CET] <c3r1c3-Win> I like this video a lot for a lot of reasons, but the best part for me is at about the 2:18 mark when he explains the motion and vector part and actually gives a visual to go along with it.
[06:00:55 CET] <c3r1c3-Win> https://www.youtube.com/watch?v=r6Rp-uo6HmI
[06:01:53 CET] <c3r1c3-Win> So you have some data that says to the decoder "Move this pixel over here" (i.e. the vectors that I keep referring to).
[06:04:06 CET] <c3r1c3-Win> So how can a vector be 'bidirectional'? Well you say in the b-frame "Hey! I want these top 50 pixels to come from the last i-frame, and these bottom 75 pixels to come from the next I-frame, and these 15 middle pixels to come from the b-frame that just happened".
[06:04:34 CET] <c3r1c3-Win> and those 17 on the right side... take'em from the frame that's about to happen.
[06:05:49 CET] <c3r1c3-Win> How does it know that the frame that's about to happen has the pixels it wants? By searching through the GOP and trying to figure out which way is best to compress the data.
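The bidirectional part of this can be sketched minimally: a B-frame block is predicted by combining samples from a past reference and a future reference. (A plain per-sample average is an assumption for illustration; real codecs use motion-compensated and possibly weighted prediction.)

```python
# Bi-prediction sketch: combine a block from a past reference frame with the
# co-located block from a future reference frame by simple averaging.
def bipredict(past_block, future_block):
    return [(p + f) // 2 for p, f in zip(past_block, future_block)]

print(bipredict([10, 20, 30], [30, 40, 50]))  # [20, 30, 40]
```

The encoder then only has to store the motion vectors plus the (small) residual between this prediction and the actual picture.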
[06:06:43 CET] <thebombzen> I know how prediction works
[06:06:50 CET] <thebombzen> it appears you didn't read my question
[06:07:08 CET] <thebombzen> which was, again, "if a B picture can only reference its immediate neighbors, and it can only reference I or P-frames, then how can it have B-frames that are also immediate neighbors?"
[06:07:54 CET] <c3r1c3-Win> thebombzen: because b-frames can't only reference an I or p-frame.
[06:08:05 CET] <c3r1c3-Win> Taht's only true in MPEG1/2 land. Not true in MPEG4 land.
[06:08:14 CET] <thebombzen> Sure, but I'm asking about MPEG2.
[06:08:23 CET] <c3r1c3-Win> In MPEG4 land a b-frame can reference another b-frame.
[06:08:34 CET] <thebombzen> Okay, but in MPEG2, how does that work?
[06:08:52 CET] <c3r1c3-Win> In MPEG 2 let's say you have a sequence of IPBBPBBPBB
[06:09:15 CET] <c3r1c3-Win> the first b-frame references the P frame just behind it.
[06:09:36 CET] <c3r1c3-Win> The 2nd B frame references the P frame that's about to come next.
[06:09:52 CET] <thebombzen> sure, but B-frames reference TWO DIFFERENT FRAMES
[06:10:34 CET] <thebombzen> HOW DO THEY DO THAT
[06:10:51 CET] <thebombzen> given that only one of their neighbors is a P/I frame.
[06:11:44 CET] <c3r1c3-Win> By literally jumping ahead and reconstructing the frame from the other points.
[06:12:15 CET] <thebombzen> No, that's called "motion prediction and it doesn't answer my question"
[06:12:24 CET] <thebombzen> Iknow how motion prediction works.
[06:12:29 CET] <thebombzen> I'm not asking you how motion prediction works.
[06:12:36 CET] <thebombzen> and at this point I think you're trolling
[06:12:52 CET] <c3r1c3-Win> But all a b-frame is, is "motion compensation"
[06:13:04 CET] <thebombzen> no, a B-frame is a frame that is encoded using motion compensation.
[06:13:08 CET] <thebombzen> Those are not the same things.
[06:13:16 CET] <c3r1c3-Win> blah.. not compensation, prediction.
[06:13:38 CET] <thebombzen> Again, A B-frame is a frame with redundancy eliminated by encoding it using motion prediction.
[06:13:53 CET] <thebombzen> It itself is not an array of motion vectors.
[06:14:50 CET] <thebombzen> but I'm going to stop asking this question because you're either trolling, stupid, or not a native english speaker.
[06:15:01 CET] <thebombzen> because I really can't phrase it more clearly than that.
[06:18:45 CET] <c3r1c3-Win> Hmm.. I'm gonna flip it around on you.
[06:18:57 CET] <c3r1c3-Win> English is my native tongue.
[06:19:10 CET] <c3r1c3-Win> It's really late where I'm at, hence some of the errors.
[06:19:28 CET] <thebombzen> well if you're not capable of providing a coherent answer, then don't.
[06:19:30 CET] <thebombzen> just don't answer
[06:19:31 CET] <c3r1c3-Win> I find it odd that someone who doesn't even know what a GOP is is so sure that they know what they're talking about.
[06:19:56 CET] <c3r1c3-Win> And can't be bothered to read some C to find the answer to their 'I NEED TEH MATHZ" questions.
[06:19:57 CET] <thebombzen> well I have a really simple question and you have somehow lumbered for 45 minutes without being able to understand it
[06:20:14 CET] <c3r1c3-Win> Sounds like a troll to me.
[06:20:20 CET] <thebombzen> AGAIN if you don't know the answer to a question, then *don't answer it*
[06:20:52 CET] <thebombzen> and I wasn't asking for you to explain to me how the specific fourier analysis works in one codec
[06:21:25 CET] <thebombzen> I was asking why there's no logical inconsistency with having adjacent B-frames, having B-frames reference both adjacent frames, and having B-frames not able to reference adjacent B-frames.
[06:21:46 CET] <thebombzen> that's something you should answer without looking at C source code.
[06:21:58 CET] <thebombzen> unless you don't know, in which case say that.
[06:22:53 CET] <darrel12> I'm sensing a lot of hostility here. How about we all just step outside and take a couple hits on our doobies?
[06:23:13 CET] <thebombzen> because I don't smoke
[06:23:25 CET] <darrel12> then pour a drink
[06:23:30 CET] <thebombzen> Also don't drink
[06:23:49 CET] <darrel12> then maybe that's adding to your tension:P
[06:24:16 CET] <thebombzen> "you don't abuse substances so you must be tense. maybe if you consumed alcohol at midnight then you'd be less tense"
[06:24:25 CET] <thebombzen> that's what you sound like
[06:25:19 CET] <darrel12> I didn't say abuse. and yeah, maybe if you did you would be less tense.
[06:38:40 CET] <thebombzen> not really
[06:38:49 CET] <thebombzen> I don't like the way ethanol tastes
[06:38:57 CET] <thebombzen> so that wouldn't help
[06:40:13 CET] <darrel12> I don't like the way Brussels sprouts taste, but they help.
[06:48:24 CET] <thebombzen> they don't help me detense
[06:49:25 CET] <darrel12> thebombzen: my point was only to get you from arguing more with c3r1c3. I don't know how much you frequent this channel, but being new myself - I don't go around asking for help and giving the helpers shit because they didn't help me the way I wanted to be helped.
[06:56:03 CET] <Venti^> thebombzen: I doubt there is any restriction on the number of consecutive b-frames, even in mpeg2
[06:57:54 CET] <thebombzen> Venti^: how does that work, if B-frames are required to reference their immediate neighbors
[06:58:12 CET] <thebombzen> darrel12: it's more that I asked a question and the person didn't know the answer but pretended they did for 45 minutes
[06:58:15 CET] <thebombzen> that's why I'm a bit annoyed
[06:58:36 CET] <Venti^> that is not quite correct, I guess what you mean by immediate neighbor is, that it can only have 2 references
[06:59:06 CET] <thebombzen> well Wikipedia specifically states: "In older designs such as MPEG-1 and H.262/MPEG-2, each B picture can only reference two pictures, the one which precedes the B picture in display order and the one which follows"
[06:59:34 CET] <darrel12> thebombzen: well in any case, I'm glad your time isn't being wasted anymore
[06:59:44 CET] <Venti^> that means, it could code frame 1 as I-frame, frame 100 as P-frame, and all the frames in between as B-frames... frames 1 and 100 would be "immediate neighbors" for all of those because they are the only reference pictures
[07:00:05 CET] <Venti^> in a situation where you don't use the B-frames as reference, which might have been the case with mpeg2
[07:00:15 CET] <thebombzen> ah, so "immediate neighbor" means "closest P or I frame in either direction"
[07:00:41 CET] <Venti^> I don't know, I have never heard this term before, that is my guess
[07:01:39 CET] <Venti^> in any case, it sounds more like an implementation limitation than a bitstream limitation, but I don't know much about mpeg2
[07:01:55 CET] <thebombzen> [23:51:14] <thebombzen> Sure, but if your answre is "B-frames don't reference the immediately adjacent frames, they reference the closest I/P frames on either side"
[07:01:55 CET] <thebombzen> [23:51:18] <thebombzen> Why didn't you just say that
[07:01:55 CET] <thebombzen> [23:52:22] <c3r1c3-Win> thebombzen: I didn't say that because it's not ture.
[07:03:24 CET] <thebombzen> so either he's lying, doesn't know what he's talking about, or my question still hasn't been answered yet.
[07:04:01 CET] <Venti^> or maybe he's being pedantic because you are kind of rude
[07:05:55 CET] <darrel12> =-O
[07:06:03 CET] <darrel12> I thought I had that vibe too
[07:09:12 CET] <thebombzen> that's not being pedantic
[07:09:21 CET] <thebombzen> I happen to know what being pedantic is, and it's not that
[07:10:06 CET] <thebombzen> also it's possible that I was "being kind of rude" because I was being strung along by someone who didn't know the answer, but acted like they did?
[07:10:42 CET] <Venti^> while, from looking at the spec of mpeg2, it looks like your statement about B-frames not referencing immediately adjacent frames is true in general, that doesn't mean that there can't be some cases where it isn't true
[07:11:03 CET] <darrel12> Maybe he thought he truly did know the answer to your dilemma, or had partial understanding, so he offered what he could?
[07:12:11 CET] <thebombzen> and the twelve times I said "that's not my question" maybe they could have realized they didn't know the answer?
[07:12:54 CET] <thebombzen> or maybe I was just sort of irritated from the outset by the lmgtfy reference that didn't actually help answer the question
[07:13:07 CET] <thebombzen> it was just obnoxious
[07:14:09 CET] <darrel12> yeah it was, but I love lmgtfy responses regardless of how offending they are
[07:14:47 CET] <thebombzen> also when the answer is "you do cool math" that's really condescending
[07:14:59 CET] <thebombzen> because it doesn't answer the question, it just supposes that I'm not able to know what "cool math" is
[07:15:54 CET] <thebombzen> and even if YOU love lmgtfy responses, I didn't, because I was curious about something and you weren't. so maybe that's why I was a little snappy?
[07:16:19 CET] <thebombzen> even if motion prediction is what I was asking about, "you do cool math" isn't a helpful answer
[07:19:04 CET] <darrel12> Maybe you've just been working on your problem for a long enough time (like I have) that any response lacking actual help is irritating. Because I know I was annoyed by the cryptic answers I got earlier. Hell, maybe he's just as much of an asshole as the rest of computer science people usually are. I was curious about what you were asking, I just don't understand what you're doing to help or even learn from your guys' exchange. And I 
[07:24:37 CET] <thebombzen> what am I doing to learn from it? well nothing given that it wasn't helpful
[07:28:19 CET] <kepstin> referencing two "immediately adjacent frames" doesn't even make sense, since e.g. if you have a B frame referencing two P frames, the P frames have to be decoded first, so it would actually be stored as PPB in dts order, wouldn't it?
[07:34:38 CET] <kepstin> so with your IPBBPBBPBB example, it should be as simple as the decoder decodes and stores the first I and P frame, then the two B frames following both use the stored I and P frames as the references
[07:35:14 CET] <kepstin> then after the next P frame, it drops the stored I frame and has the two most recent P frames stored, which are used as references for the next B frames, and so on
[07:35:49 CET] <kepstin> and obviously the frames get re-ordered from dts into pts order after decoding
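kepstin's decode loop can be sketched as follows, assuming an MPEG-2-style stream where only I/P frames serve as references and the decoder holds the two most recent ones:

```python
# Walk a GOP in decode (DTS) order and record which stored references each
# B-frame would use: the two most recently decoded I/P frames.
def reference_pairs(dts_order):
    refs, out = [], []
    for ftype in dts_order:
        if ftype in "IP":
            refs = (refs + [ftype])[-2:]  # a new I/P evicts the oldest reference
        else:
            out.append(tuple(refs))       # B-frame: uses both stored references
    return out

print(reference_pairs("IPBBPBBPBB"))
# the first pair of B-frames uses (I, P); later pairs use (P, P)
```

This is why no B-frame ever has to reference another B-frame: by the time a B arrives (in decode order), both of its I/P references are already in the buffer.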
[07:38:03 CET] <thebombzen> kepstin: well that's what I was asking
[07:38:06 CET] <thebombzen> that answers my question
[07:38:40 CET] <thebombzen> in terms of "obviously" well I knew the way I was thinking was mathematically impossible, so the question is "what's wrong with the setup"
[07:38:49 CET] <thebombzen> and you answered my question
[07:39:18 CET] <thebombzen> so thank you
[07:39:31 CET] <thebombzen> that was a really simple answer which I was looking for this whole time
[07:43:53 CET] <kepstin> the IPBBPBBPBB in DTS order would be re-ordered into IBBPBBPBBP in PTS (playback) order after decoding, with the B frames "between" the I/P frames they reference.
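The reordering described above can be sketched by sorting on presentation timestamps (the PTS values below are hypothetical, chosen to reproduce the IPBBPBBPBB example):

```python
# Frames arrive in decode (DTS) order; sorting by PTS recovers playback order.
def to_pts_order(frames):
    return "".join(ftype for _, ftype in sorted(frames))

# (pts, type) pairs in DTS order: IPBBPBBPBB, with each pair of B-frames
# carrying PTS values that place them between their two references.
dts = [(0, "I"), (3, "P"), (1, "B"), (2, "B"),
       (6, "P"), (4, "B"), (5, "B"), (9, "P"), (7, "B"), (8, "B")]
print(to_pts_order(dts))  # IBBPBBPBBP
```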
[07:50:43 CET] <xtina> hi. i have an ffmpeg command for livestreaming. i find that when i start the stream, the FPS shoots up to 25FPS and bitrate is at 1000 kbps, even though I specified 10FPS and 500 kbps.
[07:50:49 CET] <xtina> then it quickly drops back down to my specified targets
[07:50:57 CET] <xtina> why is the initial FPS so much higher than what I specified?
[07:51:42 CET] <thebombzen> most likely because there's a bit of input delay and it's trying to compensate
[07:51:46 CET] <xtina> here's my command: http://pastebin.com/tVKFR2xX
[07:52:08 CET] <xtina> if there's input delay, shouldn't it be going slower than i specified, not faster?
[07:52:20 CET] <thebombzen> so like if you're grabbing from a device that outputs 30 fps, and it has a bit of startup delay, then its buffer will start filling up
[07:52:41 CET] <thebombzen> and it'll start encoding it faster in order to empty the buffer
[07:52:54 CET] <xtina> is there any way to prevent this?
[07:53:03 CET] <thebombzen> probably, but I don't know what it is
[07:53:05 CET] <thebombzen> it's also not a problem
[07:53:13 CET] <thebombzen> because if it didn't do that, it would add permanent latency.
[07:53:23 CET] <xtina> it's causing a problem for me where my audio and video get desynced immediately once the stream starts
[07:53:34 CET] <xtina> after the FPS and bitrate settle down, there's no additional desyncing
[07:53:47 CET] <thebombzen> ah. that's probably not caused by the fps issue
[07:53:53 CET] <thebombzen> but rather they're both caused by the same thing
[07:54:00 CET] <thebombzen> which is that there's a bit of startup lag on the video encoder
[07:54:02 CET] <xtina> there's just one ALSA buffer overrun right at the very start, 25s in
[07:54:13 CET] <xtina> hmm i see
[07:54:30 CET] <xtina> so what if i start the video command, then sleep 3s before starting the audio and streaming commands?
[07:54:36 CET] <xtina> (vid, audio, and streaming are 3 separate commands)
[07:54:44 CET] <thebombzen> perhaps, but be careful about buffers
[07:54:56 CET] <xtina> or is there a more graceful way to do it?
[07:55:01 CET] <thebombzen> because if you start the video command and it writes to a blocking buffer, it'll... block
[07:55:13 CET] <thebombzen> my guess is the best way to do it is to somehow use wallclock time
[07:55:16 CET] <thebombzen> but I don't know how to do that
[07:55:56 CET] <thebombzen> i.e. have the muxer/streamer align it with PTS
[07:56:00 CET] <thebombzen> using wallclock time
[07:56:00 CET] <xtina> so let me make sure i understand. there's some delay in starting the video encoder so the buffer fills up, then ffmpeg tries to eat the video very fast to catch up
[07:56:04 CET] <thebombzen> I don't know how to do that though
[07:56:10 CET] <thebombzen> yes
[07:56:12 CET] <thebombzen> that's my guess
[07:56:17 CET] <xtina> oh, I don't think that's possible for me, my audio/video are raw with no TS
[07:56:19 CET] <thebombzen> I don't exactly know how RPI's video grabbing works
[07:56:41 CET] <xtina> how does eating the video very fast cause the audio buffer to overrun?
[07:56:57 CET] <thebombzen> it's not that eating the video very fast causes the desync
[07:57:16 CET] <thebombzen> but rather I think that it's more likely that the desync is caused by the same delay causing the buffer to fill up
[07:57:45 CET] <xtina> but shouldn't the video be 'catching up'?
[07:57:52 CET] <xtina> isn't that why it's going at an accelerated FPS
[07:57:54 CET] <xtina> to catch up to the audio?
[07:58:11 CET] <thebombzen> in theory. it's generally a very difficult problem to record video and audio with two separate CLI instances and sync them properly
[07:58:23 CET] <thebombzen> especially with ffmpeg.c which is poorly suited for livestreaming
[07:58:30 CET] <xtina> oh dear
[07:59:28 CET] <xtina> hmm, i guess it didn't compensate enough
[07:59:36 CET] <xtina> even though it tried
[07:59:41 CET] <xtina> hence the audio was still ahead, and overran
[08:00:03 CET] <xtina> i've never seen a bitrate or FPS so high, so i guess i've never seen the video encoder so delayed in starting up
[08:09:23 CET] <xtina> is there any way to make the video encoder startup more quickly?
[08:13:19 CET] <kepstin> Dunno. If the muxing ffmpeg is the problem, well, ffmpeg isn't really designed for this, so it doesn't initialize the outputs until it reads some input :/ no real way to change that.
[08:14:07 CET] <kepstin> like, it'll read a frame of input, then hang while connecting to the rtmp server, then go as fast as it can.
[08:15:07 CET] <xtina> so my bottleneck is my wifi upload speed
[08:18:08 CET] <xtina> so i have 3 cmds, audio rec+encoder, video rec (hardware encoded), and ffmpeg for muxing/streaming
[08:18:22 CET] <xtina> optimally, what order should i use the commands in?
[08:19:07 CET] <xtina> does that matter? right now im doing audio (which starts immediately) then video (which takes like 2s to start capturing) then ffmpeg
[08:29:01 CET] <thebombzen> I think you probably shouldn't use ffmpeg.c here
[08:29:09 CET] <thebombzen> and just write your own frontend to avformat/avcodec
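The simpler stopgap discussed earlier, starting the three pipeline stages with a delay between them, can be sketched in Python. The no-op commands below are placeholders (an assumption for illustration), standing in for the actual audio-capture, video-capture, and muxing/streaming invocations, which are not shown in full here.

```python
import subprocess
import sys
import time

def start_staggered(commands, delay=0.0):
    """Launch each pipeline stage in order, sleeping between starts so that
    slow-to-initialize stages (e.g. the video capture) get a head start."""
    procs = []
    for cmd in commands:
        procs.append(subprocess.Popen(cmd))
        time.sleep(delay)
    return procs

# Placeholder no-op stages; substitute the real capture/mux command lines.
noop = [sys.executable, "-c", "pass"]
procs = start_staggered([noop, noop, noop], delay=0.1)
assert all(p.wait() == 0 for p in procs)
```

Note this only staggers process startup; it does not by itself solve A/V sync, since the raw streams carry no timestamps to align on.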
[12:06:00 CET] <fulcan> I need to send an end user's webcam to a webpage for point to multipoint broadcast. any pointers or classes I should be aware of before beginning?  I found ffserver which appears good to go, but basically the monkey with the lightbulb.  webcam -> ffmpeg (local) -> ffserver -> mycam.php  ??
[12:06:49 CET] <fulcan> I'm not sure I even need the server.
[13:22:08 CET] <conseal> Good day all.
[13:23:01 CET] <conseal> I'm trying to fetch h264 video from my Logitech G920 webcam and push it to Youtube via rtmp://. However, Youtube is complaining about keyframes being sent too slow.
[13:23:38 CET] <conseal> I'm wondering whether "-codec:v copy" is overriding "-g" or "-force_key_frames" ?
[13:25:07 CET] <conseal> no matter what I try to do with framerate/gop or "-force_key_frames" Youtube still complains about a 10 second gap between keyframes.
[13:26:19 CET] <conseal> Any idea what I could be doing wrong?
[13:27:49 CET] <conseal> this is what I am running
[13:28:21 CET] <conseal> ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f v4l2 -input_format h264 -s 1920x1080 -framerate 30 -i /dev/video0 -codec:v copy -acodec aac -ab 128k -g 60 -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXXXXX
[13:57:04 CET] <conseal> anyone able to help?
[13:57:27 CET] <c_14> copying the video codec will overwrite all those options, yes
[13:57:36 CET] <c_14> s/overwrite/ignore/
[14:01:55 CET] <conseal> c_14: I'm running this on a Raspberry Pi 3B, and doing "-codec:v libx264" doesn't seem to work very well. It's stuttering and is not even able to stream at 1x speed.
[14:02:15 CET] <JEEB> yes, it's ARM
[14:02:24 CET] <JEEB> you will have to be very light on the settings
[14:02:42 CET] <JEEB> and/or utilize the HW encoder on board, depending on which gives you results you require
[14:03:05 CET] <BtbN> imo the RPi is not suited for that task. I tried using one for almost the exact same purpose.
[14:03:12 CET] <BtbN> The h264 the Logitech webcams produce is useless
[14:03:24 CET] <furq> the onboard h264 encoder can't be any worse than h264 spat out of a webcam
[14:03:29 CET] <BtbN> and transcoding their mjpeg to h264 is not possible on an RPi
[14:03:38 CET] <BtbN> Not at realtime speed or useful quality that is
[14:03:40 CET] <conseal> BtbN: so I'm better off receiving rawvideo and encoding it on better hardware?
[14:03:40 CET] <furq> conseal: is youtube actually rejecting the stream or just giving you a warning
[14:03:54 CET] <BtbN> youtube cuts 4 second HLS/DASH segments
[14:03:58 CET] <BtbN> it straight up breaks with a longer gop
[14:04:01 CET] <furq> oh
[14:04:03 CET] <furq> that's dumb
[14:04:06 CET] <conseal> furq: it is giving me a warning, and I sometimes get buffering
[14:04:31 CET] <furq> well yeah if you want to do h264 encoding on the rpi you'll almost certainly need to use the onboard encoder
[14:04:39 CET] <conseal> I tried reducing the resolution + framerate, but its giving me the same error.
[14:04:41 CET] <BtbN> ffmpeg recently gained support for it
[14:04:47 CET] <furq> which either means using an external tool or rebuilding ffmpeg with --enable-omx-rpi
[14:04:53 CET] <BtbN> But imo the quality of it is the worst i have seen so far. memories of mpeg1
[14:05:08 CET] <conseal> BtbN: heh :p
[14:05:15 CET] <furq> it's better than nothing
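One way the rebuild furq mentions could look on the Pi itself (the clone URL and flag set are illustrative; check `./configure --help` on your tree):

```shell
# Fetch sources and enable the Pi's OpenMAX (omx) h264 encoder.
git clone https://git.ffmpeg.org/ffmpeg.git && cd ffmpeg
./configure --enable-gpl --enable-omx --enable-omx-rpi --enable-mmal
# The Pi 3 has four cores, hence -j4 (as furq also notes later in the log).
make -j4
sudo make install
```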
[14:05:34 CET] <BtbN> Also, you can't get rawvideo from those cameras at good resolutions
[14:05:37 CET] <conseal> BtbN: I'm recording a bird house, I don't need superb quality :)
[14:05:38 CET] <BtbN> usb2 is too slow for that
[14:05:41 CET] <BtbN> it's always at least mjpeg
[14:05:48 CET] <furq> if you have plenty of upload speed you'll be able to get passable quality out of it
[14:05:49 CET] <BtbN> which puts also quite some load on the CPU just for decoding it
[14:06:03 CET] <BtbN> furq, YouTube live has a bandwidth limit
[14:06:17 CET] <furq> it shouldn't be that bad
[14:06:22 CET] <BtbN> 3.5MBit/s
[14:06:24 CET] <furq> oh
[14:06:26 CET] <furq> christ really
[14:06:32 CET] <BtbN> Same limit as Twitch
[14:06:47 CET] <BtbN> And you usually want to go for even less, as a lot of people can't even handle a 3 Mbit stream
[14:06:54 CET] <furq> don't they transcode it on the fly
[14:06:58 CET] <BtbN> Twitch recommends 2Mbit/s for non-partnered streamers which get transcodes.
[14:07:05 CET] <BtbN> +don't
[14:07:06 CET] <conseal> furq: I can recompile ffmpeg and attempt to do it on the rpi
[14:07:24 CET] <conseal> I've never touched ffmpeg before 2 days ago
[14:07:25 CET] <furq> i bet 2mbit video game footage out of nvenc looks just freat
[14:07:26 CET] <furq> great
[14:07:31 CET] <conseal> so the learning curve is steep
[14:07:33 CET] <BtbN> furq, not on the fly, they only do that for "important" people
[14:07:33 CET] <furq> 1080p video game footage, at that
[14:07:56 CET] <BtbN> furq, you are forced to drop to 480p30 if you want to use nvenc for that
[14:08:03 CET] <furq> nice
[14:08:18 CET] <BtbN> The Official nvidia streaming app uses 480p30 for 2Mbit, and for 720p30 it uses the full 3.5Mbit/s
[14:08:31 CET] <BtbN> More than that isn't possible
[14:08:34 CET] <furq> yeah
[14:08:45 CET] <conseal> Is there a way to see how often my Logitech C920 is inserting keyframes?
[14:08:46 CET] <BtbN> libx264 easily gives you more than decent quality at 2Mbit/s on veryfast
[14:08:50 CET] <furq> i can't imagine what a 1080p60 nvenc encode of an fps would look like at 3.5mbit
[14:08:55 CET] <furq> i expect it looks more like tetris
[14:09:01 CET] <BtbN> A lot of people do that, because they are clueless.
[14:09:27 CET] <BtbN> Been in a lot of streams and tried to explain to them that their stream looks like shit, as nice as I could.
[14:09:31 CET] <BtbN> Most take it as an insult
[14:09:43 CET] <furq> i've never tried to record any fps newer than quakeworld
[14:09:54 CET] <furq> quakeworld at 720p60 would spike up past 10mbit with x264 veryslow
[14:10:22 CET] <BtbN> Funny enough, the thing that kills encoders the most is the classic Pokemon games on Gameboy
[14:10:25 CET] <furq> fast first-person games are some kind of pathological case for modern video encoders
[14:10:33 CET] <conseal> How should I proceed if I want to confirm that my Logitech C920 is not passing keyframes fast enough?
[14:10:34 CET] <BtbN> The battle-intro screen flashes just pixelate the entire screen
[14:10:49 CET] <conseal> Except reading the error messages from Youtube
[14:11:00 CET] <furq> conseal: rebuild ffmpeg with omx, take mjpeg off the camera and encode that with -g 100
[14:11:14 CET] <furq> and pray that god has mercy on your soul
[14:11:18 CET] <BtbN> conseal, ffprobe -show_frames
[14:11:24 CET] <conseal> furq: I'll try that
[14:11:27 CET] <furq> oh right
[14:11:32 CET] <furq> well yeah do what BtbN said first
[14:11:32 CET] <BtbN> grep key_frame
[14:11:35 CET] <furq> but if it is too fast then do what i said
[14:11:41 CET] <furq> s/fast/slow/
[14:12:13 CET] <conseal> I'll give that a shot
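BtbN's ffprobe suggestion spelled out as a runnable pair of commands (device path and capture length are placeholders): capture a short clip of the camera's native h264, then count keyframes against total frames.

```shell
# Grab ~20 seconds of the camera's own h264 without re-encoding.
ffmpeg -f v4l2 -input_format h264 -t 20 -i /dev/video0 -codec:v copy probe.mkv

# Tally key_frame=0 vs key_frame=1 lines; e.g. one key_frame=1 per
# 300 frames at 30 fps would mean a 10-second GOP, matching the warning.
ffprobe -show_frames -select_streams v probe.mkv 2>/dev/null \
  | grep key_frame | sort | uniq -c
```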
[14:14:09 CET] <conseal> Thanks a lot for your help
[14:15:14 CET] <furq> you probably also want to cross-compile ffmpeg
[14:15:32 CET] <furq> although it shouldn't be too bad on a pi 3 if you run it on a tmpfs or something
[14:16:01 CET] <conseal> aye
[14:16:20 CET] <conseal> leaves plenty of time for coffee, its np
[14:16:42 CET] <furq> make sure you run make -j4 then
[14:19:19 CET] <conseal> Yup, did that.
[14:20:38 CET] <conseal> so I use "-vcodec h264_omx" still for omx?
[14:20:47 CET] <furq> right
[14:21:19 CET] <conseal> roger
[14:21:48 CET] <conseal> would probably be better off with an Intel NUC for this project.
[14:22:29 CET] <conseal> I considered using a network camera, but only a few of them lets you natively stream rtmp
[14:22:47 CET] <conseal> So I would be stuck with hardware either way
[14:23:09 CET] <furq> well the pi is like 10% of the price, so it's probably worth a try
[14:23:15 CET] <furq> the onboard encoder isn't that bad for low-motion stuff
[14:23:16 CET] <conseal> true that
[14:23:56 CET] <conseal> I dont know what framerate I would get either way on usb2 >720p
[14:24:59 CET] <furq> that's about 330mbit at 30fps for rawvideo, so theoretically usb2 could handle it, but i wouldn't want to try
[14:25:08 CET] <furq> but if the camera puts out mjpeg you should be fine with 720p30
[14:26:04 CET] <BtbN> no way usb2 can handle that
[14:26:11 CET] <BtbN> the 400Mbit are more of a theoretic nature
[14:26:17 CET] <furq> yeah
[14:26:38 CET] <furq> it's supposedly 480 but you won't ever get that kind of sustained throughput
[14:26:43 CET] <furq> especially not on an rpi
[14:26:52 CET] <conseal> I need to catch a bus, but I'll give it a go once ffmpeg is done compiling when I get home.
[14:27:01 CET] <conseal> I'll let you know how it went :)
[14:27:21 CET] <BtbN> you won't get it on a Pi simply because of the NIC being connected to the same USB bus
[14:27:49 CET] <BtbN> And even with just a single device, the USB protocol overhead brings that down to something like 200-250Mbits at best
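The arithmetic behind furq's "about 330mbit" figure, as a quick sanity check (assuming yuv420p, i.e. 1.5 bytes per pixel):

```python
def rawvideo_mbit(width, height, fps, bytes_per_pixel=1.5):
    """Raw video bitrate in Mbit/s; yuv420p carries 1.5 bytes per pixel."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# 720p30 raw video needs ~332 Mbit/s -- over the 200-250 Mbit/s that
# USB2 realistically sustains, per BtbN's estimate above.
print(round(rawvideo_mbit(1280, 720, 30)))   # -> 332
# 1080p30 would need ~746 Mbit/s, beyond even USB2's nominal 480.
print(round(rawvideo_mbit(1920, 1080, 30)))  # -> 746
```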
[14:28:44 CET] <conseal> Whoever had experience with the Logitech C920, is there any way to manipulate the keyframes the camera is natively sending?
[14:29:36 CET] <BtbN> no
[14:29:51 CET] <BtbN> It's also sending very bad and broken h264, you don't want it
[15:47:32 CET] <sg90> Hi, I'm trying to output 1 frame of black video with a silent stereo audio track. Whenever I pass "-vframes 2" or less, I lose my audio tracks from the output file. Any value of -vframes > 2 preserves the audio in the output file. I am encoding to ProRes 422 mov using prores_ks. http://pastebin.com/ThJsGR7d
[16:44:29 CET] <conseal> So after recompiling with omx rpi support, I'm trying this:
[16:44:33 CET] <conseal> ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f v4l2 -input_format mjpeg -s 1280x720 -thread_queue_size 1024 -i /dev/video0 -codec:v h264_omx -acodec aac -ab 128k -g 100 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX
[16:44:57 CET] <conseal> but I'm getting this spam
[16:45:01 CET] <conseal> DTS -111707752554, next:10897410 st:0 invalid dropping
[16:45:01 CET] <conseal> PTS -111707752554, next:10897410 invalid dropping st:0
[16:45:40 CET] <conseal> and getting horrid bitrates
[16:45:40 CET] <conseal> frame=  331 fps= 10 q=-0.0 Lsize=     292kB time=00:00:11.00 bitrate= 217.3kbits/s speed=0.338x
[16:52:51 CET] <conseal> ah, the hint is actually a few lines up
[16:53:05 CET] <conseal> [swscaler @ 0x1cfd9c0] deprecated pixel format used, make sure you did set range correctly
[16:58:51 CET] <furq> conseal: you can drop -strict experimental
[16:59:17 CET] <furq> you'll also want to set -b:v to something appropriate
[16:59:41 CET] <furq> and also get rid of -re, it's useless (and possibly harmful) with live sources
[16:59:48 CET] <furq> it's meant for simulating a live source
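conseal's command with furq's three fixes above folded in: `-re` and `-strict experimental` dropped, `-b:v` set explicitly (the 2500k value and the stream key are illustrative, not from the log):

```shell
ffmpeg -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -i /dev/zero \
       -f v4l2 -input_format mjpeg -s 1280x720 -thread_queue_size 1024 \
       -i /dev/video0 \
       -codec:v h264_omx -b:v 2500k -g 100 \
       -acodec aac -ab 128k \
       -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX
```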
[17:02:28 CET] <conseal> furq: thanks for the info, I'm <novice when it comes to this stuff.
[17:02:47 CET] <furq> i've seen much worse command lines ending in an rtmp url
[17:02:50 CET] <conseal> furq: and what you're seeing is a combination of different posts I found on various blogs
[17:05:13 CET] <conseal> btw
[17:05:14 CET] <conseal> [video4linux2,v4l2 @ 0x24a72b0] The driver changed the time per frame from 1/60 to 1/10
[17:05:20 CET] <conseal> is this a driver or hardware limitation?
[17:06:35 CET] <conseal> Right now the stream is only sitting at about 200kbit/s
[17:06:45 CET] <furq> that's the default bitrate
[17:06:53 CET] <conseal> ah
[17:07:09 CET] <furq> i've never really touched v4l2 but i'm guessing that's what your camera supports
[17:08:04 CET] <conseal> dunno how to interpret this, but "v4l2-ctl --list-formats-ext" says the following
[17:08:18 CET] <conseal>                 Size: Discrete 1280x720
[17:08:18 CET] <conseal>                         Interval: Discrete 0.033s (30.000 fps)
[17:08:18 CET] <conseal>                         Interval: Discrete 0.042s (24.000 fps)
[17:08:22 CET] <conseal> all the way to 5 fps
[17:08:38 CET] <furq> try -framerate 30 before -i
[17:09:14 CET] <furq> also you can replace everything before /dev/zero with -f lavfi -i anullsrc
[17:09:29 CET] <furq> before and including /dev/zero
[17:10:09 CET] <conseal> I found that in a blog about ffmpeg with Windows
[17:10:15 CET] <conseal> Thanks for the suggestion :-)
[17:10:39 CET] <furq>  /dev/zero on windows eh
[17:11:42 CET] <conseal> thats why I thought "anullsrc" was for Win :p
[17:11:58 CET] <furq> oh right
[17:12:05 CET] <furq> well yeah that's internal and also simpler
[17:14:23 CET] <conseal> btw, whats an appropriate pixel format for mjpeg?
[17:14:52 CET] <furq> jpeg always uses full-range yuv so you'll need to convert it anyway
[17:16:03 CET] <conseal> Without "-pix_fmt" I got the "deprecated pixel format" error, so just wondering what to put there
[17:16:20 CET] <conseal> the default value seems to be something it didnt like
[17:16:23 CET] <furq> yuv420p
[17:16:29 CET] <furq> it should be converting to that anyway
[17:17:07 CET] <conseal> thats what I'm using atm
[17:17:21 CET] <conseal> still getting nerfed to 10fps
[17:17:46 CET] <furq> what's your cpu usage like
[17:19:04 CET] <conseal> 34%
[17:19:22 CET] <furq> is that one core or all of them
[17:20:53 CET] <conseal> one
[17:21:00 CET] <furq> that should be ok then
[17:21:03 CET] <conseal> its mostly using one
[17:21:21 CET] <furq> i mean is that 34% of one core
[17:21:38 CET] <furq> if it's maxing out one core then maybe the decoding or format conversion can't keep up
[17:21:57 CET] <furq> or aac encoding
[17:22:02 CET] <conseal> changing to 800x600 gives the following output
[17:22:03 CET] <conseal> [video4linux2,v4l2 @ 0x31f70f0] The driver changed the time per frame from 1/30 to 1/24
[17:22:08 CET] <conseal> so it sounds driver related
[17:22:08 CET] <furq> weird
[17:22:21 CET] <conseal> but its an odd framerate
[17:35:40 CET] <conseal> this gets more and more odd
[17:35:50 CET] <conseal> I just swapped to a Tandberg/Cisco webcam
[17:35:53 CET] <conseal> [video4linux2,v4l2 @ 0x1cb90f0] The V4L2 driver changed the video from 1280x720 to 640x480
[17:35:56 CET] <conseal> [video4linux2,v4l2 @ 0x1cb90f0] The driver changed the time per frame from 1/30 to 1/15
[17:36:17 CET] <conseal> v4l2 seems borked for my use
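For the record, conseal's command with all of furq's later suggestions applied (anullsrc instead of the /dev/zero input, `-framerate 30` before the v4l2 input, and an explicit yuv420p conversion); bitrate and stream key remain illustrative:

```shell
ffmpeg -f lavfi -i anullsrc=r=44100:cl=stereo \
       -f v4l2 -input_format mjpeg -framerate 30 -s 1280x720 \
       -thread_queue_size 1024 -i /dev/video0 \
       -pix_fmt yuv420p -codec:v h264_omx -b:v 2500k -g 100 \
       -acodec aac -ab 128k -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX
```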
[20:52:53 CET] <slalom_> Does anybody know how to change the initial start offset time with the HLS muxer?   I am trying to replicate adobe media encoder's HLS output with ffmpeg, but I can't get the playlist to have a different start time.   i think with the normal muxer, -initial_offset is the value but it does nothing for hls muxing.. for some reason it's 1.445400 in my output by default.. i want it to be 10 sec
[22:13:21 CET] <rebel2234> so say i have an input file of 1080p at 60fps at a given bitrate x would I end up with better quality if I re-encode it to 30fps as opposed to 60fps at that bitrate?
[22:14:36 CET] <furq> not really
[22:15:27 CET] <furq> if you're using halfdecent settings then all the frames you drop will have been tiny anyway
[22:15:54 CET] <furq> and if you're reencoding from the encoded file then you're automatically losing quality
[22:16:53 CET] <DHE> you can't improve a video by re-encoding it, short of some really good image filters
[22:17:07 CET] <DHE> first rule of transcoding
[22:17:14 CET] <rebel2234> actually encoding from mpeg2 off cable tuner comes in at 1080 at 60fps and I re-encode to 720 at 60 h264.
[22:18:00 CET] <rebel2234> ok i guess re-encoding was wrong term to use,  transcoding is probably more accurate
[22:18:11 CET] <furq> those are the same term and it's right in either case
[22:18:23 CET] <furq> but yeah i'd leave it at 60
[22:18:36 CET] <furq> you generally shouldn't use abr mode anyway
[22:34:35 CET] <rebel2234> Yeah, just did a side by side comparison and I couldn't tell the difference
[22:35:45 CET] <rebel2234> I just got a workstation box off ebay with 2x e5-2660's in it and 16gb ram and this thing is an h264 encoding beast!
[22:36:54 CET] <rebel2234> 40% CPU with 9x transcoding operations using settings like --->  -vcodec libx264 -preset superfast -crf 28 -maxrate 2600k -bufsize 3500k -vf yadif=0,scale=-2:720 -acodec aac -b:a 128K -f mpegts
[22:39:30 CET] <furq> if you were asking whether you should use yadif 0 or 1 then definitely 1
[22:39:41 CET] <furq> although it makes less difference with clean 1080i sources
[22:41:08 CET] <furq> also if you have that much cpu to spare then you should use a better preset
[22:57:40 CET] <rebel2234> plan on adding possibly 9 other tuners
[22:58:34 CET] <JEEB> welcome to the hell with broadcast sources 8)
[22:58:45 CET] <JEEB> (aka "it works often but then you get a PID switch")
[22:59:11 CET] <JEEB> or dynamic tracks
[22:59:47 CET] <JEEB> also some web streaming solutions really support that badly :<
[23:00:36 CET] <JEEB> as in, even if you get the ingest and tracks through it goes and bites with something not-so-nice
[23:01:01 CET] <JEEB> but yeah, sometimes it just works and you're happy
[23:33:24 CET] <Madd_the_Sane> Is there any way to get anim data from an IFF file with both anim and audio?
[23:34:14 CET] <Madd_the_Sane> It seems it only recognizes the audio data.
[00:00:00 CET] --- Wed Feb 22 2017