[Ffmpeg-devel-irc] ffmpeg.log.20171128

burek burek021 at gmail.com
Wed Nov 29 03:05:01 EET 2017


[00:56:04 CET] <CCFL_Man> this ts has a video bitrate of 3mbit. it looks good on my tivo. it has to be mpeg4 at that bitrate, right?
[00:56:18 CET] <CCFL_Man> it's an HD feed.
[01:00:23 CET] <Ouchsplat> I am having an issue with ffmpeg: I am using a private rtmp server that is receiving 4 separate streams, then using ffmpeg to merge the streams into a tiled video. The problem is that if a streamer logs off, ffmpeg stops with an error. I would like it to replace the feed that disconnected with a static image in the output
[01:00:53 CET] <Ouchsplat> any ideas? I have only been using ffmpeg for 2 days
[01:01:45 CET] <Ouchsplat> this is the command I am using   ffmpeg -i rtmp://192.168.0.4/live/ouchsplat -i rtmp://192.168.0.4/live/biidieyz -i rtmp://192.168.0.4/live/inputend -i rtmp://192.168.0.4/live/sirm -filter_complex "nullsrc=size=1920x1080 [base]; [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright]; [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft]; [3:v]
[01:01:45 CET] <Ouchsplat> setpts=PTS-STARTPTS, scale=960x540 [lowerright]; [base][upperleft] overlay=shortest=1 [tmp1]; [tmp1][upperright] overlay=shortest=1:x=960 [tmp2]; [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3]; [tmp3][lowerright] overlay=shortest=1:x=960:y=540" -c:v libx264 output.mkv
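A partial workaround to sketch here (hedged: it freezes the last received frame of a tile rather than swapping in a static image, and it only helps if the dropped input ends cleanly instead of killing ffmpeg with a read error) is to drop shortest=1 and let each overlay repeat its last frame via eof_action; IN1..IN4 stand in for the rtmp URLs above:

    ffmpeg -i IN1 -i IN2 -i IN3 -i IN4 -filter_complex \
      "nullsrc=size=1920x1080 [base]; \
       [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft]; \
       [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright]; \
       [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft]; \
       [3:v] setpts=PTS-STARTPTS, scale=960x540 [lowerright]; \
       [base][upperleft] overlay=eof_action=repeat [tmp1]; \
       [tmp1][upperright] overlay=eof_action=repeat:x=960 [tmp2]; \
       [tmp2][lowerleft] overlay=eof_action=repeat:y=540 [tmp3]; \
       [tmp3][lowerright] overlay=eof_action=repeat:x=960:y=540" \
      -c:v libx264 output.mkv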
[03:34:52 CET] <kinkinkijkin> so I try to vspipe a video from 75fps to 144, and on running the command with ffmpeg I get "pipe:: invalid data found when processing input"
[03:35:10 CET] <kinkinkijkin> source is x265, mkv, if it matters
[03:37:09 CET] <kinkinkijkin> oh wait
[03:37:12 CET] <kinkinkijkin> I'm an idiot
[03:42:57 CET] <kinkinkijkin> yeah, I forgot a dash in the command
[03:43:00 CET] <kinkinkijkin> derp
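For reference, a typical VapourSynth-into-ffmpeg pipe looks roughly like this (script name is a placeholder); the lone '-' for vspipe's output and the '-i -' for ffmpeg's input are the dashes that are easy to drop:

    vspipe --y4m script.vpy - | ffmpeg -i - -c:v libx264 out.mkv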
[09:19:05 CET] <melkor> I am trying to make a stop motion from a video. Right now there are too many frames, so I was trying to sample it. If I use `-vf fps=1` the output video is the same duration, but each frame is shown for a long time. How do I get it to have the proper time too?
[09:19:30 CET] <melkor> e.g. a 2 minute movie should become a 30 second movie.
[09:56:42 CET] <melkor> I used the setpts video filter, which worked. The movie went from 2 min to 28 sec, but the file got bigger!?
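A sketch of the two steps combined (input name and rates are placeholder assumptions): fps samples one frame per second, setpts compresses the timestamps so the duration shrinks by 4x, and the output frame rate keeps the muxer from duplicating frames back in; audio is dropped since it would no longer line up:

    ffmpeg -i in.mp4 -vf "fps=1,setpts=PTS/4" -r 4 -an out.mp4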
[10:09:57 CET] <pl0t> came to tell that the issue is resolved, nothing wrong with ffmpeg, recompiling the system helped, also, debianuser: thank you for your troubleshooting, it was great!
[12:51:05 CET] <termos> I see "adaptation_sets" option has been added to FFmpeg 3.4, what would be the correct string to supply to get the old behaviour?
[12:54:29 CET] <tyng> what's the difference between theses filters? lowpass highpass treble bass bandpass bandreject allpass equalizer
[12:59:56 CET] <durandal_1707> tyng: you need to do some serious reading
[13:03:51 CET] <termos> seems the adaptation_sets option to keep the old behaviour is "id=0,streams=v id=1,streams=a", I wish this was the default as it's the old one people are used to
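Spelled out as a command (filenames are placeholders), the string goes to the dash muxer like this:

    ffmpeg -i input.mp4 -c copy -f dash -adaptation_sets "id=0,streams=v id=1,streams=a" out.mpd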
[13:07:26 CET] <tyng> ffmpeg documentation isn't very specific
[13:07:47 CET] <tyng> where can i read about two-pole Butterworth band-rejection?
[15:02:40 CET] <durandal_1707> tyng: google for it
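As a concrete starting point, each of those filters takes a centre/cutoff frequency and a width; e.g. a notch around 1 kHz with the band-reject filter (the frequency and width values here are arbitrary examples):

    ffmpeg -i in.wav -af "bandreject=f=1000:width_type=h:w=200" out.wav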
[16:01:41 CET] <godofgrunts> After upgrading to 3.4, using -c copy gives me an error https://pastebin.com/uqEx1Y67
[16:03:08 CET] <godofgrunts> same if I use -c:a copy -c:v copy
[16:08:37 CET] <godofgrunts> using ffmpeg -i vid.flv vid.mp4 works as expected so something changed with how ffmpeg is handling copy I guess
[16:09:32 CET] <godofgrunts> Trying to just isolate audio with -vn -c:a copy out.aac isn't working either so something is definitely screwed up.
[16:14:03 CET] <godofgrunts> Just to confirm, using 2.8 does exactly what I expect it to
[17:34:34 CET] <kerio> CCFL_Man: ffprobe it?
[17:35:14 CET] <kerio> 720p30 h264 content looks decent at 2ish mbps
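e.g. something along these lines reports the video codec and bitrate of the .ts (file name is a placeholder; per-stream bit_rate may come back N/A for mpegts, in which case the format-level bitrate is the fallback):

    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,bit_rate -of default=noprint_wrappers=1 input.ts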
[19:26:46 CET] <teratorn> if I need to encode the result of a `concat' filter multiple times (specifying multiple outputs on the command line) does that necessitate a `split' filter? or can I somehow do multiple encodes off the concat filter?
[19:44:14 CET] <MegaSoft> hey
[19:45:07 CET] <DHE> teratorn: unless you need or want some custom filters, you can just specify more output options and files
[19:45:36 CET] <DHE> ffmpeg -f concat -i concat.txt  [output options 1] output1.mkv  [output options 2] output2.mp4 [output options 3] output3.flv
[19:48:11 CET] <teratorn> DHE: only the first one gets the right result because i -map the result of the concat *filter*
[19:48:22 CET] <teratorn> (not muxer)
[19:48:50 CET] <teratorn> it's ok, using a split filter seems like the right thing
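A sketch of that split/asplit arrangement with a concat filter feeding two differently-encoded outputs (input files and codec options are placeholder assumptions):

    ffmpeg -i a.mp4 -i b.mp4 -filter_complex \
      "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]; \
       [v]split=2[v1][v2]; [a]asplit=2[a1][a2]" \
      -map "[v1]" -map "[a1]" -c:v libx264 out1.mkv \
      -map "[v2]" -map "[a2]" -c:v libx265 out2.mp4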
[21:35:45 CET] <storrgie> I'm wondering, anyone use any of the Blackmagic Design capture cards? I'd like to know if it's simple to plug into a Linux box and then use ffmpeg to record from the SDI interface to disk
[21:37:35 CET] <sfan5> ffmpeg has decklink support
[21:37:50 CET] <storrgie> yeah I see that
[21:37:52 CET] <storrgie> (https://www.ffmpeg.org/ffmpeg-devices.html#decklink)
[21:38:02 CET] <storrgie> I'm wondering a bit more about how I could use the capture devices
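A minimal decklink capture sketch, assuming the card shows up under a name like "DeckLink Mini Recorder" (the device name is a placeholder; list the real devices and their supported formats first):

    ffmpeg -f decklink -list_devices 1 -i dummy
    ffmpeg -f decklink -list_formats 1 -i "DeckLink Mini Recorder"
    ffmpeg -f decklink -i "DeckLink Mini Recorder" -c:v libx264 -c:a aac capture.mkv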
[21:38:34 CET] <storrgie> I want to record video and audio from a game on a laptop, but I'd also _love_ to be able to record a separate audio channel from the microphone (headset) so that I can de-mux it later
[21:39:09 CET] <storrgie> I see that BMD makes a "Mini Converter HDMI to SDI 6G" which does HDMI -> SDI, and can carry maybe some additional audio channels via an analog input
[21:39:29 CET] <BtbN> Why would you even use SDI for that?
[21:39:55 CET] <storrgie> well the game is intense, and I want to capture the game video, game audio, and microphone audio
[21:40:01 CET] <storrgie> I want to do this for 10-30 machines at the same time
[21:40:18 CET] <BtbN> You're not going to get that working on a single PC
[21:40:25 CET] <storrgie> right, I'd need two machines to do capture
[21:40:40 CET] <BtbN> I'd say more than 8 is a bad idea
[21:40:49 CET] <BtbN> and even then you need quite a big and expensive CPU
[21:40:54 CET] <BtbN> and tons of PCIe lanes
[21:41:08 CET] <storrgie> the tradeoff is... do you let the laptop attempt to capture this stuff locally... then snarf it later (the laptops have two SSD in them), or do I do something like a large capture box(s) with the HDMI to SDI converters
[21:41:13 CET] <storrgie> I also have to capture some video of the room
[21:41:21 CET] <storrgie> which is why I started on HDMI->SDI
[21:41:38 CET] <storrgie> so assuming the budget is infinite, I can probably go this route
[21:41:57 CET] <storrgie> I could also just attempt to use software to capture this stuff on the laptops themselves... but there is an incurred overhead
[21:41:58 CET] <BtbN> If the budget is infinite there are professional video mixing tables designed for stuff like that.
[21:42:07 CET] <BtbN> They are easily 100k€ or more though
[21:42:19 CET] <storrgie> I was thinking of doing this for like 25k
[21:42:26 CET] <storrgie> also, I need to pack it all up and move it
[21:42:52 CET] <storrgie> I have an 11U server chassis that I have boxes in, getting powerful machines to run decklink cards won't be a problem
[21:42:56 CET] <BtbN> You'll want an external hardware mixer at least for the audio
[21:43:05 CET] <storrgie> well thats what I'm wondering
[21:43:15 CET] <storrgie> I don't care about levels or anything, I just want the source stuff
[21:43:31 CET] <storrgie> this is being captured during an "experiment"; it gets reviewed afterwards for a couple of months, then discarded
[21:43:38 CET] <storrgie> this isn't meant to be broadcast-level stuff
[21:43:41 CET] <BtbN> The biggest setup I dealt with was 4 PCs, and that was already a mess and put the capture PC close to its limits
[21:44:42 CET] <storrgie> so, if I went the software route and literally just saved stuff locally to the machines... what would you suggest
[21:44:58 CET] <storrgie> windows software, nvidia cards (not quadro so we can't use the fancy nvidia broadcast stuff)
[21:45:02 CET] <BtbN> That will put a lot of strain on the CPU of each PC
[21:45:06 CET] <storrgie> right it would
[21:45:14 CET] <storrgie> that's why I was looking at the HDMI->SDI cards
[21:45:29 CET] <BtbN> I still don't understand in what way SDI would be of any use
[21:45:41 CET] <storrgie> it's just a way to get signals off the machine over a distance
[21:45:55 CET] <storrgie> and the thought was that if it was a "mirrored display" there would be less overhead
[21:46:38 CET] <BtbN> You're probably going to be cheaper with some DTP transmitter instead of SDI
[21:46:43 CET] <BtbN> SDI is pointlessly expensive
[21:46:52 CET] <storrgie> do you have a suggested product/brand I could look at?
[21:46:56 CET] <BtbN> no
[21:47:03 CET] <storrgie> I was looking for a reputable place to buy stuff, and people seem to like BMD
[21:47:17 CET] <storrgie> also was looking for cameras I could capture the same way that I'd be capturing the laptops
[21:49:38 CET] <storrgie> I mean I know this is entirely silly... but it's an interesting challenge
[21:57:32 CET] <bray90820> Is there a way to split a video by chapter
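One way to sketch that: list the chapter boundaries with ffprobe, then stream-copy each range (file names and the start/end placeholders are hypothetical):

    ffprobe -v error -show_chapters -of csv input.mkv
    ffmpeg -i input.mkv -ss <chapter_start> -to <chapter_end> -c copy chapter01.mkv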
[22:14:22 CET] <alexpigment> I'm not sure what filter I'm supposed to be using, but I'd like to put a solid black bar over the letterboxed areas in a video (and keep the letterboxing). the current letterboxed areas have a lot of noise
[22:14:33 CET] <alexpigment> I know pad is not what I'm looking for
[22:21:25 CET] <BtbN> alexpigment, crop and then pad?
[22:21:49 CET] <alexpigment> well, it's interlaced, and I'd like to keep it interlaced without introducing any field issues
[22:22:10 CET] <alexpigment> I know I could deinterlace, crop, pad, interlace, etc
[22:22:19 CET] <BtbN> most filters don't support interlaced content.
[22:22:21 CET] <alexpigment> but that just seems like a lot of quality loss potential that i'd like to avoid
[22:22:32 CET] <alexpigment> hmmm
[22:22:36 CET] <alexpigment> ok
[22:22:57 CET] <BtbN> ignoring it being interlaced, and only cropping in multiples of two should be perfectly fine though?
[22:23:35 CET] <alexpigment> maybe. i'd have to do some tests I suppose
[22:23:48 CET] <alexpigment> but to your knowledge, there's no way to overlay a color in a particular area of the video?
[22:24:30 CET] <BtbN> that'll be a lot less efficient than crop+pad
[22:24:54 CET] <alexpigment> I don't really understand why, but I'll take your word for it
[22:25:13 CET] <BtbN> Because you'll only need two filters and no extra inputs
[22:25:30 CET] <BtbN> instead of at least two extra inputs of the desired size and color, and then two overlay filters
[22:25:52 CET] <BtbN> if you're only cropping at the top/bottom, it will also be way easier to configure
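For a 720x480 source with a noisy 60-pixel band at the top, the crop+pad version would look something like this (the 720x480 frame size and file names are assumptions):

    ffmpeg -i in.mpg -vf "crop=720:420:0:60,pad=720:480:0:60:black" -c:a copy out.mpg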
[22:27:23 CET] <durandal_1707> alexpigment: so your video is already padded?
[22:27:35 CET] <alexpigment> it's inherently letterboxed
[22:28:44 CET] <durandal_1707> alexpigment: so you only need to fill box of black color over some area?
[22:28:52 CET] <alexpigment> right
[22:29:07 CET] <alexpigment> 720x60 or thereabouts, starting at the top left corner
[22:29:34 CET] <durandal_1707> I wrote a fillborders filter just for that
[22:29:54 CET] <alexpigment> oh really?
[22:29:57 CET] <durandal_1707> need to finish it and push it to master
[22:29:59 CET] <alexpigment> i guess this isn't committed?
[22:30:02 CET] <alexpigment> gotcha
[22:30:20 CET] <alexpigment> well, this actually comes up every now and then for me
[22:30:48 CET] <alexpigment> a lot of SD programming happens to be letterboxed and people put persistent advertisement crap in the top letterboxed area
[22:30:59 CET] <durandal_1707> with it you set which borders you want colored black: top, bottom, left, right
[22:31:01 CET] <alexpigment> so i look forward to your filter
[22:31:31 CET] <alexpigment> would it be worth following the model of crop and pad?
[22:31:45 CET] <alexpigment> where you specify width:height:xposition:yposition?
[22:32:00 CET] <alexpigment> oh, nevermind
[22:32:11 CET] <alexpigment> you're talking about having it always fill from one of the sides
[22:32:16 CET] <alexpigment> in which case, that is actually easier
[22:34:26 CET] <durandal_1707> the thing is you can do that already with vapoursynth and avisynth, and in lavfi with a drawbox hack
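The drawbox version of the same thing, filling a 720x60 strip at the top-left with solid black (t=fill makes it a filled box rather than an outline; file names are placeholders):

    ffmpeg -i in.mpg -vf "drawbox=x=0:y=0:w=720:h=60:color=black:t=fill" -c:a copy out.mpg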
[22:35:18 CET] <alexpigment> vapour/avisynth is on my list of "things to learn"
[22:35:22 CET] <alexpigment> just haven't gotten around to it
[22:35:45 CET] <alexpigment> as ffmpeg gets better and better, i find less of a need to go outside of its own capabilities
[22:49:18 CET] <furq> "drawbox hack"?
[22:55:29 CET] <durandal_1707> furq: drawing black boxes around noisy areas
[22:55:38 CET] <furq> what's hacky about that
[22:55:46 CET] <furq> is it just slow
[22:57:06 CET] <durandal_1707> it's a bit of work
[22:57:39 CET] <durandal_1707> I didn't say that the drawbox filter is a hack
[22:57:50 CET] <furq> fair enough
[23:31:48 CET] <tolland> Hi, I have an onvif camera sending events data in the rtp stream. ffprobe detects the stream, but it's detected as "Stream #0:1: Data: none" and the probe report shows nothing found: https://pastebin.com/3BkvbhGA
[23:33:03 CET] <tolland> however there is definitely data coming down, as I can dump it out with gst-launch-1.0 rtspsrc location='rtsp://192.168.0.32:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif'  ! application/x-rtp, media=application ! fakesink dump=true
[23:33:18 CET] <tolland> which produces lots of xml which is what I am expecting
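To check whether ffmpeg is actually receiving those packets, one option to try is dumping the data stream raw through the data muxer (using stream index 0:1 as reported by ffprobe; the output file name is a placeholder):

    ffmpeg -i 'rtsp://192.168.0.32:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif' -map 0:1 -c copy -f data events.xml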
[00:00:00 CET] --- Wed Nov 29 2017

