[Ffmpeg-devel-irc] ffmpeg.log.20140831

burek burek021 at gmail.com
Mon Sep 1 02:05:01 CEST 2014


[05:29] <BobLfoot> I am trying to capture with ffmpeg the audio I am listening to, but can't seem to find the correct '-f alsa -i foo' setup.  arecord -L gives me the following info --> http://paste.fedoraproject.org/129843/14094555 ; suggestions welcome
[05:36] <thebombzen> BobLfoot: I only know how to do this from a GUI, but afaik fedora has one. If you go to the system sound settings, there should be a way to add a "monitor" as an input. A monitor is an output-to-input pipe. It should have a device name once you enable it. Then, ffmpeg -f pulse -i <devicename> should work
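A minimal sketch of the monitor capture thebombzen describes, assuming PulseAudio is running and pactl is installed; the source name below is a placeholder, so list the real names first (monitor sources end in ".monitor"):

    # list PulseAudio sources; pick the ".monitor" entry for your output
    pactl list short sources

    # record what is currently playing (replace the source name with yours)
    ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor capture.wav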
[05:38] <radleader> i'm exporting a video from after effects that contains alpha data, which i want to use an ffmpeg filter graph to composite together with a series of PNG images (that show through the alpha), then produce an mpeg output - what format would have the lowest computational overhead for ffmpeg for the input alpha video?
[05:38] <radleader> should i use h264 and have an extra video file with the alpha mask in it?
[05:38] <radleader> should i use an avi container and just keep it uncompressed?
[05:38] <radleader> not sure what the best plan here is
[05:50] <troy_s> radleader: You will end up with compression artifacts, which is why alpha in a codec is rarely a decent choice.
[05:51] <troy_s> radleader: If you want to avoid the compression artifacts, you can store to FFV1 or HuffYUV and force RGB storage. (Even lossless HuffYUV etc. will result in non-1:1 values if it hops over to the YCbCr domain.)
[05:51] <troy_s> radleader: Curious why you are relying on codecs here.
[05:54] <radleader> so what i have is a video made by an artist, with alpha channels in some picture frames in the scene where he wants me to dynamically place images. i've rendered each frame of dynamic image content as a single composite image, and now i want to place that image (same dimensions as the video) into the video and produce a new stream. the fact i have an alpha channel is just a processing step; the final video will just be a normal mpeg or wmv depending
[05:54] <troy_s> radleader: By dynamically do you mean like art installation in realtime or such?
[05:55] <BobLfoot> thanks thebombzen - I'll look for a way to do that, the CentOS gui doesn't appear to offer it at first blush
[05:55] <troy_s> radleader: Generally I will almost always advise against codecs for any sort of motion picture imaging work. It's a rather unfortunate trend that has hit due to some vendors flogging their particular workflows.
[05:55] <radleader> troy_s: i have a stream of images coming in from an event and a few pieces of video that i basically want to place random images into, and output it as a normal video
[05:55] <troy_s> radleader: Is this a realtime thing? Or is the project a baked presentation?
[05:56] <radleader> i've got an after effects file rather than any actual videos, so i can render whatever output i need
[05:56] <radleader> it's real time, they expect to wait a couple of minutes to get their video from the images
[05:57] <radleader> for example there is a man talking beside a picture frame on a wall and that frame should contain the appropriate images at the appropriate time stamps, the artist has masked out the frame (which is what i can make an alpha channel from)
[05:57] <troy_s> radleader: Your two options are likely as follows: 1) Use a still image series. 2) Use an internal RGB codec that is lossless, to avoid compression artifacting and such, which will blow out your alpha. HuffYUV and FFV1 qualify.
[05:57] <troy_s> radleader: Is this going to be done using Processing or something akin to it?
[05:58] <radleader> so what i was thinking is i could just set up a filter graph that takes this so-called alpha channel (let's just call it an 8 bit mask) and the png images, and produces a new video of parts of the input pngs masked correctly. then i could apply this over the actual video of the guy talking to produce a single h264 stream output
[05:59] <troy_s> radleader: Sure. You can also probably do something very easy with PureData
[05:59] <troy_s> radleader: And if you wanted quick and dirty, where quality takes a back seat, go with a codec.
[05:59] <troy_s> radleader: Such as RGB HuffYUV.
[05:59] <troy_s> radleader: Are you familiar with PD?
[06:00] <radleader> i haven't used it but my friends dick around with it pretty often
[06:00] <troy_s> radleader: It might be a very suitable tool for an interactive installation such as this.
[06:00] <troy_s> radleader: Or Processing.
[06:00] <radleader> yeah i've done a lot of processing
[06:00] <radleader> hold on, let me just make an image to verify you know what i am asking
[06:00] <troy_s> radleader: The sort of 'upside' of pure data is that it is very much driven by interactive art and VJ culture.
[06:01] <troy_s> You are more or less into a realtime keying scenario.
[06:01] <radleader> at its most basic i just want to place a png image into a video stream that has been masked by another video stream
[06:02] <radleader> and produce a single stream output
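A sketch of one way to express that as an ffmpeg filtergraph, assuming the mask arrives as a separate grayscale video and that background.mp4, mask.mp4 and frame.png are placeholder names; alphamerge attaches the mask to the PNG as its alpha channel, and overlay composites the result onto the main footage:

    # [2:v][1:v]alphamerge : use the mask video as the PNG's alpha channel
    # [0:v][fg]overlay     : composite the masked PNG over the main video
    ffmpeg -i background.mp4 -i mask.mp4 -loop 1 -i frame.png -filter_complex \
      "[2:v][1:v]alphamerge[fg];[0:v][fg]overlay=shortest=1[out]" \
      -map "[out]" -c:v libx264 output.mp4

alphamerge requires the mask and the PNG to have identical dimensions, which matches the setup radleader describes.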
[06:02] <troy_s> Yes.
[06:03] <radleader> so you don't think using a filter graph is a good idea? i already made a test doing this, but my input video contained the alpha already and i just placed the pngs behind it conceptually
[06:05] <troy_s> radleader: It is ultimately your choice. I would gear it to the quality needed.
[06:06] <troy_s> radleader: A realtime installation quality need is probably lower, so you can be loose.
[06:06] <radleader> so where does this filter graph answer sit on the spectrum of quality and performance?
[06:07] <radleader> i mean if i can do anything within the filter graph solution to ensure the input i am providing will composite at good speed and look nice, i should do it
[06:07] <radleader> i don't know what formats are good to use from after effects or if i should ditch png and have the rasters in some other format before they go in
[06:07] <troy_s> radleader: I haven't used a filtergraph for a realtime key.
[06:07] <radleader> or if i should have two video signals (one for the mask, rather than having it in the stream)
[06:07] <troy_s> radleader: I know of some artists that have used PD for such a thing.
[06:08] <radleader> i was pretty happy with how well it worked when i put this together
[06:08] <troy_s> radleader: Then roll with it.
[06:08] <troy_s> radleader: Use HuffYUV with RGB.
[06:08] <troy_s> and you have your answer.
[06:08] <radleader> okay cheers
[06:09] <troy_s> radleader: Just make sure you get RGB
[06:09] <troy_s> radleader: Because the transform from RGB to YCbCr to RGB will mangle your alpha a little more.
[06:09] <troy_s> radleader: And I'm not sure of the command-line flag to control that.
[06:10] <troy_s> radleader: Once it is encoded to RGB, you don't need to worry.
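For what it's worth, the flag in question is -pix_fmt; a minimal sketch, with input.mov as a placeholder and assuming your build's huffyuv encoder accepts rgb24 (ffvhuff or ffv1 are alternatives if it does not, and bgra is the pick if the alpha itself has to travel in the file):

    # lossless intermediate kept in RGB to avoid a YCbCr round trip
    ffmpeg -i input.mov -c:v huffyuv -pix_fmt rgb24 intermediate.avi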
[06:14] <radleader> sweet that looks good thanks troy_s
[06:14] <troy_s> radleader: I _would_ look into Pure Data though.
[06:15] <troy_s> radleader: If only because it could be a very cool thing for this sort of installation.
[06:15] <troy_s> radleader: You can probably overdeliver something unique, _and_ have it totally controllable via some PD widget sliders etc.
[06:15] <radleader> it wouldn't add value to the current render pipeline (actually involves some windows machines (sigh) in the cloud)
[06:15] <troy_s> And have it realtime.
[06:15] <radleader> but yeah i agree
[06:15] <radleader> i've simplified what the image data is doing in my explanation
[06:16] <troy_s> radleader: If you are ultimately going to take something to an end point like broadcast, I'd heavily avoid all codecs
[06:16] <troy_s> radleader: And use a still frame based imaging pipeline right up to delivery.
[06:16] <troy_s> radleader: If that is of any interest to you, feel free to ping me via PM and I might be able to help you structure it in a sane way if an offline / online pipeline is a little unfamiliar to you.
[06:17] <radleader> sure, i understand, i'd ideally have this stream-based end to end. i actually got excited when i saw the zmq filter graph stuff, but it's only for signalling unfortunately
[06:17] <radleader> if it goes well this week i might get some actual money to make it better :)
[06:18] <troy_s> radleader: Good luck. Let me know how it goes. (PM so I get the message.)
[06:18] <radleader> sweet will do
[09:47] <rule_2> I am using this command line to record video from an HDMI to USB3 adapter and it works great, but it compresses to x264 in realtime, which my laptop can't keep up with
[09:47] <rule_2> ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 -map 0 test.mp4
[09:48] <rule_2> what can I do to get it to record raw video? I get that the file will be huge, and am ok with that
[09:52] <rule_2> nvm, -vcodec rawvideo
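Putting that fix into the original command (with -c:v as the current spelling of -vcodec), and noting that raw video sits more comfortably in a container like AVI, NUT or Matroska than in MP4:

    ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
      -c:v rawvideo test.avi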
[09:53] <rule_2> my brain is trashed from the block party music :(
[12:08] <luc4> Hello! I was looking at the output of ffmpeg after extracting the h264 raw stream from a mpeg ts container. I notice that ffmpeg is extracting the stream even from PES packets where there is no header at the beginning. Is this correct?
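For reference, the usual copy-extraction looks like the line below (input.ts is a placeholder name); MPEG-TS already carries H.264 as an Annex B byte stream, so no bitstream filter is needed on the way out:

    ffmpeg -i input.ts -map 0:v:0 -c:v copy raw_stream.h264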
[19:18] <mickkie> Hi All, is the option -ab 256k valid, or is it deprecated?  I am trying to understand why it complains about the encoder for the audio, going from aac to mp2.
[19:20] <JEEB> -ab is an old way of setting it, but IIRC it should work. -b:a 256k is the new way of setting that value
[19:21] <mickkie> Ha!  That was it!  Thank you JEEB. :-)
[19:24] <mickkie> Should, in the same vein, '-ar 48000' now be '-r:a 48000'?
[19:24] <mickkie> I can't find it in the man pages
[19:25] <mickkie> OK, just confirmed: -ar is still valid (and accepts stream specifiers).  Cool
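Putting the thread together, a minimal sketch with the input and output names as placeholders:

    # -b:a replaces the old -ab; -ar is unchanged and accepts stream
    # specifiers too, e.g. -ar:a:0 for the first audio stream
    ffmpeg -i input.mp4 -c:a mp2 -b:a 256k -ar 48000 output.mpg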
[19:25] <mickkie> Thanks again for your help.
[00:00] --- Mon Sep  1 2014

