[FFmpeg-user] Transcoding, recording, and displaying multiple webcams

Moritz Barsnick barsnick at gmx.net
Tue Dec 11 18:01:26 EET 2018

Hi Rob,
a lot of questions, so let me just hint at a few things:

On Tue, Dec 11, 2018 at 06:58:41 -0800, Rob Lewis wrote:
> I've been using mjpg_streamer to display live video from 2 webcams on a 
> web page (in separate iframes).

So the browsers already play that just fine? Is it actually MJPEG
streams which mjpg_streamer delivers? (Just wondering. ;-))

> It works, but now I need to also record the video while this happens.

You want to record the cameras' outputs on the originating Pi, or you
want to capture the streams elsewhere, in addition to the browser? From
your further text, I reckon the former.

> Can ffmpeg do this?

Yes, in principle: ffmpeg can capture from v4l2 devices and write
several outputs per input. Details below.
>   * The webcams are typical USB cameras that output either motion JPEG
>     or raw video. I'm currently using 640x480 MJPEG @30 fps.
>   * They're connected to USB ports on a Raspberry Pi 3 B+.

So presumably mjpg_streamer captures them from their video4linux2
devices. (I used to know mjpg_streamer, but I'm too lazy to look it
up.)

>   * I want to record the video streams in a space-efficient format while
>     simultaneously serving them to browsers connected to the Pi's web
>     server.

Assuming you're aiming to record on the Pi as well: ffmpeg can both
output a file in one compressed format, while forwarding another format
to the net. Or possibly to mjpg_streamer?

>   * Currently using 2 cameras but would like the option for 3 or 4.

The more cameras, the more workload.

> The ffmpeg docs are pretty intimidating for this noob; what would the 
> command(s) to do this look like? (the feeds are available as /dev/video0 
> etc.)

Check the documentation of the v4l2 input device in the ffmpeg-devices
documentation; the ffmpeg wiki has a similar page on webcam capture.
That's the input. You also need to configure ffmpeg's output. ffmpeg
can handle multiple outputs from one input (which is your requirement).
The first output, the file part, would be something like:
$ ffmpeg <input options> -c:v libx264 -preset fast <other libx264 options> /path/to/output.mkv

You would add more output options for the second part, streaming to whatever.
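Put together, one ffmpeg instance per camera with both outputs might
look like this. This is a sketch, not tested on a Pi: the file path,
port and URL are placeholders, and the -listen/mpjpeg part assumes a
reasonably recent ffmpeg.

```shell
# One encode (H.264 to disk); the second output just copies the
# camera's MJPEG and serves it as multipart JPEG over HTTP
ffmpeg -f v4l2 -input_format mjpeg -video_size 640x480 -framerate 30 \
       -i /dev/video0 \
       -c:v libx264 -preset veryfast -crf 23 /path/to/cam0.mkv \
       -c:v copy -f mpjpeg -listen 1 http://0.0.0.0:8090/cam0.mjpg
```

Since the streaming output is a stream copy, only one encode runs per
camera, which matters on a Pi.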

I don't know what sort of stream mjpg_streamer serves, and whether
ffmpeg can do that. ffserver might have been the tool for your needs,
but it's not maintained within ffmpeg anymore (or for the time being).
Since mjpg_streamer apparently can do input_http, you could serve from
ffmpeg to mjpg_streamer.

>   * Directly recording raw or MJPEG streams would use far too much disk
>     space. What would the best format for the disk files be? Could it be
>     something that would be easy to burn to a DVD? (just a thought, not
>     a high priority)

DVD would imply a restricted set of the MPEG2 codec (or MPEG1, but...
forget that). For plain storage, with plenty of options to balance
speed versus space, you would probably use H.264. libx264 (an encoder
library used by ffmpeg) has presets for varying speed/space tradeoffs
(ultrafast, superfast, veryfast, faster, fast, medium (the default),
slow, slower and veryslow), so you can play with them until your Pi
manages all your streams. You can also reduce average quality
(-crf <number>, where 23 is the default; higher means lower quality,
which should also be faster to encode). Already there you have quite a
few parameters to play with.
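For experimenting with that tradeoff, two example invocations (output
filenames made up, input options reduced to the minimum):

```shell
# Smaller file, more CPU: slower preset at the default quality
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset slow -crf 23 small.mkv

# Much less CPU, bigger file and/or lower quality
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -crf 28 cheap.mkv
```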

ffmpeg for RasPi also supports their hardware encoder (omx). That might
be significantly faster. I have my doubts that that encoder can encode
four simultaneous streams, but I may be wrong.
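Assuming your ffmpeg was built with the OMX encoder enabled
(--enable-omx-rpi), selecting it would look roughly like this; note
that h264_omx is bitrate-driven, it has no CRF mode:

```shell
# Hardware H.264 encode on the Pi; -b:v sets the target bitrate
ffmpeg -f v4l2 -input_format mjpeg -i /dev/video0 \
       -c:v h264_omx -b:v 2M cam0-omx.mkv
```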

>   * Would it be better and more CPU-efficient to start with the raw
>     video from the cameras?

It's worth a try, but decoding JPEG streams shouldn't use too much CPU.
Just try and report back.

>   * What's the best format to send to browsers, in terms of quality and
>     overall load on the Pi?

I'll leave this to others to answer.

> Bonus questions:
>   * Is there a way to "burn" software-generated text (similar to
>     captions) into one or more of the video feeds?

I believe the drawtext filter's text can be changed at runtime. You may
need to google for this separately. No bonus from me. ;-)
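Untested sketch: drawtext can read its text from a file, and with
reload=1 it re-reads that file for every frame, so an external script
can rewrite caption.txt while ffmpeg runs (font handling depends on
your build; you may need to point fontfile at a .ttf):

```shell
# Burn the current contents of caption.txt into the video;
# reload=1 re-reads the file before each frame
ffmpeg -f v4l2 -i /dev/video0 \
       -vf "drawtext=textfile=caption.txt:reload=1:fontcolor=white:fontsize=24:x=10:y=10" \
       -c:v libx264 -preset veryfast captioned.mkv
```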

>   * Is there a way to combine 2 input streams into a single "side by
>     side" video stream for recording? IOW, 2 views of the same action,
>     displayed in one video window and combined in one recording.

Sure. An ffmpeg instance can handle two inputs, and you would use the
hstack or vstack filters to put them side by side.
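Something like the following (hstack requires both inputs to have the
same height; sizes and the output name are, again, just examples):

```shell
# Capture both cameras and record them side by side in one file
ffmpeg -f v4l2 -input_format mjpeg -video_size 640x480 -framerate 30 -i /dev/video0 \
       -f v4l2 -input_format mjpeg -video_size 640x480 -framerate 30 -i /dev/video1 \
       -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" \
       -c:v libx264 -preset veryfast sidebyside.mkv
```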

> Tips greatly appreciated.

Good luck,
