[FFmpeg-user] Detecting frames on Raw video stream

Charl Wentzel charl.wentzel at vodamail.co.za
Wed Jul 13 20:32:18 EEST 2016


Hi Guys

I have a Basler Ace IP camera with which I'm trying to create a video 
stream server that allows multiple connections.  The camera does not 
provide a streaming output whatsoever, but has a proprietary 
interface.  Using their SDK I managed to pull out raw images at around 
12 fps.  The images are 8-bit monochrome (greyscale) at 2592x1944 
pixels (~5MB each).
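(As a sanity check on the numbers: 2592 x 1944 pixels x 1 byte = 
5,038,848 bytes, just under 5 MB per frame, so at 12 fps the raw feed 
is about 58 MiB/s, i.e. roughly 480 Mbit/s on the wire.)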

A. What I have achieved so far:
   Camera --(Eth)--> My app using SDK --> (each frame as a file) --> 
FFmpeg --> mp4 movie

I save each individual frame to a file, then use FFmpeg to create a 
movie from these files with the following:
$ ffmpeg -pixel_format gray -video_size 2592x1944 -framerate 10 
-start_number 1 -i test_%03d.raw \
     -vcodec libx264 -x264opts sliced-threads -pix_fmt yuv420p -preset 
ultrafast -tune zerolatency -vsync cfr -g 10 \
     -f mp4 test_file.mp4

The result is perfect!

B. My next step:
   Camera --(Eth)--> My app with SDK --(TCP socket)--> FFmpeg --> mp4 
movie

Now I'm trying to push those same frames to a TCP socket and have FFmpeg 
process them directly with something like:
   $ ffmpeg -f rawvideo -pixel_format gray -video_size 2592x1944 
-framerate 10 -i tcp://192.168.1.40:5556 \
     -vcodec libx264 -x264opts sliced-threads -pix_fmt yuv420p -preset 
ultrafast -tune zerolatency -vsync cfr -g 10 \
     -f mp4 test.mp4

This does not work correctly.  Because FFmpeg only sees a stream of 
bytes, it does not know where each frame starts, so the images come 
out misaligned.
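With -f rawvideo, FFmpeg simply slices the incoming byte stream into 
fixed chunks of 2592 x 1944 = 5,038,848 bytes each; if the stream 
starts mid-frame, or any bytes are dropped, every chunk after that 
point is shifted.  A minimal way to reproduce this with the saved 
frames from step A (assuming a traditional netcat; the BSD variant 
drops the -p flag):

$ cat test_*.raw | nc -l -p 5556
$ ffmpeg -f rawvideo -pixel_format gray -video_size 2592x1944 \
     -framerate 10 -i tcp://192.168.1.40:5556 \
     -vcodec libx264 -pix_fmt yuv420p -f mp4 test.mp4

This decodes cleanly because the stream begins exactly on a frame 
boundary; prepending even a single stray byte to the stream reproduces 
the misalignment.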

My questions:
1. Is there a way to force FFmpeg to recognise the start of each frame 
correctly on a raw feed, e.g. is there a marker I can insert?
2. Is my only hope to put the images into a container format like 
MPEG-TS?  If so, what can I use that is relatively straightforward? 
(A sketch of what I mean follows below.)
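For question 2, this is roughly what I have in mind.  As far as I can 
tell MPEG-TS has no mapping for uncompressed video, but FFmpeg's own 
NUT container can carry rawvideo and frames the stream itself, so a 
sketch could be (my_app stands in for my SDK application writing raw 
frames to stdout):

$ my_app | ffmpeg -f rawvideo -pixel_format gray -video_size 2592x1944 \
     -framerate 10 -i - -c copy -f nut "tcp://0.0.0.0:5556?listen"

and on the receiving side the container restores frame alignment:

$ ffmpeg -f nut -i tcp://192.168.1.40:5556 \
     -vcodec libx264 -pix_fmt yuv420p -f mp4 test.mp4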

I'm hoping someone here has some ideas for me.

Notes:
I'm not using a standard IP camera because:
  - they all provide image streams with H.264 compression, which 
generates artifacts on the images,
  - the H.264 compression takes time to compress and then needs to be 
decompressed before I can process the images, which adds a delay.
The application calls for perfectly clean raw video data that can be 
processed immediately.
I've considered piping the images directly from my app into FFmpeg, 
but this is not possible: I need multiple streams, and the camera only 
allows one connection, so I cannot have multiple FFmpeg instances each 
processing the images.
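(For completeness: in bash a single pipe can at least be duplicated 
with tee and process substitution, which might do for a fixed number 
of local consumers, e.g.

$ my_app | tee >(ffmpeg -f rawvideo -pixel_format gray \
     -video_size 2592x1944 -framerate 10 -i - \
     -vcodec libx264 -pix_fmt yuv420p -f mp4 copy1.mp4) | \
   ffmpeg -f rawvideo -pixel_format gray -video_size 2592x1944 \
     -framerate 10 -i - -vcodec libx264 -pix_fmt yuv420p -f mp4 copy2.mp4

with my_app again standing in for my SDK application, but a TCP server 
in the app is more flexible for clients that connect and disconnect at 
will.)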

Thanks
Charl

