[FFmpeg-user] Meaning of ffprobe output
exwyeorzee at yahoo.com
Sat Feb 2 18:59:47 EET 2019
Sorry for top posting, I can’t figure out how to preserve the formatting while editing from the bottom with this Yahoo Mail mobile editor...
"OK, my understanding was, "de-interlacing" means to re-encode from
interlaced to progressive _and_ do some interpolation or whatever stuff."
Not entirely, but you got the gist of it, IMO. My highly ignorant and barely researched understanding:
Some de-interlacers alternate fields (e.g. top/bottom) on each output frame, discard the other field, and interpolate the missing lines directly from that single half-frame field. This eliminates the motion jitter/judder of inter-field interpolation, but it increases image and motion noise, because at a fixed frame rate the alternating fields are neither time-aligned nor spatially aligned with each other,
some ignore the same field (e.g. the top one) entirely on every frame and interpolate the missing lines from the remaining field; this also eliminates the alternating-field judder, at the cost of halving the vertical resolution,
some build every output frame by interpolating between a set of two (or more) fields for less motion blur. They algorithmically detect moving objects in the frame and shift them around while merging the fields, trying not to smear the relatively motionless (or disjointedly moving, e.g. camera-motion) background, so there is some loss of motion and image accuracy overall,
some blend the information in the two fields into a single frame without interpolating much beyond the background motion, and merely de-noise the resulting jaggies on detected moving objects with a filter (or maybe I have the interpolation and blending backwards in this case; dunno... not an expert...),
some interpolate and/or filter/blend across prior and successive fields to double the frame rate, so there is an output frame for every input field and motion and image detail are preserved as accurately as possible, because the picture updates with every successive field just as a true interlaced display does...
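For what it's worth, a couple of the approaches above map onto ffmpeg filters. A minimal sketch (the input and output file names are just placeholders; yadif's send_frame mode keeps one output frame per frame pair, send_field doubles the frame rate like the last approach described):

```shell
# One output frame per input frame pair (frame rate preserved):
ffmpeg -i in.mts -vf yadif=mode=send_frame out.mp4

# One output frame per input field (frame rate doubled; motion
# updates on every field, like a true interlaced display):
ffmpeg -i in.mts -vf yadif=mode=send_field out_double.mp4

# bwdif is a newer filter built on yadif with added interpolation:
ffmpeg -i in.mts -vf bwdif=mode=send_field out_bwdif.mp4
```

These are just starting points; both filters have further options (parity, deint) worth reading up on in the ffmpeg filter documentation.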
then also, before de-interlacing, there may be pull-up or pull-down needed to undo a prior interlaced frame-rate conversion between PAL and NTSC...
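In ffmpeg terms, undoing 3:2 pull-down (inverse telecine) is usually done with fieldmatch plus decimate rather than a plain de-interlacer. A sketch, assuming a hypothetical telecined NTSC input file:

```shell
# Match fields back into progressive frames, de-interlace only the
# frames that still show combing, then drop the duplicated frames:
ffmpeg -i telecined.mpg -vf fieldmatch,yadif=deint=interlaced,decimate out.mp4
```

Note this addresses film-to-NTSC telecine; a PAL-to-NTSC interlaced standards conversion (as described below) is much harder to undo cleanly.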
then there's the added complexity of making sure your processing gets the field order correct, because everything gets scrambled if you start your first frame with the top field but your source video starts with the bottom field... And if there was prior clumsy editing with inadequate tools, plus maybe an interlaced pull-up or pull-down frame-rate conversion, you might find the field order of your source reversing in mid-video, and you'd need to re-align your de-interlacer and/or pull-up/pull-down to the source fields, automagically or manually...
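ffmpeg can take some of the guesswork out of field order: the idet filter reports detected field-order statistics, and setfield can force an interpretation when the container's flags are wrong (file names here are placeholders):

```shell
# Print detected field-order (TFF/BFF/progressive) statistics
# for the first 500 frames:
ffmpeg -i in.mts -vf idet -frames:v 500 -an -f null -

# Force bottom-field-first interpretation before de-interlacing:
ffmpeg -i in.mts -vf setfield=bff,yadif out.mp4
```

idet only reports what it detects; for sources whose field order flips mid-video you would still have to cut and process the segments separately.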
I don’t have a clear understanding because I’m not a video expert but if you look it up you can find more info. I probably misunderstood and poorly described lots of details. All I really know for sure is...
They don't all just use variations on fancy math to interpolate a frame from two discrete interlaced fields. The method of selecting which input data to interpolate from also varies, depending on whether motion, clarity, judder, or processing speed matters most, and on whether the frame rate has already been converted by prior ugly interlaced processing.
I only have familiarity with HandBrake, an older version from Ubuntu 12.04 or earlier that still had the external processing filters available for pull-down, special effects, etc. Some of my source material was European, and I tried my best to undo the pre-existing interlaced pull-up scrambling from PAL to NTSC without guidance from an expert or proper tools. My understanding is incomplete, as were my tool set and my results. yadif seemed to work best for me, but the pull-down from NTSC to progressive never quite worked right on some sources, because I had no filter to completely undo the pre-existing pull-up from PAL to NTSC, so there was residual judder I couldn't remove. Overall, though, the output was smoother and clearer than the source played back on a progressive display without de-interlacing.
If you need help selecting or tuning a method, you will need to do some reading, or you won't understand what the experts are trying to tell you. I've done some of this conversion to store progressive scan on my media server, and I wouldn't understand the filters much better either without more reading. Those who know the actual interpolation math are probably rolling their eyes at my hand-waving attempt to explain math I don't know. Ask an expert directly if you need more guidance.
You can also experiment with VLC's de-interlacing menu options to compare results visually, switching methods on the fly in real time or doing side-by-side playback in two instances of the player. The algorithms are probably implemented similarly, if not identically, to ffmpeg's, and probably originate from the same source library, but VLC likely doesn't expose all the knobs to twiddle that exist in the ffmpeg filters, so that loss of function will bias your comparison.
Pausing and inspecting progressive frames from two different processing methods side by side, and comparing processing speed on a long video, is the only way I know to check the results between methods. Maybe there are better statistically based methods or GUI tools for a purist expert to choose the best processing. But hey, it's interlaced source, so we are already dealing with compromises, and there are diminishing returns on the effort invested. I'd just use yadif with default settings for simple de-interlacing and move on, but that's consumer-level thinking and a very old workflow with tools that no longer exist. YMMV.
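ffmpeg itself can produce that side-by-side comparison in a single pass, e.g. splitting the input and stacking the output of two de-interlacers next to each other (file names are placeholders, and the choice of yadif vs. bwdif is just an example):

```shell
# yadif on the left, bwdif on the right, stacked horizontally
# for frame-by-frame visual comparison:
ffmpeg -i in.mts -filter_complex \
  "[0:v]split[a][b];[a]yadif[l];[b]bwdif[r];[l][r]hstack[out]" \
  -map "[out]" compare.mp4
```

Playing compare.mp4 back and pausing on motion-heavy scenes makes the differences between the two methods easy to spot.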
Good luck and I hope I haven’t spammed the list with my ignorance. I’m just trying to help. Feel free to refute and/or to correct my inexpert explanation. I’ve just relayed almost everything I know about video processing lol.
Sent from Yahoo Mail for iPhone
On Friday, February 1, 2019, 2:46 PM, Ulf Zibis <Ulf.Zibis at gmx.de> wrote:
Am 30.01.19 um 15:08 schrieb Carl Eugen Hoyos:
> Without interpolation, this is what all video players do if you disable
> all de-interlacing.
> The problem is what kind of "interpolation" you use, this is called
> de-interlacing, an endless number of algorithms exist.
OK, my understanding was, "de-interlacing" means to re-encode from
interlaced to progressive _and_ do some interpolation or whatever stuff.
ffmpeg-user mailing list
ffmpeg-user at ffmpeg.org