What is it?

The decomb filter only deinterlaces frames that are visibly interlaced. It is enabled in the Picture Settings.

I care because?

You can leave this on all the time and never have to worry about interlacing. It will handle everything for you.

Ummm… okay. And what is interlacing?

A lot of video content out there is interlaced. Whenever things move around in interlaced video, they create combing artifacts that look like this:

An example of an interlaced frame.

It is fixed through a filtering process called deinterlacing. Here is what that frame looks like after being run through a deinterlacer.

An example of an interlaced frame after being filtered by decomb

Great. Why can't I just leave the old deinterlacer on all the time?

The downside of deinterlacing is that it reduces picture quality. The filtering process can lose up to half the frame's horizontal lines, throwing away detail. It's not necessary for progressive video, which isn't interlaced.

A progressive frame looks like this:

An example of a progressive frame

But note the reduction in fine detail when it's deinterlaced:

An example of a progressive frame that has been deinterlaced

So it's not good to run a deinterlacer all the time. If you don't need to reduce picture quality, why do it? Not to mention, it can really slow down an encode.

Sometimes content is mixed, cutting back and forth between interlaced and progressive video. Even in interlaced content, filtering is only necessary when either something in the scene or the camera moves, producing those combing artifacts.

So what does decomb do differently?

The decomb filter looks at each pixel of each frame of a video. It then only deinterlaces frames that show visible amounts of combing.

This means you never have to check if a video you're encoding is interlaced -- just run the decomb filter all the time and it'll take care of everything.

It will deinterlace the first of those examples above, and leave the second one alone. In fact, the "after" shot of the janitor and the "before" shot of ''Bender's Game'' were both the output of the decomb filter.

Technical Stuff

What deinterlacing filter does it use?

By default:

  • When a frame shows a significant amount of combing, decomb runs a tweaked version of HandBrake's "Slower" deinterlacing filter. That's yadif, a spatially (within one frame) and temporally (between several frames) aware deinterlacer. The tweak is that it looks at more pixels when generating the spatial predictions.
  • When a frame is only very slightly combed, decomb runs a blending deinterlacer called a lowpass-5 filter. This preserves more detail from the image than yadif, and in particular prevents noticeable deinterlacing artifacts on progressive video.

Which deinterlacer is used, and whether blending is enabled, can be customized via a decomb options string. See "Mode: Deinterlacing mode" below for more information.

How fast is it?

Well, it's slower than using HandBrake's low quality "Fast" deinterlacing filter.

However, it's faster than using the high quality "Slow" or "Slower" filters, provided you don't enable EEDI2 or mcdeint. The only times decomb will take longer than those are when EEDI2 and/or mcdeint are enabled, or when every single frame of the video is moving and the entire thing is interlaced. It's faster to check for interlacing than to deinterlace -- so as long as decomb can skip deinterlacing a decent number of frames, it will come out faster.

How does it see which frames are combed?

It combines several techniques gleaned from tritical's decombing filters for AviSynth.

Building the combing mask

For every pixel in the frame, it asks a few questions.

  • Is the current pixel's intensity significantly different from those of the pixels above and below?
  • Are the intensities of the current pixel and the ones above and below significantly different from the intensities of those pixels in the previous and next frames? (Asking this question makes decomb a temporal filter, and is why it only deinterlaces when there's motion.)
  • If you blur together the current pixel and the ones two above and two below, are they significantly different from what you get if you blur together the pixels directly above and below? This makes the filter less susceptible to noise. It separates the interlaced image into its constituent fields, then calculates an average for each field, weighing the current pixel more and the pixels further away less.

As it goes through each pixel of the frame, the filter builds up a bitmap array called the combing mask. It just says whether each pixel got a "Yes" to all of those questions.
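The per-pixel test and the mask it builds can be sketched roughly like this (illustrative Python, not HandBrake's actual C code; the threshold values mirror the defaults described later on this page, and the exact blur weights in the third check are an assumption):

```python
def is_combed_pixel(prev, cur, nxt, x, y, motion_thresh=6, spatial_thresh=9):
    """Return True if pixel (x, y) answers 'yes' to all three questions."""
    up, me, down = cur[y - 1][x], cur[y][x], cur[y + 1][x]

    # 1. Spatially different from the pixels above and below?
    spatial = abs(me - up) > spatial_thresh and abs(me - down) > spatial_thresh

    # 2. Temporally different from the same location in adjacent frames?
    motion = (abs(me - prev[y][x]) > motion_thresh or
              abs(me - nxt[y][x]) > motion_thresh)

    # 3. Do blurred per-field averages still disagree? (noise resistance)
    #    Weigh the current pixel's own field more heavily, as described above.
    same_field = (cur[y - 2][x] + 2 * me + cur[y + 2][x]) / 4.0
    other_field = (up + down) / 2.0
    blurred = abs(same_field - other_field) > spatial_thresh

    return spatial and motion and blurred

def build_combing_mask(prev, cur, nxt):
    """Build the 0/1 combing mask for one frame, skipping the border rows."""
    h, w = len(cur), len(cur[0])
    return [[1 if 2 <= y < h - 2 and is_combed_pixel(prev, cur, nxt, x, y) else 0
             for x in range(w)]
            for y in range(h)]
```

Note how a static frame (identical previous and next frames) produces an all-zero mask: question 2 fails everywhere, which is what makes the filter motion-aware.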

Examining the combing mask

Now the filter looks at what it's learned about the frame. It could just add up how many pixels show as combed on the mask, but that's not really a good idea.

The problem is noise. Film grain or digital artifacts or all sorts of other things can lead to tiny single-pixel changes between frames. These imperfections are not combing. In order to lessen their influence on the result, the filter looks for clusters of positive hits.

It goes through the frame in small, non-overlapping blocks. If a significant number of pixels show as combed in any one of those blocks, the filter considers that whole frame combed. It works under the assumption that if that many pixels showed combing all grouped together, a person looking at the video would notice it.

If a smaller but still sizeable number of pixels show combing in a block, the frame is considered to be somewhat-combed.
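A sketch of that block scan (illustrative Python; the 16x16 block and 80-pixel figures echo the example given under "Block Threshold" below, while the somewhat-combed threshold here is a made-up illustration, not the filter's real value):

```python
def classify_frame(mask, block_w=16, block_h=16,
                   combed_thresh=80, somewhat_thresh=40):
    """Scan the mask in non-overlapping blocks; return 'combed',
    'somewhat-combed', or 'clean' for the whole frame."""
    h, w = len(mask), len(mask[0])
    worst = 0
    for by in range(0, h - block_h + 1, block_h):
        for bx in range(0, w - block_w + 1, block_w):
            # Count combed pixels inside this block of the mask.
            hits = sum(mask[y][x]
                       for y in range(by, by + block_h)
                       for x in range(bx, bx + block_w))
            worst = max(worst, hits)
    if worst >= combed_thresh:
        return 'combed'
    if worst >= somewhat_thresh:
        return 'somewhat-combed'
    return 'clean'
```

Because only the worst block matters, scattered single-pixel noise across the whole frame never trips the threshold, but one dense clump does.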

Using the results

What happens next depends on whether or not the frame is flagged as progressive. Being flagged doesn't mean the frame is truly progressive, only that it is part of a progressive segment of the stream.

If the frame is not flagged as progressive, and an examination of the combing mask shows it is combed, then it is handed off to a deinterlacer (by default, yadif with Jon's custom interpolations).

If the frame is not flagged as progressive, and an examination of the combing mask shows it is only somewhat combed, then it is handed off to the lowpass 5 blender (if enabled, otherwise a deinterlacer is used).

If the frame is flagged as progressive, and the combing mask shows it is combed, then it is handed off to the lowpass 5 blender (if enabled, otherwise a deinterlacer is used).
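The three rules above amount to a small dispatch table; a sketch (the labels are illustrative, not HandBrake's internal names):

```python
def choose_filter(flagged_progressive, classification, blend_enabled=True):
    """Return which filter a frame should be handed to, per the rules above."""
    if classification == 'clean':
        return 'none'                    # leave the frame alone
    blend = 'blend' if blend_enabled else 'deinterlace'
    if flagged_progressive:
        # Flagged progressive but visibly combed: use the gentler blender.
        return blend
    if classification == 'somewhat-combed':
        return blend
    return 'deinterlace'                 # not progressive, clearly combed
```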

Is it threaded?

Yes. It uses two sets of threads.

First, it divides the frame up equally between your CPUs and generates the combing mask in parallel.

Then, if the frame is interlaced, it divides the frame up equally between your CPUs again to perform yadif's line-by-line filtering in parallel.
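A sketch of that row-slicing scheme (Python threads for illustration; the real filter uses native threads in C):

```python
from concurrent.futures import ThreadPoolExecutor
import os

def split_rows(height, workers):
    """Divide a frame's rows into roughly equal contiguous slices."""
    base, extra = divmod(height, workers)
    slices, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)
        slices.append((start, start + size))
        start += size
    return slices

def run_in_parallel(frame_height, work_fn, workers=None):
    """Run work_fn(lo, hi) on each row slice concurrently."""
    workers = workers or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(work_fn, lo, hi)
                   for lo, hi in split_rows(frame_height, workers)]
        return [f.result() for f in futures]
```

The same splitter serves both passes: once for building the combing mask, and again (if the frame turns out to be combed) for yadif's line-by-line filtering.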

What do all those parameters mean?

Combing detection parameters:

Mode : Spatial metric : Motion thresh : Spatial thresh : Block thresh : Block width : Block height

EEDI2 parameters (appended to the combing detection parameters):

Magnitude thresh : Variance thresh : Laplacian thresh : Dilation thresh : Erosion thresh : Noise thresh : Max search distance : Post-processing

Plus (appended to the EEDI2 parameters):

Parity (A.K.A. field dominance)

The defaults are pretty good. They are:


These settings aren't as good, but are closer to how the AviSynth Decomb package functions:

7:1:6:20:15:4:8:4:2:50:24:1:-1 (which should emulate its default settings)
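The colon-separated string maps positionally onto the parameter names listed above; a sketch of that mapping (the parsing code is illustrative, not HandBrake's):

```python
# Field order per the parameter lists above: combing detection,
# then EEDI2, then parity.
FIELDS = ["mode", "spatial_metric", "motion_thresh", "spatial_thresh",
          "block_thresh", "block_width", "block_height",
          "magnitude_thresh", "variance_thresh", "laplacian_thresh",
          "dilation_thresh", "erosion_thresh", "noise_thresh",
          "max_search_distance", "post_processing",
          "parity"]

def parse_decomb(options):
    """Map a colon-separated decomb string onto named parameters.
    Trailing fields may be omitted, in which case they simply aren't set."""
    values = [int(v) for v in options.split(":")]
    return dict(zip(FIELDS, values))
```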

Mode: Deinterlacing mode

Controls what deinterlacing filter is applied to combed frames.

  • 0: Does nothing
  • 1: Yadif, with spatial and temporal interlacing checks. This is the "Slower" deinterlacer.
  • 2: Blending interpolation. A "lowpass 5" filter blends fields together.
    See "Using the results" above for what happens when combining blend with another deinterlacer.
  • 4: Cubic interpolation. Jon's custom interpolations.
  • 8: EEDI2 interpolation. High quality but very slow interpolations (mostly useful for animated content).
  • 16: Post-process with mcdeint (motion estimation and compensation, very slow). Mode: 2; QP: 1.
    Note: mcdeint does not work by itself; it needs to be fed by another deinterlacer.
    Also, there is a bug in decomb's implementation of mcdeint (affects both 0.9.4 and 0.9.5).
  • 32: Output the combing mask instead of pictures.

Modes can be layered. For example, yadif (1) + EEDI2 (8) = 9, which will feed EEDI2 interpolations to yadif.
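Since the mode value is a bit field, decomposing it is simple; a sketch (the flag names follow the list above):

```python
# Bit values from the mode list above.
YADIF, BLEND, CUBIC, EEDI2, MCDEINT, MASK = 1, 2, 4, 8, 16, 32

def mode_flags(mode):
    """Return the names of the deinterlacers enabled by a mode value."""
    names = {YADIF: "yadif", BLEND: "blend", CUBIC: "cubic",
             EEDI2: "eedi2", MCDEINT: "mcdeint", MASK: "mask"}
    return [name for bit, name in names.items() if mode & bit]
```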

Working combos:

1: Just yadif
2: Just blend
3: Switch between yadif and blend
4: Just cubic interpolate
5: Cubic->yadif
6: Switch between cubic and blend
7: Switch between cubic->yadif and blend (the default)
8: Just EEDI2 interpolate
9: EEDI2->yadif
10: Switch between EEDI2 and blend
11: Switch between EEDI2->yadif and blend
12-15: Same as 8-11; EEDI2 will override cubic interpolation
16: DOES NOT WORK BY ITSELF -- mcdeint needs to be fed by another deinterlacer
17: Yadif->mcdeint
18: Blend->mcdeint
19: Switch between blending and yadif -> mcdeint
20: Cubic->mcdeint
21: Cubic->yadif->mcdeint
22: Cubic or blend -> mcdeint
23: Cubic->yadif or blend -> mcdeint
24: EEDI2->mcdeint
25: EEDI2->yadif->mcdeint
26: …okay I'm getting bored now listing all these different modes
32: Passes through the combing mask for every combed frame (white for combed pixels, otherwise black)
33+: Overlay the combing mask for every combed frame on top of the filtered output (white for combed pixels)

Metric: Spatial decombing algorithm

Controls what decombing algorithm is used to decide if frames are combed.

  • 0: Transcode's 32detect Label a column of 3 pixels as A, B, and C. If the difference between A and B is greater than color_diff, and the difference between A and C is less than color_equal, the pixel is considered combed. It uses a hard-coded color_equal of 10 and a color_diff of 15.
  • 1: Decomb's IsCombed This is the most popular way of detecting combing in AviSynth. Label a column of three pixels as A, B, and C. Multiply the difference between A and B times the difference between C and B. If the square root of that is greater than the spatial threshold value, the pixel is considered combed.
  • 2: IsCombedTIVTC This is the method that's replacing IsCombed in the Windows world. It's how MeGUI decides if a frame is combed. Instead of only looking at a column of three pixels, it looks at five. Label a column of five pixels as A, B, C, D, and E. Pixels A, C, and E are all from one field, and B and D are from the other field. It applies two blur filters, one to the pixels from each field. So now it has one averaged value for pixels ACE and one averaged value for pixels BD. If the difference between them is greater than the spatial threshold, the middle pixel (C) is considered combed.
  • -1: Disable combing detection All frames are deinterlaced.
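Sketches of the three metrics, applied to a single vertical column of pixel intensities (illustrative Python; thresholds follow the descriptions above, and the exact blur weights in the five-pixel metric are an assumption):

```python
import math

def metric_32detect(a, b, c, color_diff=15, color_equal=10):
    """Metric 0: transcode's 32detect on a column A, B, C."""
    return abs(a - b) > color_diff and abs(a - c) < color_equal

def metric_iscombed(a, b, c, spatial_thresh=9):
    """Metric 1: Decomb's IsCombed on a column A, B, C.
    The product is only positive when B differs from both
    neighbours in the same direction -- the combing pattern."""
    product = (a - b) * (c - b)
    return product > 0 and math.sqrt(product) > spatial_thresh

def metric_iscombedtivtc(a, b, c, d, e, spatial_thresh=9):
    """Metric 2: IsCombedTIVTC on a column A..E.
    A, C, E come from one field; B and D from the other.
    Blur weights are an assumption for illustration."""
    one_field = (a + 2 * c + e) / 4.0
    other_field = (b + d) / 2.0
    return abs(one_field - other_field) > spatial_thresh
```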

Motion Threshold: Temporal Check

Interlacing is only visible during movement. Therefore, it doesn't make sense to check whether a pixel is combed if it's staying still.

The motion threshold is how much the intensity at the current pixel location must have changed since the last frame, or will change in the next frame, for it to be considered in motion. 6-8 are good values. -1 turns off motion detection. The default is 6. This means that if the difference between the current pixel and the same location in the last frame is greater than 6, the pixel is considered in motion; the same goes for the difference between the current pixel and the same location in the next frame.

This motion check is applied regardless of the deinterlacing mode or spatial metric.

Spatial Threshold:

How different the pixels above and below need to be from the current one for it to be seen as combed. The default is 9. This is used in several ways.

First off, it's used in a preliminary check regardless of the mode or metric. Label a column of three pixels as A, B, and C. The filter only checks for combing when the difference between A and B and the difference between C and B are both greater than the spatial threshold. This ensures that there's actually combing, with alternating lines, and not just a sharp edge or a gradient in a progressive image.

Then, it's used again in spatial metrics 1 and 2 (see their descriptions for more detail on how it's used there).

Block Threshold:

How many pixels in a block width * block height window must be combed for the frame to be deinterlaced. The window moves left and right, up and down the frame in non-overlapping chunks.

The idea here is to reduce the effect of noise. If you have 80 combed pixels in a frame, but none of them are clumped together, you're just seeing random noise, not combed lines. But if you have 80 combed pixels inside a 16x16 clump, well, that 16x16 window is going to be obviously interlaced to the human eye.

Block Width:

The width of the window used with the block threshold

Block Height:

The height of the window used with the block threshold

EEDI2 parameters:

Based on the EEDI2 readme:

mthresh/lthresh/vthresh (Magnitude thresh, Laplacian thresh, Variance thresh)

These all control edge detection used for building the initial edge map. mthresh is the edge magnitude threshold… its range is from 0 to 255, lower values will detect weaker edges. lthresh is the laplacian threshold… its range is 0 to 510, lower values will detect weaker lines. vthresh is the variance threshold… its range is 0 to a large number, lower values will detect weaker edges.

default: 10 (mthresh), 20 (lthresh), 20 (vthresh)

dstr, estr (Dilation thresh, Erosion thresh)

These are used for dilation and erosion of the edge map. estr sets the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map. dstr sets the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map. Use the "map" option to tweak these settings as needed.

default: 4 (dstr), 2 (estr)
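A sketch of that dilation/erosion pass on a 0/1 edge map (illustrative; whether the 3x3 count includes the center pixel is an assumption -- here it does not):

```python
def neighbors_3x3(edge, x, y):
    """Count edge pixels in the 3x3 area around (x, y), excluding the center."""
    h, w = len(edge), len(edge[0])
    return sum(edge[j][i]
               for j in range(max(0, y - 1), min(h, y + 2))
               for i in range(max(0, x - 1), min(w, x + 2))
               if (i, j) != (x, y))

def dilate_erode(edge, dstr=4, estr=2):
    """One pass over the edge map with the default dstr/estr thresholds."""
    h, w = len(edge), len(edge[0])
    out = [row[:] for row in edge]
    for y in range(h):
        for x in range(w):
            n = neighbors_3x3(edge, x, y)
            if edge[y][x] and n <= estr:
                out[y][x] = 0          # erosion: too few edge neighbours
            elif not edge[y][x] and n >= dstr:
                out[y][x] = 1          # dilation: surrounded by edges
    return out
```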

nt (Noise thresh)

Defines a noise threshold between pixels in the sliding vectors; this is used to set initial starting values. Lower values should reduce artifacts but sacrifice edge reconstruction, while higher values should improve edge reconstruction but lead to more artifacts. The possible range of values is 0 to 256.

default: 50

maxd (Max search distance)

Sets the maximum pixel search distance for determining the interpolation direction. Larger values will be able to connect edges and lines of smaller slope but can lead to artifacts. Sometimes using a smaller maxd will give better results than a larger setting. The maximum possible value for maxd is 29.

default: 24

pp (Post-processing)

Enables two optional post-processing modes aimed at reducing artifacts by identifying problem areas and then using plain vertical linear interpolation in those parts. The possible settings are:

0 - no post-processing
1 - check for spatial consistency of final interpolation directions
2 - check for junctions and corners
3 - do both 1 and 2

Using the pp modes will slow down processing and can cause some loss of edge directedness.

default: 1


Parity (A.K.A. field dominance)

Specify whether the source is Top Field First (0) or Bottom Field First (1). The default is -1 (auto-detected by the video decoder).

Last modified on 05/31/11 23:10:43