Tag Archives: bitrate

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes in the gamma or gain within the image. As you are starting with 8 bits, or 240 shades of grey (levels 16 to 255, assuming recording to 109%), and encoding to 240 shades, the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let's say, level 128 should now be level 128.5, this can't be done; we can only record whole levels, so it's rounded up or down to the closest one. This rounding reduces the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth graduation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and your plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5
But we can't record half levels, only whole ones, so for 8 bit these get rounded to the nearest whole value:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur; our smooth gradation now has some marked steps.
If you render to 10 bit you get more in-between steps. When level 128 is determined to be 128.5 by the plugin, it can now actually be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So (approximately, translating to 10 bit) level 128 would be 499 and 128.5 would be 501:
128.5 = 501

129 = 503

129.4 = 505

131.5 = 513

132 = 515

133.5 = 521

So you can see that we now retain in-between steps that are not present when we render to 8 bit, and our gradation remains much smoother.
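
Here is the same arithmetic written out as a quick Python sketch. It uses the illustrative numbers above and the approximate 3.9x scale factor, so treat it as a demonstration of the principle rather than exact codec maths:

```python
def round_half_up(v):
    # Conventional rounding (0.5 always rounds up), matching the example above.
    return int(v + 0.5)

graded = [128.5, 129.0, 129.4, 131.5, 132.0, 133.5]   # desired levels after grading

eight_bit = [round_half_up(v) for v in graded]         # must be stored as whole 8 bit levels
ten_bit   = [round_half_up(v * 3.9) for v in graded]   # roughly 3.9 ten bit steps per 8 bit step

print("8 bit: ", eight_bit)   # [129, 129, 129, 132, 132, 134] - values collapse into bands
print("10 bit:", ten_bit)     # [501, 503, 505, 513, 515, 521] - the in-between steps survive
```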

The 8 bit or 10 bit debate.

Over the years there have been many, often heated debates over the differences between 8 bit and 10 bit codecs. This is my take on the situation, from the acquisition point of view.

The first thing to consider is that a 10 bit codec requires a 30% higher bitrate to achieve the same compression ratio as the equivalent 8 bit codec. So recording 10 bit needs bigger files for the same quality. The EBU recently evaluated several different 8 bit and 10 bit acquisition codecs and their conclusion was that for acquisition there was little to be gained by using any of the commonly available 10 bit codecs over 8 bit because of the data overheads.

My experience in post production has been that what limits what you can do with your footage, more than anything else, is noise. If you have a noisy image and you start to push and pull it, the noise in the image tends to limit what you can get away with. If you take two recordings, one at a nominal 100Mb/s and another at say 50Mb/s, you will be able to do more with the 100Mb/s material because there will be less noise. Encoding and compressing material introduces noise, often in the form of mosquito noise as well as general image blockiness. The more highly compressed the image, the more noise and the more blockiness. It's this noise and blockiness that will limit what you can do with your footage in post production, not whether it is 10 bit or 8 bit. If you have a 100Mb/s 10 bit compressed HD recording and a comparable 100Mb/s 8 bit recording, then you will be able to do more with the 8 bit recording because it will in effect be 30% less compressed, which gives a reduction in noise.

Now if you have a 100Mb/s 8 bit recording and a 130Mb/s 10 bit recording things are more evenly matched, and the 10 bit recording, if it comes from a very clean, noise free source, may have a very small edge. In reality, though, all cameras produce some noise, and it's likely to be the camera noise that limits what you can do with the images, so the 10 bit codec has little advantage for acquisition, if any.

I often hear people complaining about the codec they are using, citing that they are seeing banding across gradients such as white walls or the sky. Very often this is nothing to do with the codec; very often it is being caused by the display they are using. Computers seem to be the worst culprits. Often you are taking an 8 bit YUV codec, crudely converting that to 8 bit RGB and then further converting it to 24 bit VGA or DVI, which then gets converted back down to 16 bit by the monitor. It's very often all these conversions between YUV and RGB that cause banding on the monitor, not the fact that you have shot at 8 bit.
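
As a rough illustration of the principle (this is a simplified sketch, not the maths any particular graphics driver or monitor actually uses), here a smooth video-range luma ramp is expanded to full-range 8 bit RGB and then crudely quantised the way a 16 bit style display might treat it:

```python
def expand_to_full_range(y):
    # Video-range luma (16-235) expanded to full-range 0-255, rounded to 8 bit.
    return min(255, max(0, int((y - 16) * 255 / 219 + 0.5)))

def display_quantise(v):
    # Crude 5 bit quantisation (as a 16 bit 5-6-5 display might apply to red
    # and blue), scaled back up to 8 bit so the values are comparable.
    five_bit = v >> 3
    return (five_bit << 3) | (five_bit >> 2)

ramp = range(16, 236)                              # 220 smooth video-range luma steps
rgb  = [expand_to_full_range(y) for y in ramp]     # still roughly 220 distinct levels
disp = [display_quantise(v) for v in rgb]

print("distinct levels reaching the screen:", len(set(disp)))   # only about 32 - visible bands
```

Around 220 smooth steps arrive at the screen as only about 32, so the gradient bands on the monitor even though the recording itself was fine.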

There is certainly an advantage to be had by using 10 bit in post production for any renders, grading or effects. Once in the edit suite you can afford to use larger codecs running at higher bit rates. ProRes HQ or DNxHD at 185Mb/s or 220Mb/s are good choices, but these often wouldn't be practical as shooting codecs, eating through memory cards at over 2GB per minute. It should also be remembered that these are "I" frame only codecs, so they are not as efficient as long GoP codecs. From my point of view I believe that to get something equivalent to 8 bit MPEG-2 at 50Mb/s you would need a 10 bit I frame codec running at over 160Mb/s. How do I work that out? Well, if we consider that MPEG-2 long GoP is 2.5x more efficient than I frame only, then we get to 125Mb/s (50 x 2.5). Next we add the required 30% overhead for 10 bit (125 x 1.3), which gives 162.5Mb/s. This assumes the minimum long GoP efficiency of x2.5; very often the long GoP advantage is closer to x3.
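
For anyone who wants to check the sums, here they are as a quick Python sketch; the 2.5x long GoP figure and the 30% ten bit overhead are the rules of thumb quoted above, not measured values:

```python
# Back-of-the-envelope sums from the paragraph above.
long_gop_8bit_rate = 50       # Mb/s, the reference 8 bit MPEG-2 long GoP recording
i_frame_penalty    = 2.5      # I frame only needs roughly 2.5x the bitrate for similar quality
ten_bit_overhead   = 1.3      # 10 bit needs roughly 30% more bitrate than 8 bit

i_frame_8bit_rate  = long_gop_8bit_rate * i_frame_penalty    # 125 Mb/s
i_frame_10bit_rate = i_frame_8bit_rate * ten_bit_overhead    # 162.5 Mb/s

print(f"Equivalent 10 bit I frame only bitrate: {i_frame_10bit_rate} Mb/s")
```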

So I hope you can see that 8 bit still makes sense for acquisition. In the future as cameras get less noisy, storage gets cheaper and codecs get better the situation will change. Also if you are studio based and can record uncompressed 10 bit then why not? Do though consider how you are going to store your media in the long term and consider the overheads needed to throw large files over networks or even the extra time it takes to copy big files compared to small files.