
Can I use 8 bit to record S-Log?

My opinion is that while 8 bit, 422 can be used for S-Log, it is not something I would recommend. I'd rather use a cinegamma with 8 bit recording. 10 bit 422 S-Log is another matter altogether: it is well worth using and works very well indeed. It's not so much whether you use 444, 422 or even 420 that matters, but the number of bits you use to record your output.

What you have to consider is this. With 8 bit you have 240 shades of grey from black to super white: of the 256 code values available, the first 16 are reserved for sync and other data, 100% white sits at level 235 and super white at 255, so black to 100% white spans only 219 steps. With Rec-709 standard gamma on an F3 you get about an 8 stop range, so each stop of exposure has about 30 shades of grey. Go to S-Log and you now have around 13 stops of DR, so each stop only has about 18 shades of grey. So with 8 bit S-Log, before you even start to grade, your image can be seriously degraded if there are any flat or near flat surfaces such as walls or the sky in your scene.

Now think about how you expose S-Log. Mid grey sits at 38% when you shoot. If you then grade this to Rec-709 for display on a normal TV, you are going to stretch the lower end of your image by approximately 30%, so when you stretch your 18 steps of S-Log grey to get to Rec-709 you end up with the equivalent of only around 12 shades of grey per stop: less than half of what you would have had if you had originally shot using Rec-709. I'm sure most of us have at some point seen banding on walls or the sky with standard gammas and 8 bit; just imagine what happens when you effectively halve the number of grey shades you have.
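To make the banding mechanism concrete, here is a minimal Python sketch (illustrative numbers only, not a real S-Log to Rec-709 conversion) of what stretching one stop's worth of 8 bit code values does: all 18 shades survive, but they get spread across more output codes, and the empty codes between them are the bands you see.

```python
# One stop of 8 bit S-Log spans roughly 18 code values. Stretching the
# lower end by ~30% and re-quantising spreads those 18 shades across ~23
# output codes, leaving gaps where no shade lands.
slog_stop = range(100, 118)                           # 18 consecutive code values
graded = sorted({round(v * 1.3) for v in slog_stop})  # ~30% stretch, re-quantised

print(len(graded))                 # 18 - still only 18 distinct shades...
print(graded[-1] - graded[0] + 1)  # 23 - ...spread across 23 output codes
# The 5 empty codes in between are what you see on screen as bands.
```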

By way of contrast, consider that 10 bit has 956 grey shades from black to super white: the first 64 code values are used for sync and other data, 100% white is level 940 and super white 1019. So when shooting S-Log in 10 bit you have about 73 grey shades per stop, a four fold improvement over 8 bit S-Log. Even after shooting S-Log and grading to Rec-709, you still have almost twice as many grey shades as if you had originally shot 8 bit Rec-709.
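The arithmetic is easy to check for yourself. A minimal Python sketch using the ranges quoted above:

```python
# Shades of grey per stop for the recording ranges quoted above.
shades = {"8 bit": 255 - 16 + 1,     # levels 16-255  = 240 shades
          "10 bit": 1019 - 64 + 1}   # levels 64-1019 = 956 shades

for depth, total in shades.items():
    print(f"{depth}: Rec-709 (~8 stops) = {total / 8:.1f} per stop, "
          f"S-Log (~13 stops) = {total / 13:.1f} per stop")

# 8 bit:  Rec-709 = 30.0 per stop,  S-Log = 18.5 per stop
# 10 bit: Rec-709 = 119.5 per stop, S-Log = 73.5 per stop
```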

This is a bit of an oversimplification: in a fully optimised workflow you would be grading from 8 bit to 10 bit, and there are ways of taking your original 8 bit master and extrapolating additional grey shades from it through smoothing or other calculations. But the reality is that 8 bits for a 13 stop dynamic range is really not enough.

The whole reason for S-Log is to give us a way to take the 14ish stop range of a typical linear 12 bit camera sensor and squeeze as much of that signal as possible into a form that remains usable and will pass through existing editing and post production workflows without the need for extensive processing such as de-Bayering or RAW conversion. So our signal, which starts at 12 bits, has already been heavily processed to get it from 12 bits to 10. Going from 10 bits down to 8 is a step too far IMHO.
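To illustrate the principle (this is a generic log curve for demonstration only, not Sony's actual S-Log formula), a log-style encode hands every stop of a wide linear range a roughly equal share of the available output levels:

```python
import math

# A toy log-style encode (NOT the real S-Log maths): squeeze a ~14 stop
# linear sensor signal into a 0.0-1.0 output so that every stop gets an
# equal share of the available code values.
def toy_log_encode(linear, stops=14):
    floor = 2.0 ** -stops  # the darkest level we bother to keep
    return (math.log2(max(linear, floor)) + stops) / stops

# A stop either side of 18% grey lands an equal distance apart on the
# output scale, which is what makes log footage gradable:
for lin in (0.09, 0.18, 0.36):
    print(f"linear {lin:.2f} -> encoded {toy_log_encode(lin):.3f}")
```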

Can you see the difference between an 8 bit and 10 bit camera output?

The question is: can you see a difference between a camera with a 10 bit output and one with an 8 bit output? It's being asked a lot right now, particularly in relation to the Sony FS100 and the Sony F3; the FS100 has an 8 bit output while the F3's is 10 bit.

If you're looking at the raw camera output then you will find it just about impossible to see a difference with normal monitoring equipment. This is because internally the cameras process the images using more than 8 bits (probably at least 10 on the FS100; the EX3 is 12 bit) and then convert to 8 or 10 bit for output, so you get a nice smooth mapping of gradations across the full output range.

Then consider that most LCD monitors are not able to display even 8 bits. The vast majority have 6 bit panels, and even a rare 8 bit monitor won't display all 8 bits, as it has to apply gamma correction at 8 bits, which leaves fewer than 8 bits on screen. 10 bit monitors are very rare, and as gamma correction is normally required there is rarely a 1:1 bit for bit mapping of the 10 bit signal, so even these don't show the full 10 bits of the input. So when you view the original material the differences will not normally be visible, and often the only way to determine what the output signal actually contains is with a data analyser that can decode the HDSDi stream and tell you whether the two extra bits carry useful image data or are just padding.
Where the 8 bit / 10 bit difference does become apparent is after grading and post production. I wrote a more in depth article here: Why rendering from 8 bit to 8 bit can be a bad thing to do. But basically, when you start manipulating an 8 bit image you will see banding issues a lot sooner than with 10 bit, due to the reduced number of luma and colour shades in 8 bit. Stretch out or compress 8 bit and some of those shades get removed or shifted, and when the number of steps is borderline to start with, throwing more away will give you problems.

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes to the gamma or gain within the image. You are starting with 8 bits, or 240 shades of grey (levels 16 to 255, assuming recording to 109%), and encoding to 240 shades, so the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let's say, level 128 should now be level 128.5, this can't be done: we can only record whole values, so it gets rounded up or down to the closest whole level. This rounding reduces the total number of shades recorded and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth gradation:

128, 129, 130, 131, 132, 133

Imagine you are doing some grading and your plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5
But we can't record half levels, only whole ones, so for 8 bit these get rounded to the nearest value:

129, 129, 129, 132, 132, 134

You can see how easily banding will occur: our smooth gradation now has some marked steps.
If you render to 10 bit instead you get in-between steps. When the plugin determines that level 128 should become 128.5, this can now be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, approximately, translating to 10 bit, level 128 would be 499 and 128.5 would be 501:
128.5 = 501
129 = 503
129.4 = 505
131.5 = 513
132 = 515
133.5 = 521

So you can see that we now retain in-between steps that are lost when we render to 8 bit, and our gradation remains much smoother.
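For anyone who wants to play with the numbers, here is the same example in a few lines of Python, using the approximate 3.9x scaling from above and round-half-up to match the figures in the text:

```python
# The worked example above: quantise the plugin's fractional levels to
# whole 8 bit codes, then to 10 bit via the approximate 3.9x scaling.
def q(v):
    return int(v + 0.5)  # round half up, matching the text

desired = [128.5, 129, 129.4, 131.5, 132, 133.5]

print([q(v) for v in desired])        # [129, 129, 129, 132, 132, 134]
print([q(v * 3.9) for v in desired])  # [501, 503, 505, 513, 515, 521]
# 8 bit collapses six shades down to three; 10 bit keeps all six distinct.
```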

Whites, super whites and other bits and bobs.

Do you know how your NLE is handling your video? Are your whites white, or whiter than white, or does this sound like a washing powder ad?

In the analog world you shot within the legal range of black to 100% white. It was simple, easy to understand and pretty straightforward: white was white at 100% and that was that. With digital video it all gets a lot more complicated, especially as we move to greater and greater bit depths and extended range recording with fancy gamma curves becomes more common. In addition, computers are used more and more, not just for editing but as the final viewing device for many videos, and this brings additional issues of its own.

First let's look at some key numbers:

8 bit data gives you 256 possible values, 0 to 255.

10 bit data gives you 1024 possible values, 0 to 1023.

Computers use value 0 to represent black and value 255 (or 1023) to represent peak white.

But video is quite different and this is where things get messy:

With 8 bit video the first 16 levels are reserved for sync and other data. Zero, or black, is always level 16 and peak white, 100% white, is always level 235, so the traditional legal black to white range is 16 to 235: just 219 steps of data. In order to get a better looking image with more recording range, many cameras take advantage of the levels above 235. Anything above 235 is "super white", whiter than white in video terms, more than 100%. Cinegammas and Hypergammas take advantage of this extra range, but it's not without its issues; there's no free lunch.

10 bit video normally uses level 64 as black and 940 as peak white. With SMPTE 10 bit extended range you can go down to level 4 for undershoots and up to level 1019 for overshoots, but the legal range is still 64 to 940. So black is always level 64 and peak white always level 940; anything below 64 is a super black, blacker than black, and anything above 940 is brighter than peak white: a super white.

At the moment the big problem with 10 bit extended range (SMPTE 274M 8.12), and also with 8 bit recordings that use the extra levels above 235, is that some codecs and most software still expect to see only the original legal range, so anything recorded beyond that range, particularly below it, can get truncated or clipped. If the signal is converted to RGB, or you add an RGB filter or layer in your NLE, it will almost certainly get clipped, as the computer will take the 100% video range (16-235) and convert it to the 100% computer RGB range (0-255). So you run the risk of losing your super whites altogether. Encoding to another codec can also lead to clipping. FCP and most NLEs will display super blacks and super whites, as these fall within the full 8 or 10 bit ranges used by computer graphics, but further encoding can be problematic because you can't always be sure whether the conversion will use the full recorded range or just the black to white range. Baselight, for example, will only unpack the legal range from a codec, so you need to bring the material into legal range before going into Baselight. So, as we can see, it's important to be sure that your workflow is not truncating or clipping your recorded range back to the nominal legal, 100%, range.
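To see why the RGB conversion is so destructive, here is a simplified Python sketch of the standard 16-235 to 0-255 mapping (real converters may use slightly different maths or dither, but the clipping behaviour is the same):

```python
# Mapping legal range (16-235) onto full range RGB (0-255) pushes anything
# recorded above 235 past the top of what 8 bit RGB can hold.
def video_to_rgb(level):
    rgb = (level - 16) * 255 / 219           # 16 -> 0, 235 -> 255
    return min(max(int(rgb + 0.5), 0), 255)  # clamp to the RGB range

for level in (16, 128, 235, 245, 254):       # 245 and 254 are super whites
    print(f"video {level:3d} -> RGB {video_to_rgb(level)}")
# 245 and 254 both come out at the same RGB 255 as legal white at 235:
# the super white detail is gone for good.
```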

On the other hand, if you are doing material for the web or computer display, where the full 0 to 255 (or 1023) range is used, you often need to use the illegal video levels above 100% white to get whites to look white and not bright grey. A video legal white at 235 just does not look white on a computer screen, where whites are normally displayed at value 255. There are so many different standards across different platforms that it's a complete nightmare. Arri, for example, won't allow the Alexa to record extended range in ProRes because of these issues, even though the Alexa's HDSDi output will output extended range.

This is also an issue when using computer monitors for monitoring in the edit suite. When you look at this web page, or any computer graphics, white is set at value 255 (or 1023), but that would be a super white, an illegal white, for video. As a result, in-range or legal range video often looks dull on a computer monitor, as its whites are less bright than the computer's own whites. The temptation is to grade the video until its whites match the computer's, which leads to illegal levels, clipping, or simply an image that does not look right on a TV or video monitor. You really need to be careful: if you shoot using extended range, make sure your workflow keeps that extended range intact, and then remember to legalise your video back to within legal range if it's going to be broadcast.
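As a final sketch (deliberately crude, for illustration; a real legaliser or highlight grade is far gentler than either of these), here are the two obvious ways of bringing 8 bit super whites back into legal range, and the trade-off between them:

```python
# Two crude ways to make an 8 bit signal that used super whites broadcast
# legal. Clipping throws the highlight detail away; scaling keeps it but
# dims everything, so it really needs a proper regrade instead.
def clip_to_legal(level):
    return min(level, 235)

def scale_to_legal(level):
    # squeeze the whole recorded range (16-255) into legal range (16-235)
    return int((level - 16) * 219 / 239 + 16 + 0.5)

for level in (128, 235, 245, 254):
    print(level, clip_to_legal(level), scale_to_legal(level))
# Clipping flattens everything above 235 to the same value; scaling keeps
# the detail but pulls legal white down to ~217, hence the regrade.
```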