
Can I use 8 bit to record S-Log?

My opinion is that while 8 bit 422 can be used for S-Log, it is not something I would recommend; I'd rather use a cinegamma with 8 bit recording. 10 bit 422 S-Log is another matter altogether: it is well worth using and works very well indeed. It's not so much whether you use 444, 422 or maybe even 420 that matters, but the number of bits you use to record your output.

What you have to consider is this. With 8 bit you have about 240 shades of grey from black to super white. Of the 256 code values available, the first 16 are reserved for sync and data, 100% white sits at code value 235 and super white at 255, so black to 100% white spans only 219 values. With Rec-709, standard gamma, on an F3 you get about an 8 stop range, so each stop of exposure has about 30 shades of grey. When you go to S-Log you have around 13 stops of dynamic range, so each stop only has around 18 shades of grey. So with 8 bit S-Log, before you even start to grade, your image can already be visibly degraded if there are any flat or near flat surfaces like walls or the sky in your scene.
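
If you want to check those sums, here is a rough Python sketch of the arithmetic. Treating shades per stop as an even split is a simplification, since real gamma curves don't distribute code values evenly, but it shows where the 30 and 18 come from:

```python
# Rough sketch of the 8 bit arithmetic above.
# 8 bit video has 256 code values (0-255); these are code values, not "bits".
BLACK = 16          # black level
WHITE = 235         # 100% white
SUPER_WHITE = 255   # super white

shades_black_to_super_white = SUPER_WHITE - BLACK + 1   # ~240 shades
shades_black_to_white = WHITE - BLACK                   # 219 values, black to 100% white

rec709_stops = 8    # rough dynamic range with a standard gamma on the F3
slog_stops = 13     # rough dynamic range with S-Log

print(shades_black_to_super_white / rec709_stops)   # ~30 shades per stop
print(shades_black_to_super_white / slog_stops)     # ~18 shades per stop
```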

Now think about how you expose S-Log. Mid grey sits at 38% when you shoot. If you then grade this to Rec-709 for display on a normal TV, you are going to stretch the lower end of your image by approximately 30%, so when you stretch your 18 steps of S-Log grey to get to Rec-709 you end up with the equivalent of only around 12 shades of grey for each stop. That's less than half of what you would have if you had originally shot using Rec-709. I'm sure most of us have at some point seen banding on walls or the sky with standard gammas and 8 bit; just imagine what might happen if you effectively halve the number of grey shades you have.
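
To picture why the banding gets worse, here is a minimal sketch of what stretching one stop's worth of 8 bit code values does in the grade. The flat 1.5x stretch is just an illustrative number, not a real S-Log to Rec-709 conversion, which would stretch the low end harder:

```python
# Illustrative sketch: stretching a span of 8 bit code values opens gaps.
stretch = 1.5
source_codes = list(range(100, 118))   # 18 consecutive codes, roughly one S-Log stop

output_codes = sorted({round(c * stretch) for c in source_codes})
gaps = [b - a for a, b in zip(output_codes, output_codes[1:])]

print(len(source_codes), "input levels spread over",
      output_codes[-1] - output_codes[0] + 1, "output codes")
print("steps between neighbouring output levels:", gaps)
# There are still only 18 distinct levels, now spread across roughly 27 codes,
# so neighbouring levels are often 2 codes apart instead of 1. That coarser
# quantisation is what shows up as banding on flat walls or sky.
```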

By way of contrast, just consider that 10 bit has 956 grey shades from black to super white. The first 64 code values are reserved for sync and other data, 100% white is at code value 940 and super white at 1019. So when shooting S-Log using 10 bit you have about 73 grey shades per stop, a four fold improvement over 8 bit S-Log, so even after shooting S-Log and grading to Rec-709 there are still almost twice as many grey shades as if you had originally shot at 8 bit Rec-709.
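
The same back-of-the-envelope sum for 10 bit, again just an even split of shades per stop:

```python
# 10 bit video has 1024 code values (0-1023).
BLACK_10 = 64          # black level
WHITE_10 = 940         # 100% white
SUPER_WHITE_10 = 1019  # super white

shades_10bit = SUPER_WHITE_10 - BLACK_10 + 1   # ~956 shades, black to super white

slog_stops = 13
rec709_stops = 8

print(shades_10bit / slog_stops)       # ~73 shades per stop for 10 bit S-Log
print((255 - 16 + 1) / rec709_stops)   # ~30 shades per stop for 8 bit Rec-709
# Even if the S-Log to Rec-709 grade roughly halves the effective shades per
# stop, 10 bit S-Log still comes out ahead of 8 bit Rec-709.
```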

This is a bit of an over simplification, as during the grading process, if your workflow is fully optimised, you would be grading from 8 bit to 10 bit, and there are ways of taking your original 8 bit master and extrapolating additional grey shades from that signal through smoothing or other calculations. But the reality is that 8 bits for a 13 stop dynamic range is really not enough.

The whole reason for S-Log is to give us a way to take the 14-ish stop range of a typical linear 12 bit camera sensor and squeeze as much of that range as possible into a signal that remains usable and will pass through existing editing and post production workflows without the need for extensive processing such as de-Bayering or RAW conversion. So our signal, which starts at 12 bits, has already been heavily processed to get it from 12 bits down to 10. Going from 10 bit down to 8 is a step too far IMHO.

HD, SD and Depth of Field.


I was reminded of this by Perrone Ford on DVINFO.net. Compared to SD cameras, HD cameras appear to have a shallower depth of field. Why is this, and why is it important?

Visually, depth of field is the loss of focus as you move away from the object that you have focussed on. If you have two cameras, one HD and one SD, and they both have the same lens at the same aperture along with sensors of the same size, then the change in focus with distance for both cameras will be exactly the same. However, with the HD camera, because the image is sharper to start with, any small change in focus will be more apparent than with the softer picture from the SD camera. So visually the HD camera will have a shallower depth of field. Now if you take that HD image and convert it to SD, the depth of field appears to increase again. This can be calculated and measured, and is defined by the "circle of confusion".
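
If you want to put numbers on it, here is a quick sketch using the standard hyperfocal and depth of field formulas. The circle of confusion values are purely illustrative assumptions (the right figure depends on sensor size and how the picture is viewed), but they show how halving the acceptable blur circle roughly halves the calculated depth of field:

```python
# Illustrative sketch: same lens, same aperture, same focus distance, only the
# acceptable circle of confusion changes.

def dof_mm(focal_mm, f_number, focus_mm, coc_mm):
    """Classic depth of field from the hyperfocal distance (thin lens model)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return far - near

FOCAL = 50        # mm
APERTURE = 2.8
DISTANCE = 3000   # mm, an assumed interview framing distance

print(dof_mm(FOCAL, APERTURE, DISTANCE, coc_mm=0.02))  # larger "SD" blur circle
print(dof_mm(FOCAL, APERTURE, DISTANCE, coc_mm=0.01))  # smaller "HD" blur circle
# The optics haven't changed at all, yet the calculated depth of field roughly
# halves when the circle of confusion is halved.
```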

So why is this important? Well, let's look at what happens when you shoot an interview or face. The human brain is very good at looking at faces; we "read" faces day in and day out, taking in expressions, skin tone and subtle changes. We use these tiny visual cues to gauge emotion and see how someone is responding to the things that we do. Because of this, any imperfection in the look of a face in a video tends to stand out (that's also why you normally expose for faces). With HD it's quite possible to have a shot of a face where the tip of the person's nose or their ears are in sharp focus while the eyes are slightly soft. With an SD image we would be unlikely to notice this because of the greater depth of field, but HD, with its visually shallower DoF, can show up this small difference in focus and our brain flags it up. Very often you see the HD face and it looks OK, but something in your brain tells you it's not quite right, as the eyes are not quite as sharp as the nose or ears. So this apparently shallower DoF means that you can't just focus on a face with HD: you must focus on the eyes, as that's where we normally look when engaged in a conversation with someone.