Well, I posted here a few days ago about how data is distributed across the S-Log curve. David Williams (thanks David) questioned some of the things in my post, raising valid questions about its accuracy, so I withdrew the post in order to review it further. While the general principles in the post were correct (to the best of my knowledge and research) and I stand by them, some of the numbers given were not quite right, and neither was the data/exposure chart.
Before going further, let's consider the difference between the way a video sensor works and the way our eyes work. A video sensor is a linear device, while our own visual system is logarithmic. Imagine you are in a room with 8 light fittings, each with the same power and light output. You start with one lamp on, then turn on another. When you turn on the second lamp the room does not appear to get twice as bright, even though the amount of light in the room has actually doubled. Now, with two lamps on, what happens when you turn on a third? You wouldn't actually notice much of a change. To see a significant change you would need to turn on two more lamps. With four lamps on, to see a significant difference you would need to turn on a further four; adding just one or two would make little visual difference. This is because our visual system is essentially a logarithmic system.
Now let's think about f-stops. An f-stop (or T-stop) is a doubling or halving of exposure, so this too is a logarithmic scale. If your scene is lit by one light bulb, then to increase the scene brightness by one stop you must double the amount of light, so you would add another bulb. To increase the brightness by a further stop you would have to take your existing two bulbs and double them again to four, and so on: 2, 4, 8, 16, 32, 64…
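To make the doubling concrete, here is a tiny Python sketch of the light-bulb arithmetic above (the `bulbs_for_stop` helper is just for illustration):

```python
# Each f-stop is a doubling of light. Starting from one "light bulb",
# the bulbs needed to reach each successive stop are powers of two.
def bulbs_for_stop(stop):
    """Light needed at a given stop, relative to a single bulb at stop 0."""
    return 2 ** stop

print([bulbs_for_stop(s) for s in range(7)])
# -> [1, 2, 4, 8, 16, 32, 64]
```

Equal steps in stops correspond to equal *ratios* of light, which is exactly what a logarithmic scale means.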
Now, going back to a video sensor, take a look at the illustrative graph below. The horizontal scale is the number of light bulbs in our hypothetical room and the vertical scale is the output from an imaginary video sensor, in percent. Please note that I am trying to illustrate a point; the numbers are not accurate. I'm simply trying to explain something that is perhaps misunderstood by many, because it is difficult to understand or poorly explained elsewhere. The important thing to note is that the plotted blue line is a straight line, not a curve, because the sensor is a linear device.
Now look at this very similar chart. The only difference is that I have added an f-stop scale to the horizontal axis. Remember that one f-stop is a doubling of the amount of light, not simply one more light bulb. I have also changed the vertical scale to code values (often loosely called data bits). To keep things simple I'm going to use something close to 10 bit recording, which actually has 956 usable code values (levels 64 to 1019 out of 1024), but let's just round that up to 1000 to keep life simple for this example.
So we can see that this imaginary video sensor uses code values 0-50 for the first stop, 50-100 for the second, 100-200 for the third, 200-400 for the fourth and 400-800 for the fifth. It is easy to see that each successively brighter stop requires twice as much data as the one below it. Clearly, if you want to record a wide dynamic range with a linear system, you need massive numbers of code values for the highlights, while the all-important mid tones and shadow areas have relatively little data allocated to them. This is obviously not desirable with today's data-limited recording systems; you really want sufficient data allocated to your mid tones so that in post production you can grade them satisfactorily.
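The linear allocation can be sketched in a few lines of Python. This just reproduces the illustrative numbers above (800 code values across 5 stops for an imaginary sensor), not any real camera:

```python
TOTAL = 800   # code values available in this illustration
STOPS = 5     # stops of dynamic range in this illustration

def linear_range_for_stop(stop):
    """Code-value range (low, high) a linear encoding uses for stop 1..STOPS.
    Each brighter stop spans twice the range of the one below it."""
    high = TOTAL / (2 ** (STOPS - stop))
    low = high / 2 if stop > 1 else 0
    return low, high

for s in range(1, STOPS + 1):
    low, high = linear_range_for_stop(s)
    print(f"stop {s}: {low:.0f}-{high:.0f} ({high - low:.0f} values)")
# stop 1: 0-50, stop 2: 50-100, stop 3: 100-200,
# stop 4: 200-400, stop 5: 400-800
```

Note how the brightest stop alone consumes half of all the available code values.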
Now look what happens if we allocate the same amount of data to each stop of exposure. The green line is what you get if, in our imaginary camera, we use 200 code values to record each of our 5 stops of dynamic range. Does the shape of this curve look familiar? The important point is that, compared to the sensor's linear output (the blue line), less and less data is used to record the highlights as the image brightness increases. This mimics the way we see the world and helps ensure that in the mid range, where skin tones normally reside, there is lots of data to play with in post. Our visual system is most acute in the mid range; that's because some of the most important things we see are natural tones: plants, fauna and people. We tend to pay much less attention to highlights, as these are rarely of interest to us, so we can afford to reduce the amount of information in video highlights without the end user really noticing. This technique is used by most video cameras when the knee kicks in and compresses highlights. It's also used by extended gamma curves such as Cinegammas and Hypergammas.
Anyone who has seen a Hypergamma or Cinegamma curve plot will recognise a similar shape. Hypergammas and Cinegammas also use less and less data to record highlights (compared to a linear response) and in many ways achieve a similar improvement in captured dynamic range.
Hypergammas are not the same as S-Log, however. Hypergammas are designed to be usable without grading, even if that is not ideal. Because of this they stay close to standard gammas in the mid range, and it's only really the highlights that are compressed. This also helps with grading when recording with only an 8 bit codec, as less extreme pushing and pulling is needed to get a natural image. However, because the Hypergammas allocate more data in the 60 to 90 percent exposure range to stay close to standard gamma, the highlights have to be more strongly compressed than with S-Log, so there is less highlight data to work with. If we look at the plot below, which now includes an approximate S-Log curve (pink line), you can see that log recording departs much further from a standard gamma in the mid range, so heavy grading will be required to get a natural looking image.
Because of the amount of grading that will normally be done with S-Log, recording the output using a 10 bit recorder is all but essential.
When I wrote this article I spent a lot of time studying the Sony S-Log white paper and reading up on S-Log and gamma curves elsewhere. One thing that I believe leads to some confusion is the way Sony presents the S-Log curve in that document. Exposure is plotted against code values using stops, as opposed to image brightness. This is a little confusing if you are used to seeing traditional gamma curve plots like the ones I have presented above, which plot output against percentage light input. Because stops are themselves a log scale, plotting a log curve against stops makes the S-Log "curve" appear to be a near straight line.
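You can see why a stops axis straightens the curve with the same illustrative log encoding used earlier (200 code values per stop; this is a sketch, not the actual published S-Log formula):

```python
import math

def log_encode(light):
    """Illustrative log encoding: 200 code values per stop of light."""
    return 200 * math.log2(light)

# Plotted against raw light level the output is a curve, but against
# stops (a log scale) it rises by a constant amount per stop:
for stop in range(6):
    light = 2 ** stop
    print(f"stop {stop}: code value {log_encode(light):.0f}")
# each stop adds exactly 200 -> a straight line on a stops axis
```

Constant increase per stop is a straight line on a stops axis, which is exactly how the S-Log plot in the white paper ends up looking almost linear.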
I have not used S-Log on an F3 yet. It will be interesting to see how it compares to Hypergamma in the real world. I’m sure it will bring some advantages as it allows for an 800% exposure range. I welcome any comments or corrections to this article.