
To shoot flat or not to shoot flat?

There is a lot of hype around shooting flat. Shooting flat has become a fashionable way to shoot and many individuals and companies have released camera settings said to provide the flattest images or to maximise the camera dynamic range. Don’t get me wrong, I’m not saying that shooting flat is necessarily wrong or that you shouldn’t shoot flat, but you do need to understand the compromises that can result from shooting flat.

First of all, what is meant by shooting flat? The term comes from the fact that images shot flat look, err, well… flat when viewed on a standard TV or monitor. They have low contrast and often look milky or washed out. Why is this? Well, most TVs and monitors only have a contrast range equivalent to about 7 stops (even a state-of-the-art OLED monitor only manages about 10 to 11 stops). The whole way we broadcast and distribute video is based on this 7 stop range. The majority of HD TVs and monitors use a gamma curve based on REC-709, which also only covers a 6 to 7 stop range. Our own visual system has a dynamic range of up to 20 stops (there is a lot of debate over exactly how big the range really is, and in bright light our dynamic range drops significantly). So we can see a bigger range than most TVs can show: we can see bright clouds in the sky as well as deep shadows, while a TV would struggle to display the same scene.
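To put those stop figures in perspective, each extra stop doubles the contrast ratio, so the gap between a 7 stop display and 20 stop vision is enormous. A quick sketch (purely illustrative):

```python
# Each stop of dynamic range doubles the contrast ratio, so a display
# or scene spanning n stops covers a contrast ratio of 2**n : 1.

def stops_to_contrast(stops: float) -> float:
    """Contrast ratio covered by a given number of photographic stops."""
    return 2.0 ** stops

# A typical REC-709 display (~7 stops) vs human vision (~20 stops in ideal
# conditions -- the exact figure is debated):
print(f"7-stop display: {stops_to_contrast(7):,.0f}:1")    # 128:1
print(f"20-stop vision: {stops_to_contrast(20):,.0f}:1")   # 1,048,576:1
```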

Modern camera sensors have dynamic ranges larger than 7 stops, so we can capture a greater dynamic range than the average monitor can show. Now consider this carefully: if you capture a scene with a 7 stop range and then show that scene on a monitor with a 7 stop range, you will have a very true to life and accurate contrast range. You will have a great looking, high contrast image. This is where having matching gammas in the camera and on the monitor comes into play. Match the camera to the monitor and the pictures will look great: 7 stops in, 7 stops out. But, and it's a big BUT, real world scenes very often have a greater range than 7 stops.

A point to remember here: a TV or monitor has a limited brightness range. It can only ever display at its maximum brightness and its deepest black. Trying to drive it harder with a bigger signal will not make it any brighter.

Feed the monitor an image with a 7 stop range and it will be showing its blackest blacks and its brightest whites.

But what happens if we simply feed a 7 stop monitor with an 11 stop image? The monitor can't produce a brighter picture, so the brightest parts of the displayed scene are no brighter and the darkest parts no darker. The image you see appears to have the same brightness range, but with less contrast: it starts to look flat and uninteresting. The bigger the dynamic range you try to show on your 7 stop monitor, the flatter the image will look. Clearly this is undesirable for direct TV broadcasting etc. So what is normally done is to map the first 5 stops from the camera more or less directly to the first 5 stops of the display, so that the all important shadows and mid-tones have natural looking contrast, and then take the brighter extended range of the camera, which may be 3 or 4 stops, and map it into the remaining 2 stops of the monitor. This is a form of compression. In most cases we don't notice it, as it only affects highlights, and our own visual system tends to concentrate on shadows and mid-tones while largely ignoring highlights. This compression is achieved using techniques such as knee compression and is one of the things that gives video its distinctive electronic look.
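That mapping can be sketched as a simple knee function. The knee point and ranges here are the illustrative numbers from the paragraph above, not any real camera's settings:

```python
# Sketch of knee-style highlight compression: levels below the knee point
# pass through unchanged; levels above it are squeezed so an 11-stop scene
# fits a 7-stop display. Values are in stops above black; the knee point
# and slope are illustrative, not any camera's actual settings.

def knee_compress(scene_stops: float, knee: float = 5.0,
                  scene_max: float = 11.0, display_max: float = 7.0) -> float:
    """Map a scene level (in stops) to a display level (in stops)."""
    if scene_stops <= knee:
        return scene_stops                      # shadows/mids pass straight through
    # Compress everything above the knee into the remaining display headroom.
    slope = (display_max - knee) / (scene_max - knee)
    return knee + (scene_stops - knee) * slope

print(knee_compress(3.0))    # below the knee: unchanged
print(knee_compress(8.0))    # a highlight 3 stops over the knee, squeezed to 1 stop over
print(knee_compress(11.0))   # scene peak lands exactly at the display peak
```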

A slightly different approach to just compressing the highlights is to compress much more of the camera's output. Gamma curves like Sony's cinegammas or hypergammas use compression that gets progressively more aggressive as you go up the exposure range. This allows even greater dynamic ranges to be captured, at the expense of a slight lack of contrast in the viewed image. Taking things to the maximum, we have gamma curves that use log based compression, where each brighter stop is compressed twice as much as the previous one. Log gamma curves like S-Log or Log-C are capable of capturing massive dynamic ranges of anywhere up to 14 stops. View these log compressed images back on a conventional TV or monitor and, because even the mid range is highly compressed, they will look very low contrast and very flat indeed.
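The "each brighter stop compressed twice as much as the previous one" behaviour is exactly what a logarithmic curve does. This toy encoder (not the actual S-Log or Log-C formula) shows that every stop of scene light, whatever its linear size, gets the same slice of the recorded range:

```python
import math

# Toy log-style encoding: linear scene light is mapped to the recorded code
# value in proportion to log2(light), so each successive (doubled) stop of
# light consumes an equal slice of the recording range -- meaning, relative
# to linear light, each stop is compressed twice as much as the one below it.
# Illustrative only; not the real S-Log or Log-C transfer function.

def toy_log_encode(linear: float, stops: float = 14.0) -> float:
    """Map linear light (1.0 = max) to a 0..1 recorded value over `stops` stops."""
    floor = 2.0 ** -stops                     # darkest representable level
    linear = max(linear, floor)
    return (math.log2(linear) + stops) / stops

# Halving the light always moves the recorded value down by the same step:
print(round(toy_log_encode(1.0), 3))      # brightest stop -> 1.0
print(round(toy_log_encode(0.5), 3))      # one stop down  -> 0.929
print(round(toy_log_encode(0.25), 3))     # two stops down -> 0.857
```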

So, if you have followed this article so far, you should understand that we can capture a greater dynamic range than most monitors can display, but that when we do, the image looks uninteresting and flat. If the images look bad, why do it? The benefits of capturing a big dynamic range are that highlights are less likely to look over exposed and that your final image contrast can be adjusted in post production. These are the reasons why it is seen as desirable to shoot flat. But there are several catches. One is that the amount of image noise the camera produces will limit how far you can manipulate your image in post production. The codec you use to record your pictures may also limit how much you can manipulate the image: the bit depth of the codec may result in banding when the image is stretched. And finally, it is quite easy to create a camera profile or setup, for example by artificially raising the shadows, that superficially looks like a flat, high dynamic range image but doesn't actually provide a greater dynamic range.
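The banding catch can be made concrete with a little arithmetic: the fewer code values a flat curve leaves in the tonal band you later stretch, the more visible the steps become. The 15% band figure below is purely an illustrative assumption:

```python
# Why bit depth limits how far a flat image can be stretched: if a flat
# gamma records a tonal band (say, the mid-tones) into a narrow slice of
# code values, stretching that slice back to normal contrast in post
# leaves gaps between output levels -- visible as banding.

def distinct_levels(bit_depth: int, band_fraction: float) -> int:
    """Code values available in a band occupying `band_fraction` of the range."""
    return int((2 ** bit_depth) * band_fraction)

# Assume (illustratively) a flat curve packs the mid-tones into 15% of the range:
print(distinct_levels(8, 0.15))    # 8-bit:  38 levels to stretch from
print(distinct_levels(10, 0.15))   # 10-bit: 153 levels -- far more headroom
```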

Of course there are different degrees of flat. There is super flat log style shooting as well as intermediate flat-ish cinegamma or hypergamma shooting. But if you are going to shoot flat, it is vital that the recorded image coming from the camera will stand up to the kind of post production manipulation you wish to apply to it. This is especially important when using highly compressed codecs such as AVCHD, XDCAM or P2.

When you use a high compression codec it adds noise to the image, on top of any sensor noise. If you create a look in camera, the compression noise is added after the look has been created; as the look is already set, that noise is not really going to change, because you won't be making big changes to the image in post. But if you shoot flat, then when you start manipulating the image the compression noise gets pushed, shoved and stretched, and this can lead to degradation of the image compared to creating the look in camera. In addition, you need more data to record a bigger dynamic range, so a very flat (wide dynamic range) image may be pushing the codec very hard, resulting in even more compression noise and artefacts.

So if you do want to shoot flat you need a camera with very low noise. You also need a robust codec, preferably 10 bit, and you need to ensure that the camera setup or gamma is truly capturing a greater dynamic range, otherwise you're really wasting your time.

Shooting flat is a great tool in the cinematographer's toolbox and, with the right equipment, can bring great benefits in post production flexibility. Most of the modern large sensor cameras, with their low noise sensors and ability to record to high end 10 bit codecs either internally or externally, are excellent tools for shooting flat. But small sensor cameras, with their higher noise levels, do not make the best candidates. In many cases a better result will be obtained by creating your desired look in camera, or at least getting close to the desired look in camera and then just tweaking and fine tuning it in post.

As always, test your workflow. Just because so and so shoots flat with camera A, it doesn't mean that you will get the same result with camera B. Shoot a test before committing to shooting flat on a project, especially if the camera isn't specifically designed and set up for flat shooting. Shooting flat will not turn a poor cinematographer into a great one; in fact it may make things harder for a less experienced operator, as hitting the camera's exposure sweet spot can be more difficult and focussing is trickier with a flat, low contrast image.

 

Exposing when shooting S-Log.

The question over whether to deliberately underexpose or not with S-Log came up recently. I believe that you need to evaluate the entire scene when shooting S-Log and that the often heard “underexpose by a stop” methodology may have some issues. Here’s my take on the situation:

A couple of caveats first: most of my F3 S-Log work has been in indoor situations, as I have been tied to recording to various less than portable 10 bit recording solutions, so I have very often been working with a restricted contrast range. I've only owned S-Log for my F3s for a short while now, so many of my earlier tests were on 3rd party cameras, some of which were beta cameras.

I have not fully tied down my workflow. I’m still investigating external recorders, everything from the Ninja, Ki-Pro, Sound Devices and of course Gemini. I’m leaning very heavily towards the Gemini as I do a lot of 3D and the Gemini LCD makes for a fantastic monitor.
Back to exposure, this is obviously going to be a slightly contentious area as there is no real “correct way to do it”. While I might not agree with pinning skin tones or anything else for that matter to one particular brightness range, that does not mean I’m right and anyone else is wrong, it is just a different approach and methodology. At the end of the day, if it works for you and gets the results you want, then that will be the way you should go, these things are not black and white, right or wrong.
A very un-scientific test that I did a while back was an eye opener for me. I was exploring the latitude of S-Log compared to the F3's cinegammas. I did a couple of very quick shots, which you will find here: http://www.xdcam-user.com/2011/06/pmw-f3-s-log-and-cinegamma-quick-look/
When I filmed these two examples I was looking at dynamic range. I exposed in both cases with the bright whites of the back wall behind the girl just going into clipping, so I could then see how far into the shadows I could still see useable detail. I was not concerned about getting the skin tone exposure correct. When you look at the raw S-Log it really looks pretty shocking; even I wasn't sure how much I would recover from the highlights, and the girl is a good stop overexposed. However, after a very simple grade using only the colour corrector in FCP, I was able to extract a pretty good looking image, and it's amazing how much detail was actually retained in what looked like over exposed highlights. The girl's skin tones, which I measured at over 85IRE, came down very nicely without any issue. A proper grade in a grading suite would, I'm sure, improve them still further.
What this very crude test told me was that you have incredible flexibility over where you put skin tones: you can comfortably move them up and down in post by a quite significant margin. Also, seemingly overexposed S-Log highlights will contain surprisingly large amounts of fully recoverable detail. In the same test I graded the cinegamma material to try to recover the shadow detail that was lost due to the reduced latitude. This involved attempting to pull up the shadow areas. While this was somewhat successful, what became very apparent was the way the noise increased quite dramatically. This is something I have been aware of since I started using cinegammas many years ago: pulling levels up will increase noise.
So… when I expose with cinegammas (as I have done for many years) I have always been very conscious of the noticeable effect on noise that trying to lift underexposed parts of the image has. Very often in the grade, the limiting factor as to how far you can push the image has been the noise floor and noise effects. This has mainly been with Sony EX cameras, which have a 54dB noise floor.
Now with the F3 we have a dilemma! S-Log gives us another 1.5ish stops of dynamic range, but at the expense of a 6dB increase in noise due to the +1 stop increase in sensitivity associated with S-Log.
Let's say, for example, that we shoot a person and underexpose the face by one stop (one stop = 6dB).
If we do this with the cinegammas and then grade the shot, bringing the face up one stop, the noise will increase by 6dB from the base figure of 63dB, giving a final signal to noise ratio of approximately 57dB (with signal to noise, a lower number is worse).
If we do this with S-Log and then grade the shot, bringing the face up one stop, the noise will increase by 6dB from the base of 57dB, giving a final figure of approximately 51dB.
So the S-Log image becomes twice as noisy as the cinegamma material, and therefore, depending on the footage, it is quite possible that you would actually be able to push mid ranges and shadows further with cinegammas than with S-Log in an underexposed situation, due to noise issues. The S-Log and cinegamma curves are almost identical up to just over 50IRE, so latitude performance under 50IRE is essentially the same. See the charts on this page: http://www.xdcam-user.com/2011/05/s-log-a-further-in-depth-look/
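The dB arithmetic in the paragraphs above can be written out explicitly: one stop = 6dB, and every 6dB of lost signal to noise roughly doubles the visible noise.

```python
# One stop of lift in post costs 6 dB of signal-to-noise ratio, and every
# 6 dB lost doubles the noise amplitude. Base figures are those quoted in
# the text (cinegamma ~63 dB, S-Log ~57 dB on the F3).

DB_PER_STOP = 6.0

def snr_after_lift(base_snr_db: float, stops_lifted: float) -> float:
    """Signal-to-noise ratio after lifting exposure in post (lower = noisier)."""
    return base_snr_db - stops_lifted * DB_PER_STOP

def times_noisier(snr_a_db: float, snr_b_db: float) -> float:
    """How many times noisier B is than A (amplitude ratio)."""
    return 10 ** ((snr_a_db - snr_b_db) / 20)

cinegamma = snr_after_lift(63.0, 1.0)   # 63 dB base, 1 stop lift -> 57 dB
slog      = snr_after_lift(57.0, 1.0)   # 57 dB base, 1 stop lift -> 51 dB
print(cinegamma, slog)                   # 57.0 51.0
print(round(times_noisier(cinegamma, slog), 2))   # ~2.0: twice as noisy
```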
If I get some time at IBC I might see if I can set up some tests to show this in practice.
Now given that I have seen for myself how with S-Log skin tones can be pushed down just as much as up in post, I tend to try to evaluate the entire scene and consider how it will be treated in post before choosing how to expose. In particular I don’t want to expose so that the entire scene will end up being lifted by a significant amount, as noise will become a concern. This isn’t always going to be possible as there are many shots where highlights have to be protected, but I don’t believe that you have to set skins etc at any particular narrow brightness range, I tend to let skin ride somewhere between 45IRE and 70IRE depending on the overall scene.
If I can fit the contrast range of the scene into the 11.5 stops of a cinegamma then I will often use the cinegammas over S-Log because of the noise improvement. S-Log comes into its own where you have an extreme contrast range that needs to be captured. However, at the end of the day you do still have to remember that the end display device is unlikely to be able to display more than 7 stops with any accuracy!
One tool I have found very useful is the BlackMagic HDLink box. I often use it to connect to a monitor, as it has the ability to apply LUTs very quickly. If you have a PC connected to the HDLink you can go in and modify the LUT curve in real time and, in effect, do an on-set grade. The HDLink is only $499 USD.
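For anyone curious what "applying a LUT" means at the pixel level, here is a minimal sketch of a 1D LUT lookup with linear interpolation. Boxes like the HDLink do this in hardware with much larger tables; the five-entry table below is a toy example:

```python
# A 1D LUT is just a table of output levels sampled at evenly spaced input
# levels; each pixel value is mapped by interpolating between table entries.

def apply_1d_lut(value: float, lut: list) -> float:
    """Map a 0..1 pixel value through a 1D LUT with linear interpolation."""
    pos = value * (len(lut) - 1)          # fractional position in the table
    i = min(int(pos), len(lut) - 2)       # lower table index (clamped at the top)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A toy 5-entry "contrast boost" LUT (real LUTs use 1024 or more entries):
lut = [0.0, 0.15, 0.5, 0.85, 1.0]
print(apply_1d_lut(0.5, lut))    # mid-grey passes through -> 0.5
print(apply_1d_lut(0.25, lut))   # shadows pulled down -> 0.15
```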

Shooting Snow and other bright scenes.

Well winter is upon us. The north of the UK is seeing some pretty heavy snow fall and it’s due to spread south through the week. I regularly make trips to Norway and Iceland in the winter to shoot the Northern Lights (email me if you want to come) so I am used to shooting in the snow. It can be very difficult. Not only do you have to deal with the cold but also difficult exposure.

First off it’s vital to protect your equipment and investment from the cold weather. A good camera cover is essential, I use Kata covers on my cameras. If you don’t have a proper cover at the very least use a bin liner or other bag to wrap up your camera. If you have a sewing machine you could always use some fleece or waterproof material to make your own cover. If snow is actually falling, it will end up on your lens and probably melt. Most regular lens cloths just smear any water around the lens, leaving you with a blurred image. I find that the best cloth to use in wet conditions is a chamois (shammy) leather. Normally available in car accessory shops these are soft, absorbent leather cloths. Buy a large one, cut it into a couple of smaller pieces, then give it a good wash and you have a couple of excellent lens cloths that will work when wet and won’t damage your lens.

Exposing for snow is tricky. You want it to look bright, but you don't want to overexpose. If your camera has zebras, set them to 95 to 100%. This way you will get a zebra pattern on the snow as it starts to over expose. You also want your snow to look white, so do a manual white balance using clean snow as your white reference. Don't, however, do this at dawn or near sunset, as it will remove the orange light normally found at the ends of the day; in those cases it is best to use a preset white of around 5600K. Don't use cinegammas or hypergammas with bright snow scenes. They are OK for dull or overcast days, provided you do some grading in post, but on bright days, because large areas of your snow scene will be up over 70 to 80% exposure, you will end up with a very flat looking image, as the snow will sit in the compressed part of the exposure curve. You may want to consider using a little bit of negative black gamma to put a bit more contrast into the image.

If the sun is shining (yes, I know this may not happen often in the UK) the overall brightness of your scene may be very high. Remember to try to avoid stopping the iris down too far. With 1/3″ sensor cameras you should aim to stay more open than f5.6, with 1/2″ sensors more open than f8 and with 2/3″ sensors more open than f11. You may need to use the camera's built in ND filters or external ND filters to achieve this, perhaps even a variable ND like the Genus ND Fader. You need to do this to avoid diffraction limiting, which softens the image if the iris is stopped down too much and is particularly noticeable with HD camcorders.
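The diffraction limit follows from the size of the Airy disc, which grows with f-number (d ≈ 2.44 × λ × N). Once the disc is larger than a couple of sensor pixels, fine detail is lost, which is why smaller sensors, with their smaller pixels, hit the limit at wider apertures. A rough calculator, with an assumed green-light wavelength:

```python
# Airy disc diameter in microns for a given f-number. The 550 nm wavelength
# (mid-green light) is an illustrative assumption; small-sensor camcorder
# pixels are only a few microns across, so they are softened by diffraction
# at much wider apertures than large-sensor cameras.

def airy_disc_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Approximate Airy disc diameter in microns: d = 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for f in (4, 5.6, 8, 11, 16):
    print(f"f/{f}: Airy disc ~{airy_disc_um(f):.1f} um")
```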

Finally, at the end of your day of shooting, remember that your camera will be cold. If you take it into a warm environment (car, house, office) condensation will form both on the outside and on the inside. This moisture can damage the delicate electronics in a camcorder, so leave the camera turned off until it has warmed up and ensure it is completely dry before packing it away. This is particularly important if you store your camera in any kind of waterproof case, as moisture may remain trapped inside the case, leading to long term damage. It is a good idea to keep sachets of silica gel in your camera case to absorb any such moisture. In the arctic and other very cold environments the condensation may freeze, covering the camera in ice and making it unusable. In these extreme situations it is sometimes better to leave the camera in the cold rather than repeatedly warming it up and cooling it down.

Have fun, don’t get too cold, oh…  and keep some chemical hand warmers handy to help stop the lens fogging and to keep your fingers from freezing.

Shooting 3D with 2 Cameras and Synchronisation (Camera Rigs)

It is important to understand that no matter how much you slip and slide the clips from your cameras in the edit suite timeline to bring them into sync, if the images captured by the two cameras' sensors are not in sync you may have some big problems. Even if you press the record buttons on the cameras at exactly the same moment, they may not be running in sync. In the edit suite you can only adjust the sync by whole frames, while the cameras may be running half a frame out, and this can be impossible to correct in post production.
Remember that from the moment you turn a camera on it is capturing frames. When you press the record button the recording doesn't start instantly, but at the start of the next full frame, so the synchronisation of the camera is dependent on when the camera was turned on and how long it takes to start working, not on when you press the record button… unless you have genlock, which overrides the camera's own internal clock and matches it to an external reference signal, forcing the camera to run in sync with the sync source, which can be the second camera or a sync generator.
It is possible to shoot 3D with non-sync cameras, but any motion in the scene, or camera movement such as a pan, may lead to strange stereoscopic effects, including distortion of the 3D space, unwanted depth changes and moving objects appearing to float in front of or behind where they should really be.
This doesn’t mean that you can’t use a non-sync camera, just that it is less than ideal and its use will limit the kinds of shots you are able to do.
If you don’t have genlock, a further option is to use a pair of Canon or Sony camcorders with LANC control. It is possible to get special LANC controllers, such as the LANC Shepherd or the controllers listed below:
http://www.berezin.com/3d/Lanc/index.html
http://dsc.ijs.si/3dlancmaster/
http://www.colorcode3d.dk/group.asp?group=42
These work with most camcorders that have a LANC port or AV/R port and provide good sync for periods of up to around 15 minutes at a time. To reset the sync the cameras must be powered off and back on. They work by synchronising the start up of both cameras and then measuring the sync error. The sync won't be perfect, but it will be good enough for most 3D applications. However, as there will always be slight variations in the master oscillators in the cameras, over time the sync will start to drift apart. The controller will tell you how far apart the sync is, and when the error becomes excessive you will need to restart the cameras to bring them back into sync. Typically you get between 10 and 20 minutes of useful synchronisation.
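The drift the controllers report comes straight from the oscillator tolerance arithmetic: each camera's crystal is off by some small number of parts per million, and the mismatch accumulates. A sketch, with an assumed (illustrative) clock mismatch:

```python
# How fast two free-running cameras drift apart: a frequency mismatch of
# `ppm_mismatch` parts per million accumulates linearly with elapsed time.
# The 20 ppm figure and 25 fps frame rate below are illustrative assumptions.

def drift_frames(ppm_mismatch: float, minutes: float, fps: float = 25.0) -> float:
    """Accumulated sync error, in frames, between two free-running cameras."""
    seconds = minutes * 60.0
    return seconds * (ppm_mismatch / 1_000_000.0) * fps

# With a 20 ppm mismatch between the two cameras' clocks, the error after
# 15 minutes is already approaching half a frame -- consistent with the
# 10 to 20 minute useful window described above:
print(round(drift_frames(20, 15), 2))   # ~0.45 frames out
```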
