
Lastolite EzyBalance Calibration Card – Pop-up grey card and 90% white card. Review.

Lastolite EzyBalance pop up grey card.

Fed up with carrying large or bulky grey cards that get bent, creased, dirty or faded? Why not try one of the great Lastolite pop-up grey cards? I have the 30cm 18% grey pop-up card and it works really well. When folded it’s only about 12cm across so it takes up hardly any space, and it comes in a handy zip-up case. This is so much easier to carry and transport than traditional rigid cards. The back of the target is 90% white. Both the grey and white targets appear to be very accurate, and the matte surface of the grey card helps eliminate hot spots and reflections. There is a cross-hair style focussing target in the centre of each side if you need to check focus. They come in different sizes: if you want a larger one there are also 50cm and 75cm versions, plus there is even an underwater version. Do note that they come in both 18% and 12% shades of grey. Really handy if shooting with SLog or for setting white balance. If you are working with a video camera you want the 18% grey version, but you may need the 12% version if calibrating a light meter etc. A simple, low cost item that works really well. Recommended!

http://www.lastolite.co.uk/ezybalance-grey-wht-card-03m-lllr1250

Whites, Super Whites and other Bits and bobs.

Do you know how your NLE is handling your video? Are your whites white, or whiter than white? Or does this just sound like a washing powder ad?

In the analogue world you shot within the legal range of black to 100% white. It was simple, easy to understand and pretty straightforward. White was white at 100% and that was that. With digital video it all gets a lot more complicated, especially as we move to greater and greater bit depths and the use of extended range recording with fancy gamma curves becomes more common. In addition, computers are used more and more not just for editing but also as the final viewing device for many videos, and this brings additional issues of its own.

First let’s look at some key numbers:

8 bit data gives you 256 possible values 0 to 255.

10 bit data gives you 1024 possible values, 0 to 1023.

Computers use value 0 to represent black and value 255 (or 1023 in 10 bit) to represent peak white.

But video is quite different and this is where things get messy:

With 8 bit video the first 16 values are reserved for sync and other data. Zero or black is always value 16 and peak white or 100% white is always value 235, so the traditional legal black-to-white range is 16 to 235, a span of just 219 steps. Now, in order to get a better looking image with more recording range, many cameras take advantage of the values above 235. Anything above 235 is “super white” or whiter than white in video terms, more than 100%. Cinegammas and Hypergammas take advantage of this extra range, but it’s not without its issues; there’s no free lunch.

10 bit video normally uses value 64 as black and 940 as peak white. With SMPTE 10-bit extended range you can go down to value 4 for undershoots and up to value 1019 for overshoots, but the legal range is still 64-940. So black is always value 64 and peak white always value 940. Anything below 64 is a super black or blacker than black, and anything above 940 is brighter than peak white, a super white.
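These legal and extended ranges can be sketched in a few lines of Python. The numbers follow the figures above; note that they are code values, not bits:

```python
# Nominal video code values at each bit depth (per the figures above).
RANGES = {
    8:  {"black": 16, "white": 235},    # legal 8-bit range: 16-235
    10: {"black": 64, "white": 940},    # legal 10-bit range: 64-940
}

def classify(level, bits=8):
    """Classify a code value as legal, super white or super black."""
    r = RANGES[bits]
    if level < r["black"]:
        return "super black"
    if level > r["white"]:
        return "super white"
    return "legal"

print(classify(245))       # super white (above 235)
print(classify(950, 10))   # super white (above 940)
print(classify(100))       # legal
```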

At the moment the big problem with 10 bit extended range (SMPTE 274M 8.12), and also with 8 bit that uses the extra values above 235, is that some codecs and most software still expect to see the original legal range, so anything recorded beyond that range, particularly below it, can get truncated or clipped. If it is converted to RGB, or you add an RGB filter or layer in your NLE, it will almost certainly get clipped, as the computer will take the 100% video range (16-235) and convert it to the 100% computer RGB range (0-255). So you run the risk of losing your super whites altogether.

Encoding to another codec can also lead to clipping. FCP and most NLEs will display super blacks and super whites, as these fall within the full 8 or 10 bit ranges used by computer graphics, but further encoding can be problematic as you can’t always be sure whether the conversion will use the full recorded range or just the black-to-white range. Baselight, for example, will only unpack the legal range from a codec, so you need to bring the material into legal range before going into Baselight. So, as we can see, it’s important to be sure that your workflow is not truncating or clipping your recorded range back to the nominal legal, 100% range.
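The clipping described above is easy to demonstrate. This sketch applies the standard 8-bit video-to-RGB scaling (16 maps to 0, 235 maps to 255); anything recorded outside the legal range lands on exactly the same output value as legal black or peak white, so the extra detail is gone:

```python
def video_to_rgb(y):
    # Map the legal 8-bit video range 16-235 onto the full RGB
    # range 0-255, clipping anything outside it - which is exactly
    # what a careless RGB conversion does to super whites and blacks.
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

print(video_to_rgb(235))  # legal peak white -> 255
print(video_to_rgb(250))  # super white -> also 255, detail lost
print(video_to_rgb(16))   # black -> 0
print(video_to_rgb(8))    # super black -> also 0, crushed
```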

On the other hand, if you are doing stuff for the web or computer display, where the full 0 to 255 (or 0 to 1023) range is used, then you often need to use the illegal video levels above 100% white to get whites to look white and not bright grey! A video-legal white at 235 just does not look white on a computer screen, where whites are normally displayed using value 255. There are so many different standards across different platforms that it’s a complete nightmare. Arri with the Alexa, for example, won’t allow you to record extended range using ProRes because of these issues, while the Alexa’s HDSDI output will output extended range.

This is also an issue when using computer monitors for monitoring in the edit suite. When you look at this web page, or any computer graphics, white is set at value 255 (or 1023). But that would be a super white, an illegal white, for video. As a result, in-range or legal-range videos often look dull when viewed on a computer monitor, as their whites will be less bright than the computer’s own whites. The temptation therefore is to grade the video to make the whites look as bright as the computer’s whites, which leads to illegal levels, clipping, or simply an image that does not look right on a TV or video monitor. You really need to be very careful to ensure that if you shoot using extended range your workflow keeps that extended range intact, and then you need to remember to legalise your video back to within legal range if it’s going to be broadcast.
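One simple way to legalise full-range computer material is to rescale it back into the 16-235 range; the arithmetic is just the inverse of the RGB conversion. (A broadcast legaliser may soft-clip instead of rescaling; this is only a sketch of the basic maths.)

```python
def full_to_legal(v):
    # Squeeze full-range values 0-255 into the legal 16-235 video range.
    return round(v * 219 / 255) + 16

print(full_to_legal(255))  # computer white -> 235, legal peak white
print(full_to_legal(0))    # computer black -> 16, legal black
print(full_to_legal(128))  # mid grey -> 126
```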

Are Cosmic Rays Damaging my camera and flash memory?

Earth is being constantly bombarded by charged particles from outer space. Many of these cosmic rays come from exploding stars in distant galaxies. Despite being incredibly small, some of these particles are travelling very fast and carry a lot of energy for their size. Every now and then one of them will pass through your camcorder. What happens to CMOS and CCD sensors, as well as flash memory, is that the energetic particle punches a small hole through the insulator of a pixel or memory cell. In practice, charge can then leak from the pixel to the substrate, or from the substrate to the pixel. In the dark parts of an image the number of photons hitting the sensor is extremely small, and each photon (in a perfect sensor) gets turned into one electron. So it doesn’t take much of a leak for enough additional electrons to seep through the hole in the insulation and give a false, bright readout. With a very small leak the pixel may still be usable, simply by adding an offset to the readout to account for the elevated black level. In more severe cases the pixel will be flooded with leaked electrons and appear white, in which case the masking circuits should read out an adjacent pixel instead.

For a computer running with big voltage or charge swings between 1s and 0s this small leakage current is largely inconsequential, but it does not take much to upset the readout of a sensor when you’re only talking about a handful of electrons. CMOS sensors are easier to mask, as each pixel is addressed individually, and during camera start-up it is normal to scan the sensor looking for excessively “hot” pixels. In addition, many CMOS sensors incorporate pixel-level noise reduction that takes a snapshot of the pixel’s dark voltage and subtracts it from the exposed voltage to reduce noise. A side effect of this is that it masks hot pixels quite effectively. Because a CCD’s output is read out through the entire sensor, masking is harder to do, so you often have to run a special masking routine to detect and mask hot pixels.

A single hot pixel may not sound like much, but if it’s right in the middle of the frame you will see it winking away at you every time that part of the scene is not brightly illuminated, and on dark scenes it will stick out like a sore thumb. Thankfully, masking circuits are very effective at either zeroing out the raised signal level or reading out an adjacent pixel instead.
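The “read out an adjacent pixel” trick can be sketched in software. Real cameras do this in hardware at read-out; this toy version simply replaces each flagged pixel with the median of its neighbours, and the function name and approach are just illustrative:

```python
import statistics

def mask_hot_pixels(frame, hot):
    """Replace each flagged hot pixel with the median of its neighbours."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y, x in hot:
        neighbours = [frame[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
        out[y][x] = statistics.median(neighbours)
    return out

# A dark 3x3 patch with one stuck-bright pixel in the middle:
frame = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(mask_hot_pixels(frame, [(1, 1)]))  # centre pixel masked back to 10
```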

Flash memory can suffer these same insulation holes. There are two common types of flash memory, SLC and MLC. Single Level Cells have two states, on or off: any charge means on, no charge means off. A small amount of leakage would, in the short term, have minimal impact, as it could take months or years for the cell to fully discharge, and even then there is a 50/50 chance that the empty cell will still give an accurate output, as it may have been empty to start with. Even so, in the long term you could lose data, and a big insulation leak could discharge a cell quite quickly. MLC, or Multi Level Cells, are much more problematic. As the name suggests, these cells can have several states, each state defined by a specific charge range, so one cell can store several bits of data. A small leak in an MLC cell can quickly shift the state of the cell from one level to the next, corrupting the data.
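The difference in vulnerability can be shown with a toy model: two bits per MLC cell means four narrow charge bands, so a small leak can drop the charge into the next band and silently change the stored bits, while an SLC cell that reads any significant charge as “on” shrugs the same leak off. The thresholds below are made up purely for illustration:

```python
def decode_mlc(charge):
    # Toy 2-bit MLC: four charge bands at 0-25%, 25-50%, 50-75%, 75-100%.
    return min(3, int(charge * 4))

def decode_slc(charge):
    # Toy SLC: any significant charge reads as 1.
    return 1 if charge > 0.1 else 0

stored = 0.55            # programmed just above an MLC band boundary
leaked = stored - 0.07   # a small charge leak over time

print(decode_mlc(stored), decode_mlc(leaked))  # 2 then 1: data silently corrupted
print(decode_slc(stored), decode_slc(leaked))  # 1 then 1: still intact
```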

The Earth’s magnetic field concentrates these cosmic rays towards the north and south poles. Our atmosphere does provide some protection, but some of these particles can pass right through the Earth, so lead shielding etc. has no significant effect unless it is several feet thick. Your camera is most at risk when flying on polar routes. On an HD camera you can expect to have 3 or 4 pixels damaged during a year at sea level; with a CMOS camera you may never see them, while with a CCD camera you may only see them with gain switched in.

SxS Pro cards (the blue ones) are SLC; SxS-1 cards (the orange ones) use MLC, which works out cheaper because fewer cells are required to store the same amount of data. Most consumer flash memory is MLC. So be warned: storing data long term on flash memory may not be as safe as you might think!

The relationship between White Balance and the Matrix.

So… you want to change the look of the colour in your pictures but are not sure how to do it. One of the first things you need to understand is the relationship between white balance and the colour matrix. They are two very different things, with two different jobs. As its name implies, white balance is designed to ensure that whites within the image are white, even when shooting under lighting of different colour temperatures. When you shoot indoors under tungsten lights (you know, the ones the EU have decided you can no longer buy) the light is very orange. When you shoot outside under sunlight the light is very blue. Our eyes adjust for this very well, so we barely notice the difference, but an electronic video camera is very sensitive to these changes. When you point a video camera at a white or grey card and do a manual white balance, the camera adjusts the gain of the red, green and blue channels to minimise the amount of colour in areas of white (or grey) so that they do in fact appear white, i.e. with no colour. So the important thing to remember is that white balance is about eliminating colour in whites and greys.
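The per-channel gain adjustment described above can be sketched in a few lines. Normalising red and blue to the green channel is an assumption here (conventions vary), but the idea is exactly as described: scale the channels until the grey card reads neutral:

```python
def white_balance_gains(r, g, b):
    """Gains that make a grey-card patch read neutral.
    Red and blue are normalised to green (a common convention)."""
    return g / r, 1.0, g / b

# A grey card shot under warm tungsten light: too much red, not enough blue.
r, g, b = 180, 120, 60
r_gain, g_gain, b_gain = white_balance_gains(r, g, b)
print(r_gain, b_gain)                        # red gained down, blue gained up
print(r * r_gain, g * g_gain, b * b_gain)    # all 120: the card now reads grey
```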

The matrix, however, deals purely with the saturated parts of the image, the areas where there is colour. It works by defining the ratio in which each colour is mixed with its complementary colours. So changing the white balance does not alter the matrix, and changing the matrix does not alter the white balance (whites will still be white). What changing the matrix will do is change the hue of the image, so you could make greens look bluer, for example, or reds more green.

So if you want to make your pictures look warmer (more orange or red) overall, you would do this by offsetting the white balance, since in a warm picture even the whites should appear slightly orange. This can be done electronically, by adding an offset to the colour temperature settings, or by using a warming card, which is a very slightly blue card. If you want to make the reds richer in your pictures you would use the matrix, as this allows you to make the reds stronger relative to the other colours while whites stay white.
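The “whites stay white” behaviour falls straight out of the arithmetic: if each row of the 3x3 matrix sums to one, any neutral input (R = G = B) passes through unchanged, while saturated colours get pushed around. The coefficients below are arbitrary illustration values, not any real camera preset:

```python
# A toy colour matrix: each row sums to 1, so neutrals are untouched
# while saturated colours are boosted.
M = [
    [1.2, -0.1, -0.1],
    [-0.1, 1.2, -0.1],
    [-0.1, -0.1, 1.2],
]

def apply_matrix(rgb):
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

print(apply_matrix((200, 200, 200)))  # neutral white/grey: unchanged
print(apply_matrix((200, 50, 50)))    # red: pushed further from the other channels
```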

PDW 700 Native White Balance

The PDW-700 cameras are balanced for daylight optically and then corrected electronically for tungsten etc.

Traditionally cameras were balanced for tungsten, and colour correction (CC) optical filters were then added to get to daylight. This was done because CC filters absorb light and thus make the camera less sensitive: when shooting outdoors in daylight sensitivity is normally not an issue, while shooting indoors under tungsten light you used to need every bit of sensitivity you could get.

The downside to this approach is that tungsten light contains very little blue, so to get a natural picture the blue channel was often running at quite a high level of gain, which increases noise in the blue channel and thus overall noise. In addition, when you rotated in the CC filters to get to daylight the sensitivity of the camera was reduced, so you did not have constant gain.

With the PDW-700 (and also the F350, I believe) the cameras are essentially balanced for daylight without the use of any CC filters, which helps reduce noise in the blue channel. For tungsten shooting you then electronically re-balance the camera. By doing this the overall sensitivity of the camera is constant whether shooting at 3.2K or 5.6K, and you only get additional blue channel noise while shooting under tungsten. If you are worried by blue channel noise you can always correct from daylight to tungsten with an optical CC filter (an 80A) and leave the camera set to daylight, although this will reduce the system’s overall sensitivity by around one and a half stops.