
Sensor sizes, where do the imperial sizes like 2/3″ or 4/3″ come from?

Video sensor size measurement originates from the first tube cameras, where the size designation related to the outside diameter of the glass tube. The area of the face of the tube used to create the actual image was much smaller, typically about two thirds of the tube's outside diameter. So a 1″ tube would give an active area roughly 2/3″ in diameter, within which you would have a 4:3 frame with a 16mm diagonal.

An old 2/3″ tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm, giving an 11mm diagonal. This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor. A 1/2″ sensor has an 8mm diagonal and a 1″ sensor a 16mm diagonal.

Yes, it's confusing, but the same 2/3″ lenses designed for tube cameras in the 1950s can still be used today on a modern 2/3″ video camera and will give the same field of view as they did back then. So the sizes have stuck, even though they have little relationship to the physical size of a modern sensor. A modern 2/3″ sensor is nowhere near 2/3 of an inch across the diagonal.

This is why some manufacturers are now using the term "1 inch type", meaning the active area is equivalent to that of an old 1″ diameter Vidicon/Saticon/Plumbicon tube from the 1950s.

For comparison:

1/3″ = 6mm diag.
1/2″ = 8mm
2/3″ = 11mm
1″ = 16mm
4/3″ = 22mm

A camera with a Super 35mm sensor would be equivalent to approx. 28-31mm
APS-C would be approx 30mm
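If you find yourself converting between these designations, here is a minimal Python sketch of the lookup. The dictionary values are simply the approximate diagonals listed above, and the crop_factor helper is purely illustrative:

```python
# Approximate 4:3 active-area diagonals (in mm) for the imperial "type"
# designations listed above. These are nominal figures, not exact sensor specs.
TYPE_DIAGONALS_MM = {
    '1/3"': 6.0,
    '1/2"': 8.0,
    '2/3"': 11.0,
    '1"': 16.0,
    '4/3"': 22.0,
}

def crop_factor(reference_diag_mm: float, sensor_diag_mm: float) -> float:
    """Rough field-of-view crop factor between two sensor diagonals."""
    return reference_diag_mm / sensor_diag_mm

# Example: the same lens looks about 1.45x "longer" on a 2/3" chip than on a 1" type.
print(round(crop_factor(TYPE_DIAGONALS_MM['1"'], TYPE_DIAGONALS_MM['2/3"']), 2))
```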

PMW-F3 and FS100 Pixel Count Revealed.

This came up over on DVInfo.

An F3 user was given access to the service manual in order to remove a stuck pixel on their F3. The service manual shows that you can address pixels manually to mask them, with pixel positions running from 1 to 2468 horizontally and 1 to 1398 vertically. That works out at 2468 x 1398 = 3,450,264 pixels, which ties in nicely with the published specification of the F3 of 3.45 million pixels.

At the LLB (Sound, Light and Vision) trade fair in Stockholm this week we had both an SRW-9000PL and a PMW-F3 side by side on the stand, both connected to matching monitors. After changing a couple of basic Picture Profile settings on the F3 (Cinegamma 1, Cinema Matrix), just looking at the monitors it was impossible to tell which was which.

Measuring Resolution, Nyquist and Aliasing.

When measuring the resolution of a well designed video camera, you never want to see resolution significantly higher than HALF of the sensor's pixel count. Why is this? Why don't I get 1920x1080 resolution from an EX1, which we know has 1920x1080 pixels? Why is the measured resolution often only around half to three quarters of what you would expect?
In a well designed video camera there should be an optical low pass filter in front of the sensor that prevents frequencies above approximately half of the sensor's native resolution from reaching the sensor. This filter does not have an instantaneous cut off; instead it attenuates fine detail by ever increasing amounts, centred somewhere around the Nyquist limit for the sensor. The Nyquist limit is normally half of the pixel count for a 3 chip camera, or somewhat less than this for a Bayer sensor. As a result the measured resolution gradually tails off somewhere a little above Nyquist, or half of the expected pixel resolution. But why is this?
It is theoretically possible for a sensor to resolve an image at its full pixel resolution. If you could line up the black and white lines of a test chart perfectly with the pixels on a 1920x1080 sensor, then you could resolve 1920x1080 lines. But what happens when those lines no longer line up perfectly with the pixels? Let's imagine that each line is offset by exactly half a pixel. What would you see? Each pixel would see half of a black line and half of a white line, so each pixel would see 50% white and 50% black and output mid grey. With the adjacent pixels all seeing the same thing, they would all output mid grey. So by panning the image by half a pixel, instead of 1920x1080 black and white lines all we would see is a totally grey frame. As you continued to shift the chart relative to the pixels, say by panning across it, it would flicker between pin sharp lines and plain grey. If the camera was not perfectly aligned with the chart, some of the image would appear grey, or different shades of grey depending on the exact pixel to chart alignment, while other parts might show distinct black and white lines.
This is aliasing. It's not nice to look at and can in effect reduce the resolution of the final image to zero. To counter it you deliberately reduce the system resolution (lens + sensor) to around half the pixel count, so that it is impossible for any one pixel to see only one object. By blurring the image across two pixels you ensure that aliasing won't occur. It should also be noted that the same thing can happen with a display or monitor, so trying to show a 1920x1080 image on a 1920x1080 monitor can have the same effect.
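To make that half-pixel thought experiment concrete, here is a minimal sketch of an idealised sensor with no optical low pass filter, where each pixel simply averages the light falling across it. The function name and oversampling factor are just for illustration:

```python
import numpy as np

def sample_line_pattern(pixels: int, phase_px: float, oversample: int = 100) -> np.ndarray:
    """Values read by a row of pixels viewing 1-pixel-wide black/white lines
    shifted by phase_px pixels. 0.0 = black, 1.0 = white. Each pixel simply
    averages (integrates) the light falling across its width."""
    x = np.arange(pixels * oversample) / oversample + phase_px
    pattern = (np.floor(x) % 2).astype(float)             # alternating black/white lines
    return pattern.reshape(pixels, oversample).mean(axis=1)

print(sample_line_pattern(8, 0.0))  # lines aligned with pixels: full contrast 0,1,0,1...
print(sample_line_pattern(8, 0.5))  # half-pixel offset: every pixel reads ~0.5 (plain grey)
```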
When I did my recent F3 resolution tests I used a term called MTF, or modulation transfer function, which measures how much of the contrast between adjacent black and white lines on the chart the camera preserves. MTF50 is the point where there is 50% of the maximum contrast difference between the black and white lines on the test chart.
When visually observing a resolution chart you can see the point where the lines on the chart can no longer be distinguished from one another. This is the resolution vanishing point and is typically somewhere around MTF15 to MTF5, i.e. the contrast between the black and white lines becomes so low that you can no longer tell one from the other. The problem with this is that as you are looking for the point where you can no longer see any difference, you are attempting to measure the invisible, so it is prone to gross inaccuracies. In addition, the contrast at MTF10, around the vanishing point, will be very, very low, so in a real world image you would often struggle to see fine detail at MTF10 unless it was made of strong black and white edges.
For resolution tests a more consistent result can therefore be obtained by measuring the point at which the contrast between the black and white lines on the chart falls to 50% of maximum, or MTF50 (as resolution decreases, so too does contrast). So while MTF50 does not determine the ultimate resolution of the system, it gives a reliable performance indicator that is repeatable and consistent from test to test. What it will tell you is how sharp one camera will appear compared to the next.
As the Nyquist frequency is half the sampling frequency of the system, for a 1920x1080 sensor anything over 540 LP/ph will potentially alias, so we don't want lots of detail above this. As optical low pass filters cannot instantly cut off unwanted frequencies, there will be a gradual resolution tail-off spanning the Nyquist frequency, and there is a fine balance between getting a sharp image and excessive aliasing. In addition, as real world images are rarely black and white lines (square waves) or fixed high contrast patterns, you can afford to push things a little above Nyquist to gain some extra sharpness. A well designed 1920x1080 HD video camera should resolve around 1000 TVL. This is where seeing the MTF curve helps, as it's important to see how quickly the resolution is attenuated past MTF50.
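As a quick sanity check on those numbers, here is a tiny sketch of the Nyquist arithmetic for a 1080-line sensor (the helper names are just for illustration):

```python
def nyquist_lp_per_ph(vertical_pixels: int) -> float:
    """Nyquist limit in line pairs per picture height: half the vertical sample count."""
    return vertical_pixels / 2

def tvl_from_lp(lp_per_ph: float) -> float:
    """TV lines count both the black and the white lines, so one line pair = 2 TVL."""
    return 2 * lp_per_ph

nyq = nyquist_lp_per_ph(1080)
print(nyq, tvl_from_lp(nyq))  # 540.0 1080.0 -- hence ~1000 TVL as a realistic ceiling
```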
With Bayer pattern sensors it's even more problematic, due to the reduced pixel count of the R and B samples compared to G.
The resolution of the EX1 and F3 is excellent for a 1080 camera. Cameras that boast resolutions significantly higher than 1000 TVL will have aliasing issues; indeed the EX1/EX3 can alias in some situations, as does the F3. These cameras are right at the limits of what will allow for a good, sharp image at 1920x1080.

Are Cosmic Rays Damaging my camera and flash memory?

Earth is being constantly bombarded by charged particles from outer space. Many of these cosmic rays come from exploding stars in distant galaxies. Despite being incredibly small, some of these particles are travelling very fast and carry a lot of energy for their size. Every now and then one of these particles will pass through your camcorder. What happens to both CMOS and CCD sensors, as well as flash memory, is that the energetic particle punches a small hole through the insulator of the pixel or memory cell. In practice, charge can then leak from the pixel to the substrate or from the substrate to the pixel. In the dark parts of an image the number of photons hitting the sensor is extremely small, and each photon (in a perfect sensor) gets turned into an electron. It doesn't take much of a leak for enough additional electrons to seep through the hole in the insulation into the pixel and give a false, bright readout. With a very small leak the pixel may still be usable, simply by adding an offset to the readout to account for the elevated black level. In more severe cases the pixel will be flooded with leaked electrons and appear white; in this case the masking circuits should read out an adjacent pixel instead.

For a computer running with big voltage/charge swings between 1s and 0s this small leakage current is largely inconsequential, but it does not take much to upset the readout of a sensor when you're only talking about a handful of electrons. CMOS sensors are easier to mask as each pixel is addressed individually, and during camera start up it is normal to scan the sensor looking for excessively "hot" pixels. In addition, many CMOS sensors incorporate pixel level noise reduction that takes a snapshot of the pixel's dark voltage and subtracts it from the exposed voltage to reduce noise. A side effect of this is that it masks hot pixels quite effectively. Because a CCD's output is read out through the entire sensor, masking is harder to do, so you often have to run a special masking routine to detect and mask hot pixels.

A single hot pixel may not sound like much, but if it's right in the middle of the frame, every time that part of your scene is not brightly illuminated you will see it winking away at you, and on dark scenes it will stick out like a sore thumb. Thankfully masking circuits are very effective at either zeroing out the raised signal level or reading out an adjacent pixel.

Flash memory can also suffer these same insulation holes. There are two common types of flash memory, SLC and MLC. Single Level Cells have two states, on or off: any charge means on and no charge means off. A small amount of leakage would have minimal impact in the short term, as it could take months or years for the cell to fully discharge, and even then there is a 50/50 chance that the empty cell would still give an accurate output, as it may have been empty to start with. Even so, in the long term you could lose data, and a big insulation leak could discharge a cell quite quickly. MLC, or Multi Level Cells, are much more problematic. As the name suggests, these cells can have several states, each state defined by a specific charge range, so one cell can store several bits of data. A small leak in an MLC cell can quickly shift the state of the cell from one level to the next, corrupting the data by changing the voltage.

The Earth's magnetic field concentrates these cosmic rays towards the north and south poles. Our atmosphere does provide some protection, but some of these particles can pass right through the Earth, so lead shielding and the like has no significant effect unless it is several feet thick. Your camera is most at risk when flying on polar routes. On an HD camera you can expect 3 or 4 pixels to be damaged over a year at sea level; with a CMOS camera you may never see them, while with a CCD camera you may only see them with gain switched in.

SxS Pro cards (the blue ones) are SLC; SxS-1 cards (the orange ones) use MLC, as MLC works out cheaper because fewer cells are required to store the same amount of data. Most consumer flash memory is MLC. So be warned: storing data long term on flash memory may not be as safe as you might think!

When is 4:4:4 not really 4:4:4.

The new Sony F3 will be landing in end users' hands very soon. One of the camera's upgrade options is a 4:4:4 RGB output, but is it really 4:4:4 or is it something else?

4:4:4 should mean no chroma subsampling, i.e. the same number of samples for the R, G and B channels. This would be quite easy to achieve with a 3 chip camera, as each of the 3 chips has the same number of pixels, but what about a Bayer sensor as used on the F3 (and other Bayer cameras too, for that matter)?

If the sensor subsamples B and R compared to G in the aerial image (a Bayer matrix has two G samples for each R and B), then no matter how you interpolate those samples, the B and R are still subsampled and data is missing. Depending on the resolution of the sensor, even the G may be subsampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image. So for 2K that's 2K of R, 2K of G and 2K of B. For a Bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3 chip design with a pixel for each sample on each of the R, G and B sensors. The F3's sensor appears to have nowhere near this number of pixels; rumour puts it at around 2.5k x 1.5k.
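To put some rough numbers on that, here is a minimal sketch comparing the per-channel photosite counts of a Bayer sensor with what a true 2K 4:4:4 frame requires. The 2.5k x 1.5k figure is only the rumoured sensor size quoted above, not a confirmed specification:

```python
def bayer_channel_samples(width: int, height: int) -> dict:
    """Photosites per colour channel for a Bayer sensor: each 2x2 block holds 2 G, 1 R, 1 B."""
    total = width * height
    return {"G": total // 2, "R": total // 4, "B": total // 4}

sensor = bayer_channel_samples(2500, 1500)   # {'G': 1875000, 'R': 937500, 'B': 937500}
true_444_per_channel = 2048 * 1080           # ~2.2 million samples per channel for a 2K (2048x1080) 4:4:4 frame
print(sensor, true_444_per_channel)          # every channel falls short, R and B by a wide margin
```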

If it's anything less than one pixel per colour sample, then while the signal coming down the cable may carry an equal number of R, G and B data streams, those streams won't contain equal amounts of picture information for each colour: the resolution of the B and R channels will be lower than the green. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10 bit HDSDI outputs that only contain 8 bits of data; it might be a 10 bit stream, but the data is only 8 bit. It's like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD, even if it has been upscaled to fill in all the missing bits.

Now don't get me wrong, I'm not saying that there won't be advantages to getting the 4:4:4 output option. By reading as much information as possible from the sensor prior to compression there should be an improvement over the 4:2:2 HDSDI output, but it won't be the same as the 4:4:4 output from an F35, where there is a pixel for every colour sample. But then the price of the F3 isn't the same as the F35 either!

When is 4k really 4k, Bayer Sensors and resolution.

First let's clarify a couple of terms. Resolution can be expressed in two ways: as pixel resolution, i.e. how many individual pixels there are on the sensor, or as TV lines (TVL/ph), i.e. how many individual lines can actually be seen. If you point a camera at a resolution chart, what you are measuring is the point at which you can no longer discern one black line from the next. TVL/ph is the resolution normalised for the picture height, so aspect ratio does not confuse the equation, and it is a measure of the actual resolution of the camera system. With video cameras TVL/ph is the normally quoted figure, while pixel resolution or pixel count is often quoted for film replacement cameras. I believe TVL/ph to be the preferable term, as it is a true measure of the visible resolution of the camera.
The term 4k started in film with the use of 4k digital intermediate files for post production and compositing. The exposed film is scanned using a single row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4k pixel scans, a total of roughly 12k samples per line. Then the next line is scanned in the same manner, all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio (4x3) film frame that equates to roughly 4k x 3k, so the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or roughly 36 million samples. That is what 4k originally meant: a 4k x 3k x 3 intermediate file.
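A quick sketch of that arithmetic, using the full 4,096-pixel scanner width and an approximate 3,072-line 4:3 frame height:

```python
width, height, channels = 4096, 3072, 3      # scanner width, approx. 4:3 frame height, RGB passes
samples_per_line = width * channels          # 12,288 -- roughly 12k samples per scanned line
total_samples = width * height * channels    # ~37.7 million, in the region of the 36 million quoted above
print(samples_per_line, total_samples)
```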
Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample count is 8 million samples; for the Red Epic it is 13.8 million. But it doesn't stop there, because Red (like the F3) uses a Bayer sensor where the pixels have to sample the 3 primary colours between them. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red and blue, so you have an array made up of blocks of 4 pixels, BG above GR.
Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing, you have to reduce the resolution of the image falling on the sensor below that of the pixel sample rate. You don't want fine detail that the sensor cannot resolve falling on the sensor, because the missing picture information will create strange patterns known as moire and aliasing.
It is impossible to produce an optical low pass filter with an instant cut-off point, and we don't want any picture detail that cannot be resolved falling on the sensor, so the filter's cut-off must start below the sensor resolution. Next we have to consider that a 4k Bayer sensor is in effect a 2k horizontal pixel green sensor combined with a 1k red and a 1k blue sensor, so where do you put the low pass cut-off? As the information from the four pixels in the Bayer pattern is interpolated left/right/up/down, there is some room to set the low pass cut-off above the 2k pixels of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the green channel you will get strong aliasing in the R and B channels. If you set it so there is no aliasing in the R and B channels, the image will be very soft indeed. So camera manufacturers put the low pass cut-off somewhere between the two, leading to trade-offs in resolution and aliasing. This is why with Bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image: it's aliasing in the R and B channels. This problem is governed by the laws of physics and optics, and there is very little that the camera manufacturers can do about it.
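To put rough numbers on that compromise, here is a small sketch using the simplification above, with the green channel treated as an effective 2k sensor and the red and blue channels as 1k. These are illustrative approximations, not a real demosaicing model:

```python
def channel_nyquist(sensor_width_px: int) -> dict:
    """Approximate per-channel Nyquist limits (line pairs across the frame width)
    for a Bayer sensor, treating G as half and R/B as a quarter of the photosites."""
    g_samples = sensor_width_px // 2
    rb_samples = sensor_width_px // 4
    return {"G": g_samples // 2, "R": rb_samples // 2, "B": rb_samples // 2}

print(channel_nyquist(4096))  # {'G': 1024, 'R': 512, 'B': 512}
# Detail above ~512 line pairs aliases in R and B, while G is happy up to ~1024,
# so the optical low pass filter cut-off has to sit somewhere in between.
```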
In the real world this means that a 4k Bayer sensor cannot resolve more than about 1.5k to 1.8k TVL/ph without serious aliasing issues. Compare this with a 3 chip design with separate RGB sensors: with three 1920x1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels, you should still get about 1k TVL/ph. That's one reason why Bayer sensors, despite being around since the 70s and being cheaper to manufacture than 3 chip designs (which have their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing manufacturers to add ever more pixels to get higher resolution, like the F35 with its (non-Bayer) 14.4 million pixels.
This is a simplified look at what's going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn't even mean 2k TVL/ph, because the laws of physics prevent that. In reality even the very best 4k pixel Bayer sensor should NOT be resolving more than about 2.5k TVL/ph. If it is, it will have serious aliasing issues.
After all that, those of you I have not lost yet are probably thinking: well hang on a minute, what about that film scan? Why doesn't that alias, as there is no low pass filter there? Well, two things are going on. One is that the random structure of all the particles that make up a film image changes from frame to frame, which breaks up the fixed pattern effects of the sampling, so any aliasing is different from frame to frame and far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.

Micro 4/3, Super 35, DSLR and the impact on traditional Pro Camcorders.

I was asked by one of this blog's readers for my thoughts on this. It's certainly a subject that I have spent a lot of time thinking about. Traditionally, broadcast and television professionals have used large and bulky cameras with 3 sensors arranged around a prism to capture the 3 primary colours. The 3 chip design gives excellent colour reproduction and full resolution images of the very highest quality. It's not, however, without its problems. First, it's expensive matching 3 sensors and accurately placing them on a high precision prism made from very exotic glass. That prism also introduces image artefacts that have to be dealt with by careful electronic processing, and the lenses that have to work with these thick prisms also require very careful design.
Single sensor colour cameras are not new. I had a couple of old tube cameras that produced colour pictures from a single tube. Until recently, single chip designs were always regarded as inferior to multi-chip designs, but the rise of digital stills photography forced manufacturers to really improve the technologies used to generate a colour image from a single sensor. Sony's F35 camera, used to shoot movies and high end productions, is a single chip design with a special RGB pixel matrix. The most common single sensor method is a Bayer mask, which places a colour filter array in front of the individual pixels on the sensor. Bayer sensors now rival 3 chip designs in most respects. There is still some leakage of colour between adjacent pixels and the colour separation is not as precise as with a prism, but in most applications these issues are extremely hard to spot, and the stills coming from DSLRs speak for themselves.
A couple of years ago Canon really shook things up by adding video capabilities to some of their DSLRs. Even now (at the time of writing at least) these are far from perfect, as they are at the end of the day high resolution stills cameras, so there are some serious compromises in the way video is handled. But the Canons do show what can be done with a low cost single chip camera using interchangeable lenses. The shallow depth of field offered by the large, near 35mm size sensors (video cameras normally use 2/3″, 1/2″ or smaller) can be very pleasing, and the lack of a prism makes it easier to use a wide range of lenses. So far I have not seen a DSLR or other stills camera with video that I would swap for a current pro 3 chip camera, but I can see the appeal and the possible benefits. Indeed I have used a Canon DSLR on a couple of shoots as a B camera to get very shallow DoF footage.
Sony's new NEX-VG10 consumer camcorder was launched a couple of weeks ago. It has the shape and ergonomics of a camcorder but the large APS-C sensor and interchangeable lenses of a stills camera. I liked it a lot, but there is no zoom rocker and for day to day pro use it's not what I'm looking for. Panasonic and Sony both have professional large sensor cameras in the pipeline, and it's these that could really shake things up.
While shallow DoF is often desirable in narrative work, for TV news and fast action it's not so desirable. When you are shooting the unexpected, or something that's moving about a lot, you need some leeway in focus. So for many applications a big sensor is not suitable. I dread to think what TV news would look like if it was all shot with DSLRs!
Having said that, a good video camera using a big sensor would be a nice piece of kit to have for those projects where controlling the DoF is beneficial.
What I am hoping is that someone will be clever enough to bring out a camera with a 35mm (or thereabouts) sized sensor that has enough resolution to be used with DSLR (or 4/3) stills camera lenses, but that can also be windowed down and provided with an adapter to take 2/3″ broadcast lenses without adding a focal length increase. This means that the sensor needs to be around 8 to 10 megapixels, so that when windowed down to just the centre 2/3″ it still has around 3 million active pixels to give 1920x1080 resolution (you need more pixels than resolution with a Bayer mask). This creates a problem, though, when you use the full sensor: the readout will have to be very clever to avoid the aliasing issues that plague the current DSLRs, as you will have too much resolution when using the full sensor. Maybe it will come with lens adapters that incorporate optical low pass filters to give the correct response for each type of lens.
A camera like this would, if designed right, dramatically change the industry. It would have a considerable impact on the sales of traditional pro video cameras, as one camera could be used for everything from movie production to TV news. By using a single sensor (possibly a DSLR sensor) the cost of the camera should be lower than a 3 chip design. If it has a 10 MP sensor then it could also be made 3D capable through the use of a 3D lens like the 4/3″ ones announced by Panasonic. These are exciting times we live in; I think the revolution is just around the corner. Having said all of this, it's also fair to point out that while you and I are clearly interested in the cutting edge (or bleeding edge), there are an awful lot of producers and production companies that are not, preferring traditional, tried and tested methods. It takes them years to change and adapt; just look at how long tape is hanging on! So the days of the full size 2/3″ camera are not over yet, but those of us that like to ride the latest technology wave have great things to look forward to.

Why is Sensor Size Important: Part 2, Diffraction Limiting

Another thing you must consider when looking at sensor size is something called "diffraction limiting". For standard definition this is not as big a problem as it is for HD; with HD it is a big issue.

Basically, the problem is that light doesn't always travel in straight lines. When a beam of light passes a sharp edge it gets bent; this is called diffraction. So when light passes through the lens of a camera, the light around the edge of the iris gets bent, and this means that some of the light hitting the sensor is slightly de-focussed. The smaller you make the iris, the greater the percentage of diffracted light relative to non-diffracted light. Eventually the amount of diffracted, and thus de-focussed, light becomes large enough to start softening the image.

With a very small sensor, even a tiny amount of diffraction will bend the light enough for it to fall on the pixel adjacent to the one it's supposed to be focussed on. With a bigger sensor and bigger pixels, the amount of diffraction required to bend the light onto the next pixel is greater. In addition, the smaller lenses on cameras with small sensors mean the iris itself will be physically smaller.

In practice, this means that an HD camera with 1/3″ sensors will noticeably soften when stopped down (closed) beyond about f5.6, 1/2″ cameras beyond f8 and 2/3″ cameras beyond f11. This is one of the reasons why most pro level cameras have built-in ND filters. The ND filter acts like a pair of sunglasses, cutting down the amount of light entering the lens and as a result allowing you to use a wider iris setting. This softening happens with both HD and SD cameras; the difference is that with the low resolution of SD it was much less noticeable.
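For those who like to see where those f-numbers come from, here is a rough sketch based on the Airy disk diameter (2.44 x wavelength x f-number). The sensor widths, the green-light wavelength and the three-pixel visibility threshold are all illustrative assumptions, but they land close to the f5.6 / f8 / f11 guide above:

```python
WAVELENGTH_UM = 0.55  # green light, in micrometres

def diffraction_limited_fstop(sensor_width_mm: float, h_pixels: int = 1920,
                              airy_pixels: float = 3.0) -> float:
    """f-number at which the Airy disk spans roughly airy_pixels pixel pitches."""
    pitch_um = sensor_width_mm * 1000 / h_pixels
    return airy_pixels * pitch_um / (2.44 * WAVELENGTH_UM)

for name, width_mm in [('1/3"', 4.8), ('1/2"', 6.4), ('2/3"', 8.8)]:
    print(name, round(diffraction_limited_fstop(width_mm), 1))
# Prints roughly 5.6, 7.5 and 10.2 -- in line with the f5.6 / f8 / f11 rule of thumb.
```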

Why Is Sensor Size Important: Part 1.


Over the next few posts I'm going to look at why sensor size is important. In most situations larger camera sensors will outperform small sensors. That is an oversimplified statement, as there are many things that affect sensor performance, including continuing improvements in the technologies used, but if you take two current sensors of similar resolution and one is larger than the other, the larger one will usually outperform the smaller one. Not only will the sensors themselves perform differently, but other factors come into play such as lens design and resolution, diffraction limiting and depth of field. I'll look at those in subsequent posts; today I'm just going to look at the sensor itself.

Pixel Size:

Pixel size is everything. If you have two sensors with 1920x1080 pixels and one is a 1/3″ sensor and the other is a 1/2″ sensor, then the pixels on the larger 1/2″ sensor will be bigger. Bigger pixels will almost always perform better than smaller pixels. Why? Think of a pixel as a bucket that captures photons of light. If you relate that to a bucket that captures water, consider what happens when you put two buckets out in the rain: a large bucket with a large opening will capture more rain than a small bucket.

Small pixels capture less light each.

Bigger pixels capture more light each.

It's the same with the pixels on a CMOS or CCD sensor: the larger the pixel, the more light it will capture, so the more sensitive it will be. Taking the analogy a step further, if the buckets are both the same depth, the large bucket will be able to hold more water before it overflows. It's the same with pixels: a big pixel can store more charge before it overflows (photons of light get converted into electrical charge within the pixel). This increases the dynamic range of the sensor, as a large pixel can hold a bigger charge before overflowing than a small pixel.
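A rough sketch of the bucket comparison in numbers, for two 1920x1080 sensors of different physical size (the sensor widths are approximate figures for illustration):

```python
def pixel_stats(sensor_width_mm: float, h_pixels: int = 1920) -> tuple:
    """Approximate pixel pitch (um) and pixel area (um^2) for a given sensor width."""
    pitch_um = sensor_width_mm * 1000 / h_pixels
    return pitch_um, pitch_um ** 2

third_inch = pixel_stats(4.8)   # ~2.5 um pitch, ~6.3 um^2 per pixel
half_inch = pixel_stats(6.4)    # ~3.3 um pitch, ~11.1 um^2 per pixel
print(round(half_inch[1] / third_inch[1], 2))  # ~1.78x the light-gathering area per pixel
```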

Noise:

All the electronics within a sensor generate electrical noise. In a sensor with big pixels, which captures more photons of light per pixel than a smaller sensor, the ratio of captured light to electrical noise is better, so the noise is less visible in the final image. In addition, the heat generated in a sensor increases the amount of unwanted noise, and a big sensor will dissipate heat better than a small one, so once again the big sensor will normally have a further noise advantage.

So as you can see, in most cases a large sensor has several electronic advantages over a smaller one. In the next post I will look at some of the optical advantages.
