Genus Matte Box, Rails and Shoulder mount on PMW-F3

Genus kit fitted to an F3

So here is my current set up. As you may know, Genus are making the 3D rig that I designed, so I get to play with the latest Genus bits and pieces. This is my F3 with a pre-production universal riser and base plate. The base plate will fit most camcorders and incorporates mounts for a pair of 15mm rails. Up front I have a Genus Wide 4×4 matte box. I’m really pleased with this lightweight matte box with its added top flag; it is a good match for the Nikon DSLR lenses that I will be using and is much, much lighter than my old Petroff matte box. It fits lenses up to 105mm diameter, so I will need a bigger matte box (and bigger filters) for many PL mount lenses. Perhaps I’ll get one of those nice TLS Raven matte boxes. For smaller lenses though, the Genus 4×4 is really nice. Behind the matte box I have a Genus Superior follow focus. This is a silky smooth FF unit, and on this lens it’s driving one of Genus’s clever flexible lens gears. This is a little bit like a hose clip: a thumb screw tightens it up so that it fastens snugly around the lens, and it will fit a much larger range of lenses than a traditional rigid lens gear. The lens in the pictures is a Tokina ATX-Pro 28-70mm parfocal zoom. This is a great lens, plenty sharp enough for HD video, and it doesn’t telescope when you zoom. Breathing is minimal. To match the lens to the camera there is one of Mike Tapa’s (MTF) excellent Nikon to F3 adapters.

Genus hand grips and Follow Focus

Underneath the lens I have a pair of hand grips from the Genus shoulder mount kit. The shoulder pad is right at the back behind the camera. For a little bit of extra convenience I have a Genus GAP adapter plate that allows me to use a quick release VCT-14 tripod plate.

The only thing left to sort out is a loupe for the LCD screen. I’ve experimented with a Hoodman DSLR loupe that I have, and it almost works. It doesn’t cover the full width of the LCD, but it does allow me to use the LCD as a viewfinder with this shoulder rig. I guess I need to get the Hoodman Hood Riser and Hood Strap to make the loupe fit the LCD correctly. I’ve read elsewhere that it does not fit the F3, but my experiments with the loupe alone suggest it will fit. Anyone else tried it yet? I’d rather go this route than get a Cineroid viewfinder.

The tripod is a Manfrotto 509, one of their new silky smooth “bridge” style heads. The 509 is a mid weight head with a pretty high payload capability, true fluid action and variable counterbalance. I’m going to do a separate write up on the tripod; it’s really rather good, especially considering the price!

PMW-F3 and EX1R aliasing comparison.

PMW-F3(top) and PMW-EX1R(bottom)

Here is a roughly done (sorry) comparison of the aliasing from an EX1R at the bottom and my F3 at the top. The F3 had a Nikon 18-135mm zoom, and both cameras were set to default settings, 25P. The F3 clearly shows a lot more chroma aliasing, appearing as coloured moire rings on the horizontal, vertical and diagonal axes. The EX1R is not alias free. The chroma aliasing from the F3 is not entirely unexpected, as it has a bayer sensor and there is always a trade off between luma resolution, chroma resolution and where you set the optical low pass filter. Frankly, I find this performance a little disappointing. More real world tests are needed to see how much of a problem this is (or is not). To put it into some perspective, the F35 aliases pretty badly too, but that camera is well known for producing beautiful images. I hope I’m being over critical of this particular aspect of the F3’s performance, because in every other respect I think the camera is fantastic.

UPDATE: I’ve taken a look at the MTF curves for the F3 and they are quite revealing, showing that an OLPF is in use, giving an MTF50 of around 800 LW/PH vertically and 950 LW/PH horizontally. These figures are not quite as high as an EX1’s, but are quite reasonable for a 1920×1080 camcorder. This suggests that the aliasing is largely limited to the chroma sampling of the sensor. As this is a bayer (or similar) type sensor, the chroma is sampled at a reduced rate compared to the luma, which is why coloured moire is not entirely unusual.
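As a rough back-of-envelope sketch of why the chroma aliases while the luma largely doesn’t, here is the Nyquist arithmetic in Python. The sensor size below is the rumoured figure only, not a published spec, so treat the numbers as purely indicative:

```python
# Back-of-envelope Nyquist limits for a Bayer sensor delivering 1080p.
# The photosite counts below are a rumoured figure, not a Sony spec.
sensor_h, sensor_v = 2560, 1440

# N samples can represent N/2 cycles, i.e. N line widths per picture
# height (each cycle is one black plus one white line).
luma_limit_lwph = sensor_v             # best case for the full pixel grid
chroma_limit_lwph = sensor_v // 2      # R and B sit on every other row/column

print(f"best-case luma limit: {luma_limit_lwph} LW/PH")
print(f"best-case R/B limit:  {chroma_limit_lwph} LW/PH")
# With a measured MTF50 of 800-950 LW/PH, luma detail stays below its
# sampling limit, but fine coloured detail can still exceed the much
# lower R/B rate, which is consistent with chroma moire appearing
# before any obvious luma aliasing.
```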

Tests performed with a Tokina ATX-Pro 28-70mm lens at 25P

Feeling a bit better about my F3 now 🙂

PMW-F3 Horizontal MTF
PMW-F3 Vertical MTF

PMW-F3 Picture Profiles. First Batch.

OK, here we go. Here are some notes from testing my PMW-F3. First thing is… aliasing… a zone plate looks pretty bad, with a fair amount of aliasing. I had heard rumours of this from others with pre-production units, but in the field I had not seen anything that would worry me. While the zone plate is not pretty, real world aliasing looks acceptable. I usually use brickwork and roof tiles to test for moire, and these look clean on my F3. I think a finely patterned shirt could cause concern, and I need to look into this further. I am surprised that there is not more about this on the web!

Excessive detail correction does increase the aliasing; however, turning detail and aperture off does not reduce the aliasing significantly. Keep the detail level below -15 to avoid increasing the strength of the aliases; above -15 the aliasing artefacts are more noticeable. Detail “Off” appears to be the same as Detail -25. Below -25 the image softens, below -45 very noticeably, and there are some strange increases in aliasing below -50. For the moment I will be using detail at -17 or off.

The aperture setting can be used to add a little sharpness to the image to compensate for not using detail, or for a low detail setting. Aperture does not increase the appearance of the aliasing artefacts as strongly as detail correction does. I like the added crispness I get with aperture set to +30 combined with detail at -17. I would strongly recommend against using a raised aperture setting if you have detail higher than -15, as this will add sharpness to any detail-corrected aliases and lead to twittering edges on horizontal and vertical lines.

Colours have that usual Sony look. Not bad and pretty natural looking, but for me a little on the green side. For a more natural 1:1 look I quite like these Matrix settings:
R-G +10, R-B +4, G-R 0, G-B +14, B-R +3, B-G -3, Std Matrix.

For a more Canon like look with Rec-709 Matrix I came up with these:
R-G -2, R-B +9, G-R -11, G-B +2, B-R -16, B-G -10, Std Matrix, level +14, Blk Gamma -20

For use with Cinegamma 1 I use the above with Matrix Level +25, Blk Gamma -36. Highlights are a little washy, but as with any Cinegamma the best results are obtained by grading in post production.
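For anyone wanting to preview roughly what these numbers do before committing them to camera, here is a hedged Python sketch of a generic linear camera matrix. The colour-difference form is the one commonly described for broadcast cameras, but the scaling Sony applies to the menu values is not published, so the 0.01 factor is purely an assumption:

```python
import numpy as np

def apply_user_matrix(rgb, rg, rb, gr, gb, br, bg, scale=0.01):
    """Generic linear camera matrix of the colour-difference form
    commonly described for broadcast cameras. The 'scale' mapping
    from menu values to coefficients is an assumption: Sony does
    not publish the exact internal scaling."""
    r, g, b = rgb
    r2 = r + scale * (rg * (r - g) + rb * (r - b))
    g2 = g + scale * (gr * (g - r) + gb * (g - b))
    b2 = b + scale * (br * (b - r) + bg * (b - g))
    return np.clip([r2, g2, b2], 0.0, 1.0)

# The "more natural" settings quoted above:
# R-G +10, R-B +4, G-R 0, G-B +14, B-R +3, B-G -3
test_pixel = np.array([0.40, 0.50, 0.30])   # a slightly green mid tone
print(apply_user_matrix(test_pixel, 10, 4, 0, 14, 3, -3))
```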

Can you use a 2/3″ Zoom on a 35mm camera??

Can you use a 2/3″ B4 broadcast zoom on a 35mm camera? Well, yesterday I would have said “no”, but having seen this video on the AbelCine web site, now I’m not so sure. UPDATE: OK, I should have read the specs… it’s only suitable for smaller sensors as it has a 22mm image circle, while the F3 has a 27mm diagonal. It’s still a viable option for the AF100 however.

http://blog.abelcine.com/2011/02/11/using-23-lenses-on-the-panasonic-af100/

The HDx2 adapter magnifies the image to fill a 35mm sensor, doubling the focal length at the same time. This is very intriguing, as 35mm zooms are few and far between and very expensive. There is a 2 stop light loss (expand the image 2x and the same light is spread over 4x the area, which is exactly 2 stops), but most broadcast zooms are pretty fast lenses to start with. I can’t help but think that the pictures might be a little soft, but if you already have decent 2/3″ glass then the $5,500 for the adapter might make a lot of sense. Anyone out there with experience of one of these? I’d love to know how it performs.
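To make the 2 stop figure concrete, a few lines of Python (the f/1.9 starting aperture is just a typical fast 2/3″ zoom, not the spec of any particular lens):

```python
import math

# Why a 2x optical magnification costs 2 stops: the same light is
# spread over an image 2x wider and 2x taller, i.e. 4x the area.
magnification = 2.0
stops_lost = math.log2(magnification ** 2)      # log2(4) = 2.0 stops

# Effective aperture: a 2x extender doubles the f-number.
f_stop = 1.9                                    # typical fast 2/3" zoom (assumed)
print(f"stops lost: {stops_lost}, "
      f"f/{f_stop} behaves like f/{f_stop * magnification}")
```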

When is 4:4:4 not really 4:4:4.

The new Sony F3 will be landing in end users’ hands very soon. One of the camera’s upgrade options is a 4:4:4 RGB output, but is it really 4:4:4 or is it something else?

4:4:4 should mean no chroma subsampling, so the same number of samples for the R, G and B channels. This would be quite easy to achieve with a 3 chip camera, as each of the 3 chips has the same number of pixels, but what about a bayer sensor, as used on the F3 (and other bayer cameras too, for that matter)?

If the sensor subsamples the aerial image’s B and R compared to G (a Bayer matrix has 2x G samples for each R and B), then no matter how you interpolate those samples, the B and R are still subsampled and data is missing. Depending on the resolution of the sensor, even the G may be subsampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image; so for 2k, that’s 2k R, 2k G and 2k B. For a bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3 chip design with a pixel for each sample on each of the R, G and B sensors. It appears that the F3’s sensor has nowhere near this number of pixels; rumour has it at around 2.5k x 1.5k.
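Here is that pixel counting done explicitly in Python, using the rumoured sensor size above (an assumption, not a spec):

```python
# Per-channel sample counts: the rumoured F3 sensor vs what a Bayer
# sensor would need for one sample of every colour at every 1920x1080
# pixel. The 2500x1500 figure is the rumour above, not a spec.
def bayer_counts(h, v):
    total = h * v
    return {"G": total // 2, "R": total // 4, "B": total // 4}

rumoured = bayer_counts(2500, 1500)
needed = 1920 * 1080                      # ~2.07M samples per channel
doubled = bayer_counts(3840, 2160)        # 2x pixels in each dimension

print("rumoured sensor:", rumoured)
print("needed per channel for true 4:4:4:", needed)
print("2x-oversampled Bayer sensor:", doubled)
# The doubled sensor gives ~2.07M R and ~2.07M B samples, matching the
# "twice as many horizontal and vertical pixels" rule in the text.
```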

If it’s anything less than 1 pixel per colour sample, then while the signal coming down the cable may carry an equal number of R, G and B data streams, those streams won’t contain equal amounts of picture information for each colour: the resolution of the B and R channels will be lower than the green’s. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10 bit HDSDI outputs that only contain 8 bits of data; it might be a 10 bit stream, but the data is only 8 bit. It’s like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD, even if it has been upscaled to fill in all the missing bits.

Now don’t get me wrong, I’m not saying that there won’t be advantages to getting the 4:4:4 output option. By reading as much information as possible from the sensor prior to compression, there should be an improvement over the 4:2:2 HDSDI output. But it won’t be the same as the 4:4:4 output from an F35, where there is a pixel for every colour sample. Then again, the price of the F3 isn’t the same as the F35’s either!

Understanding Gamma, Cinegamma, Hypergamma and S-Log


Standard Gamma Curve

The graph to the left shows an idealised, normal gamma curve for a video production chain. The main thing to observe is that the curve is in fact pretty close to a straight line (actual gamma curves are very gentle, slight curves). This is important because it means that when the filmed scene gets twice as bright, the output shown on the display also appears twice as bright, so the image we see on the display looks natural and normal. This is the type of gamma curve that would often be referred to as a standard gamma, and it is very much what you see is what you get. In reality there are small variations on these standard gamma curves designed to suit different television standards, but those slight variations only make a small difference to the final viewed image. Standard gammas are typically restricted to around a 7 stop exposure range. These days this limited range is not so much to do with the latitude of the camera but with the inability of most monitors and TV display systems to accurately reproduce more than a 7 stop range, and the need to ensure that all viewers, whether they have a 20 year old TV or an ultra modern display, get a sensible looking picture. This means that we have a problem. Modern cameras can capture great brightness ranges, helping the video maker or cinematographer capture high contrast scenes, but simply taking a 12 stop scene and showing it on a 7 stop display isn’t going to work. This is where modified gamma curves come into play.
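For the curious, the standard gamma used for HD television is defined by Rec.709, and it is easy to tabulate for yourself. A small Python sketch (the sample scene values are arbitrary):

```python
import numpy as np

def rec709_oetf(L):
    """Rec.709 camera transfer curve, a typical 'standard gamma':
    linear near black, then roughly a 0.45 power law."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

scene = np.array([0.0, 0.02, 0.09, 0.18, 0.35, 0.70, 1.0])  # arbitrary levels
print(np.round(rec709_oetf(scene), 3))
# The matching display gamma is roughly the inverse of this curve, so
# the overall scene-to-screen response stays close to linear: double
# the scene brightness and the picture looks twice as bright.
```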

Standard Gamma Curve and Cinegamma Curve

The second graph here shows a modified type of gamma curve, similar to the hypergamma or cinegamma curves found on many professional camcorders. What does the graph tell us? Well, first of all we can see that the range of brightness, or latitude, is greater, as the curve extends out towards a range of 10 T stops compared to the 7 stops the standard gamma offers. Each additional stop is a doubling of latitude. This means that a camera set up with this type of gamma curve can capture a far greater contrast range, but it’s not quite as simple as that.

Un-natural image response area

Un-natural response

Look at the area shaded red on the graph. This is the area where the camera’s capture gamma curve deviates from the standard gamma curve used not just for image capture but also for image display. What this means is that the area of the image shaded in red will not look natural, because where something in that part of the filmed scene gets 100% brighter, it might only be displayed as getting 50% brighter, for example. In practice, while you are capturing a greater brightness range, you will also need to grade or correct this range somewhat in post production to make the image look natural. Generally, scenes shot using hypergammas or cinegammas can look a little washed out or flat. Cinegammas and hypergammas keep the important central exposure range nice and linear, so the region from black up to around 75% is much like a standard gamma curve. Faces, skin, flora and fauna tend to have a natural contrast range; it is only really highlights, such as the sky, that get compressed, and we don’t tend to notice this much in the final picture. This is because our visual system is very good at discerning fine detail in shadows and mid tones but less accurate in highlights, so we tend not to find this highlight compression objectionable.
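To see the shape of this kind of curve, here is an illustrative Python sketch. This is not Sony’s published hypergamma maths, just a standard curve with a log shoulder bolted on above an assumed knee point, which is enough to show why a highlight that doubles in brightness gains far less than double on screen:

```python
import numpy as np

def standard_gamma(L):
    L = np.asarray(L, dtype=float)
    return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

def cinegamma_like(L, knee=0.5, max_scene=8.0):
    """Illustrative only, not Sony's published hypergamma maths: track
    the standard curve up to an assumed knee, then use a log shoulder
    to squeeze scene levels up to 8x (3 extra stops) into the headroom."""
    L = np.asarray(L, dtype=float)
    y_knee = standard_gamma(knee)
    shoulder = y_knee + (1.0 - y_knee) * (
        np.log(np.maximum(L, knee) / knee) / np.log(max_scene / knee))
    return np.where(L <= knee, standard_gamma(L), shoulder)

scene = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
print(np.round(cinegamma_like(scene), 3))
# Above the knee the response flattens out: a highlight that doubles
# from 1.0 to 2.0 gains under 10% in output level. That flattened
# region is the red-shaded area where the image needs grading to look
# natural.
```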

S-Log Gamma Curve

S-Log Gamma Curve

Taking things a step further, this even more extreme gamma curve is similar to Sony’s S-Log gamma curve. As you can see, this deviates greatly from the standard gamma curve. Now the entire linear output of the sensor is sampled using a logarithmic scale. This allows more of the data to be allocated to the shadows and midtones, where the eye is most sensitive. The end result is a huge improvement in the recorded dynamic range (greater than 12 stops), combined with less data being used for highlights and more being used where it counts. However, when viewed on a standard monitor with no correction, the image looks very washed out, lacks contrast and generally looks incredibly flat and uninteresting.
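The widely quoted form of the original S-Log transfer function makes the “equal data per stop” behaviour easy to see. Treat this Python sketch as indicative rather than authoritative; Sony’s white paper has the definitive maths:

```python
import numpy as np

def slog1(x):
    """Widely quoted form of Sony's original S-Log curve (indicative;
    see Sony's white paper for the authoritative maths). Input x is
    linear exposure scaled so 90% reflectance white sits at x = 1.0."""
    x = np.asarray(x, dtype=float)
    return 0.432699 * np.log10(x + 0.037584) + 0.616596 + 0.03

refl = 0.18 * 2.0 ** np.arange(-4, 5)     # 18% grey plus/minus 4 stops
print(np.round(slog1(refl / 0.9), 3))
# Black lands near 3%, grey near 38%, white near 65%, and each extra
# stop of highlights gets a roughly equal slice of the remaining code
# values -- which is exactly why the uncorrected picture looks so flat.
```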

S-Log Looks Flat and Washed Out

Red area indicates where image will not look natural with S-Log without LUT

In fact the uncorrected image is so flat and washed out that it can make judging the optimum exposure difficult, and crews using S-Log will often use traditional light meters to set the exposure rather than a monitor, or rely on zebras and known references such as grey cards. For on set monitoring with S-Log you need to apply a LUT (Look Up Table) to the camera’s output. A LUT is in effect a reverse gamma curve that cancels out the S-Log curve, so that the image you see on the monitor is closer to a standard gamma image or your desired final pictures. The problem with this, though, is that the monitor is then no longer showing the full contrast range being captured and recorded, so accurate exposure assessment can be tricky, as you may want to bias your exposure towards light or dark depending on how you will grade the final production. In addition, because you absolutely must adjust the image quite heavily in post production to get an acceptable and pleasing image, it is vital that the recording method is up to the job. Highly compressed 8 bit codecs are not good enough for S-Log. That’s why S-Log is normally recorded using 10 bit 4:4:4 with very low compression ratios. Any compression artefacts can become exaggerated when the image is pushed and pulled in the grade to give a pleasing image. You could use 10 bit 4:2:2 at a push, but the chroma subsampling may lead to banding in highly saturated areas. Really, hypergammas and cinegammas are better suited to 4:2:2, and S-Log is best reserved for 4:4:4.
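Mechanically, a simple viewing LUT is just a lookup table with interpolation. The sketch below inverts the S-Log curve from the previous snippet and applies a Rec.709-style display gamma; real on set LUTs are usually 3D cubes with a creative grade baked in, so this only illustrates the principle:

```python
import numpy as np

def slog1(x):
    return 0.432699 * np.log10(np.asarray(x, float) + 0.037584) + 0.616596 + 0.03

# Build a 1D inverse table: S-Log code value -> linear scene value.
linear_axis = np.linspace(0.0, 4.0, 4096)   # cover over-range highlights too
slog_axis = slog1(linear_axis)               # monotonic, so interp works

def apply_viewing_lut(slog_pixels):
    """Undo the log curve by table lookup, then apply a Rec.709-style
    display gamma. A sketch of the mechanics only."""
    lin = np.interp(slog_pixels, slog_axis, linear_axis)
    lin = np.clip(lin, 0.0, 1.0)             # the display can't show over-range
    return np.where(lin < 0.018, 4.5 * lin, 1.099 * lin ** 0.45 - 0.099)

# S-Log values for black, 18% grey and 90% white (approx. 3/38/65%):
print(np.round(apply_viewing_lut(np.array([0.030, 0.377, 0.654])), 3))
```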

When is 4k really 4k, Bayer Sensors and resolution.

First let’s clarify a couple of terms. Resolution can be expressed in two ways. It can be expressed as pixel resolution, i.e. how many individual pixels there are on the sensor, or as TV lines (TVL/ph), i.e. how many individual lines can actually be seen. If you point a camera at a resolution chart, what you’re measuring is the point at which you can no longer discern one black line from the next. TVL/ph is the resolution normalised for the picture height, so aspect ratio does not confuse the equation; it is a measure of the actual resolution of the camera system. With video cameras TVL/ph is the normally quoted term, while pixel resolution or pixel count is often quoted for film replacement cameras. I believe the TVL/ph term to be preferable, as it is a true measure of the visible resolution of the camera.
The term 4k started in film, with the use of 4k digital intermediate files for post production and compositing. The exposed film is scanned using a single row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4k pixel scans, a total of just over 12k samples per line. Then the next line is scanned in the same manner, all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio (4×3) film frame, that equates to roughly 4k x 3k. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or roughly 36 million samples. That is what 4k originally meant: a 4k x 3k x 3 intermediate file.
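Spelled out in Python (taking “roughly 3k” as 3072 lines for the sake of the arithmetic):

```python
# The 4k digital intermediate arithmetic, spelled out. 3072 lines is
# assumed here as the "roughly 3k" frame height for a 4x3 frame.
width, height, channels = 4096, 3072, 3

samples_per_line = width * channels            # 12,288 samples per film line
total_samples = width * height * channels      # 37,748,736

print(f"{samples_per_line:,} samples per line")
print(f"{total_samples / 1e6:.1f} million samples per frame (roughly 36M)")
```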
Putting that into Red One perspective: it has a sensor with 8 million pixels, so the highest possible sample count is 8 million; for the Red Epic it is 13.8 million. But it doesn’t stop there, because Red (like the F3) uses a bayer sensor, where the pixels have to sample the 3 primary colours between them. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red or blue. So you have an array made up of blocks of 4 pixels: BG above GR.
Now, all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent the moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing, you have to reduce the resolution of the image falling on the sensor to below the pixel sample rate. You don’t want fine details that the sensor cannot resolve falling on the sensor, because the missing picture information will create strange patterns called moire and aliasing.
It is impossible to produce an optical low pass filter with an instant cut off, and we don’t want any picture detail that cannot be resolved falling on the sensor, so the filter cut-off must start below the sensor resolution. Next we have to consider that a 4k bayer sensor is in effect a 2k horizontal pixel green sensor combined with a 1k red and 1k blue sensor, so where do you put the low pass cut-off? As the information from the four pixels in the bayer pattern is interpolated left/right/up/down, there is some room to put the low pass cut-off above the 2k pixels of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the green channel, you will get strong aliasing in the R and B channels. If you set it so there is no aliasing in the R and B channels, the image will be very soft indeed. So camera manufacturers put the low pass cut-off somewhere between the two, leading to trade offs between resolution and aliasing. This is why, with bayer cameras, you often see little coloured blue and red sparkles around edges in highly saturated parts of the image: it’s aliasing in the R and B channels. This problem is governed by the laws of physics and optics, and there is very little that the camera manufacturers can do about it.
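Using the paragraph’s own framing (2k green, 1k red and blue samples per row on a “4k” sensor), this Python sketch shows why no single physical cut-off can satisfy all three channels:

```python
# Per the framing above: per row, a "4k" Bayer sensor behaves like a
# 2k-sample green channel plus 1k-sample red and blue channels. Each
# channel resolves at most half its sample count, in cycles per
# picture width.
sensor_width = 4096
g_samples, rb_samples = sensor_width // 2, sensor_width // 4
g_limit, rb_limit = g_samples // 2, rb_samples // 2   # 1024 and 512 cycles

# An OLPF is one physical filter, so one cut-off must serve all three
# channels. Neither extreme works:
for cutoff in (g_limit, rb_limit):
    print(f"cut-off at {cutoff} cycles: "
          f"green detail wasted: {cutoff < g_limit}, "
          f"R/B will alias: {cutoff > rb_limit}")
# Manufacturers therefore place the cut-off between 512 and 1024,
# trading some R/B aliasing for some extra luma resolution.
```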
In the real world this means that a 4k bayer sensor cannot resolve more than about 1.5k to 1.8k TVL/ph without serious aliasing issues. Compare this with a 3 chip design with separate RGB sensors: with three 1920×1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels, you should still get around 1k TVL/ph. That’s one reason why bayer sensors, despite being around since the 70s and being cheaper to manufacture than 3 chip designs (which have their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now, as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add ever more pixels to get higher resolution, like the F35 with its (non bayer) 14.4 million pixels.
This is a simplified look at what’s going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn’t even mean 2k TVL/ph, because the laws of physics prevent that. In reality, even the very best 4k pixel bayer sensor should NOT be resolving more than 2.5k TVL/ph. If it is, it will have serious aliasing issues.
After all that, those of you I have not lost yet are probably thinking: well, hang on a minute, what about that film scan, why doesn’t that alias when there is no low pass filter? Well, two things are going on. One is that the structure of all those particles used to create a film image changes from frame to frame; this reduces the fixed pattern effects of the sampling, so any aliasing is different from frame to frame and far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.

Lens Choices for the PMW-F3


The PMW-F3 has two lens mounts out of the box: the PL mount (via a supplied adapter) and the new F mount. PL mount lenses were developed by Arriflex for use with movie cameras, so they are an obvious choice. You used to be able to pick up older PL mount lenses quite cheaply, but when RED came along most of these got snapped up, so now PL mount lenses tend to be expensive. Sony will be producing a low cost three lens kit comprising 35mm, 50mm and 85mm lightweight PL mount lenses. If you want top quality then Zeiss or Cooke lenses are the obvious choice. If your budget won’t stretch that far, there are a number of 35mm SLR lenses that have been converted to PL mount.

PL mount lenses often have witness marks for focus. These are factory engraved markings, individual to that lens, for exact focus distances. They also often feature T stops instead of F stops for the aperture. An F stop is the ratio of the focal length of the lens to the iris opening, and gives the theoretical amount of light that would pass through the lens if it were 100% efficient. A T stop, on the other hand, is the actual amount of light passing through the lens, taking into account both aperture size and transmission losses through the glass. A prime lens with an f1.4 aperture may only be a T2 lens after the loss through the glass elements is taken into account. A multi element zoom lens will have higher losses, so an f2.8 lens may have a T stop of T4. However, it is the iris size, and thus the f stop, that determines the depth of field.
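As a quick sanity check of those figures, the relationship between f stops, T stops and lens transmission can be computed directly. The 50% transmission figures below are illustrative assumptions that happen to reproduce the examples above:

```python
import math

def t_stop(f_stop, transmittance):
    """T-stop from f-stop and total transmission: the T-stop is the
    f-number of an ideal 100%-efficient lens passing the same light,
    so T = f / sqrt(transmittance)."""
    return f_stop / math.sqrt(transmittance)

# Illustrative transmission figures (assumed, not measured):
print(round(t_stop(1.4, 0.50), 2))    # f/1.4 prime at ~50% -> ~T2
print(round(t_stop(2.8, 0.50), 2))    # f/2.8 zoom  at ~50% -> ~T4
```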

But what about the F mount on the F3, what will that let you use? Well, right now there are no F mount lenses, but Sony are planning a motorised zoom for next year. I am expecting a range of F mount to DSLR mount adapters to become available when the camera is released. These adapters will allow you to use DSLR lenses. The best mount, IMHO, is the Nikon mount. Why? Well, most modern DSLR lenses don’t have iris controls; the iris is controlled by the camera. Nikon are the only manufacturer to have kept manual control of the iris on the lens body. When choosing a lens you want to look for fast lenses, f2.8 or faster (f1.8, f1.4), to allow you to get a shallow depth of field. You want a lens designed for a full frame 35mm sensor to avoid problems with vignetting or light loss in the corners of the image. You want a large manual focus ring to make focus control easy. Prime lenses (non zoom), with their simpler design and fewer lens elements, normally produce the best results, but a zoom might be handy for its quick focal length changes. Do be aware, however, that zooms designed for stills photography normally don’t hold constant focus through the zoom range like a video lens does, so you may need to re-focus as you zoom. I have a nice Mk1 Tokina 28-70mm f2.6 Pro zoom; the optics in this lens are based on the Angénieux 28-70mm and it’s a great all round lens. I also have a Nikkor 50mm f1.8, a Pentax 58mm f1.4 and a few others. Of course you can also hire in lenses (DSLR and PL) as you need them.