A key question for many photographers (and microscopists) is, what is the smallest detail I can see? This was the question that Carl Zeiss asked of local physics professor Ernst Abbe in 1868. The answer surprised them both (see p. 147 in Optics f2f). Abbe initially thought that the answer was to reduce the lens diameter in order to reduce aberrations, but when Zeiss found that this made things worse, Abbe realised it was diffraction, not aberration, that limits the resolution of an image. The larger the effective diameter of the lens, i.e. the larger the numerical aperture, the finer the detail we can see.
Next time you think about investing in a camera with more pixels, spend some time thinking about diffraction, and whether those extra pixels are going to help, because the fundamental limit to the quality of any image is usually diffraction. In this post, we consider the question of pixel size and diffraction in the formation of a colour image.
As a gentle introduction, let’s look at a typical image taken with a digital camera, Figure 1. As you zoom in on a part of the image you can start to see the individual pixels. The questions you might ask are: if I had a better camera, would I see more detail, e.g. could I resolve the hairs on the bee’s leg? And what does ‘better’ mean: more pixels, smaller pixels, or a more expensive lens? You might also wonder where the strange colours (the yellows and purples in the zoomed image) come from. We will also discuss this.
Figure 1: Image of a bee (top left). If we zoom in (bottom left) we can see how well we resolve the detail. If we zoom in more (right) eventually we see the individual pixels, in this case they are 4 microns across.
To start to answer these questions we need a bit of theory. The diffraction limit of a lens, by which we mean the smallest spot we can see on the sensor, Δx, is roughly two times the f-number times the wavelength,
Δx ~ 2 f# λ
where the f-number is simply the ratio of the lens focal length to the lens diameter (f# = f/D). This is a rough estimate, as a diffraction-limited spot does not have a hard edge, but it is a good rule of thumb. Using it, we can estimate that with an f-2 lens, the minimum feature size we can expect to resolve using red or blue light (with wavelengths of 0.65 and 0.45 microns, respectively) is approximately 3 and 2 microns, respectively. If we stop the lens down to f-22, then the minimum feature size using red or blue light grows to approximately 28 and 20 microns, respectively. These latter values are much larger than the typical pixel size of most cameras (often 4–6 microns on camera sensors, although smartphones often have pixels as small as 1 micron), so it is important to remember that when using a small or medium aperture (high or mid-range f-number) the quality of an image is limited by diffraction, not pixel size.
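The rule of thumb above is easy to code up as a quick sanity check (a minimal sketch; the function name and the particular f-numbers are our own illustrative choices, not from any library):

```python
def diffraction_limit_um(f_number, wavelength_um):
    """Approximate diffraction-limited spot size on the sensor, in microns,
    using the rule of thumb: spot size ~ 2 * f# * wavelength."""
    return 2.0 * f_number * wavelength_um

# Compare a fast lens (f-2) with a heavily stopped-down one (f-22)
# for red (0.65 um) and blue (0.45 um) light.
for f_number in (2, 22):
    for colour, wavelength in (("red", 0.65), ("blue", 0.45)):
        spot = diffraction_limit_um(f_number, wavelength)
        print(f"f-{f_number}, {colour} ({wavelength} um): ~{spot:.1f} um")
```

At f-22 the spot is several times larger than a typical 4–6 micron camera pixel, which is the point of the estimate above.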
For colour images, the fact that diffraction is wavelength dependent becomes important. The equation above tells us that the focal spot size depends on the wavelength, i.e. it is harder to focus red light than blue, so even if we have a perfect lens with no chromatic aberration we could still find that a focused white spot (such as the image of a star) has a reddish tinge at the edge (we will see this in some simulated images later). We could correct for diffraction effects in post-processing, but we would have to make some assumptions that may compromise the image in other ways, and in practice other aberrations or motion blur are often just as important.
Another complication is that the sensors used for imaging are not sensitive to colour, so to construct a colour image they are coated with a mosaic of colour filters such that each pixel is sensitive only to a particular colour. The most common type of filter is known as the Bayer filter, where each 2×2 array of 4 pixels contains two green, one red and one blue. We can see how the Bayer array works in the image below, which shows how the image of a white ‘point-like’ object such as a distant star is recorded by the red (R), green (G) and blue (B) pixels (three images in the top row), and then below how the image is reconstructed from the individual pixel data. The middle image in the bottom row shows how the white spot is reconstructed from the RGB pixel data by interpolating between pixels. The image on the right shows what we would get with a monochrome sensor. The size of the image is diffraction limited. The example shown is for, say, 1 micron pixels with an f-22 lens (we will look at lower f-numbers later). The key point of this image is that the red image is larger, which leads to the reddish tinge around the image.
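To make the reddish-tinge argument concrete, here is a toy numerical sketch of Bayer sampling followed by a crude demosaic. It assumes an RGGB layout, uses Gaussian spots as stand-ins for diffraction-limited discs, and averages over a 3×3 neighbourhood; the spot widths and the averaging scheme are illustrative, not how any real camera pipeline works:

```python
import numpy as np

N = 16  # toy sensor is N x N pixels
y, x = np.mgrid[0:N, 0:N] - N / 2

def spot(width):
    """Gaussian stand-in for a diffraction-limited spot of given width (pixels)."""
    return np.exp(-(x**2 + y**2) / (2 * width**2))

# Red diffracts more than blue, so its spot is wider (widths are illustrative).
scene = {"R": spot(2.6), "G": spot(2.2), "B": spot(1.8)}

# RGGB Bayer mask: which colour filter sits on each pixel.
mask = np.empty((N, N), dtype="<U1")
mask[0::2, 0::2] = "R"
mask[0::2, 1::2] = "G"
mask[1::2, 0::2] = "G"
mask[1::2, 1::2] = "B"

# Each pixel only records the scene through its own colour filter.
raw = np.zeros((N, N))
for c in "RGB":
    raw[mask == c] = scene[c][mask == c]

def kernel_sum(a):
    """Sum over each pixel's 3x3 neighbourhood (wraps at edges; fine for a toy)."""
    return sum(np.roll(np.roll(a, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def demosaic(colour):
    """Fill a colour plane by averaging the same-colour pixels nearby."""
    plane = np.where(mask == colour, raw, 0.0)
    count = (mask == colour).astype(float)
    return kernel_sum(plane) / np.maximum(kernel_sum(count), 1e-12)

rgb = np.stack([demosaic(c) for c in "RGB"], axis=-1)

# Away from the centre the reconstructed red exceeds blue: the reddish tinge.
centre, edge = N // 2, N // 2 + 4
print("centre RGB:", rgb[centre, centre], "edge RGB:", rgb[centre, edge])
```

The interesting number is the edge pixel: its red value is clearly larger than its blue value, even though the scene is a white spot, because the red disc is physically bigger on the sensor.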
Figure 2: Top row: From left to right. The R, G, and B response of a colour sensor to a focused white-light ‘spot’. Bottom row: From left to right. The combined RGB response. The combined RGB response with spatial averaging. The ideal white-light image.
Now that we have a model of a colour sensor, it is interesting to look at more complex images. The simplest question is whether we can resolve two bright spots, such as two nearby stars (or two nearby hairs on the leg of a bumble bee as in Figure 1). The image below shows the case of two white spots. By clicking on the image you can access an interactive plot which allows you to vary the spacing between the spots and the f-number of the lens.
Figure 3: Similar to Figure 2 except now we are trying to image two white spots. If you click on the figure you get an interactive version.
Try setting the f-number to f-11 and varying the separation. Again we can see the effects of diffraction: in the reconstructed image (in the middle of the bottom row) the less well focused red light fills the space between the two spots.
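A back-of-the-envelope version of this experiment can be coded directly, using Gaussian profiles in place of the true diffraction pattern. The width-to-sigma conversion and the 15-micron separation are our own illustrative choices; the point is only that, at f-11, blue light still resolves a gap that red light has already filled in:

```python
import numpy as np

def midpoint_contrast(separation_um, width_um):
    """Dip between two equal Gaussian spots: positive means there is a dip
    (the spots are resolved), negative means the midpoint is the brightest
    point (they have merged). Crude: width ~ 2 sigma."""
    sigma = width_um / 2
    peak = 1 + np.exp(-(separation_um / sigma) ** 2 / 2)   # intensity at a spot
    mid = 2 * np.exp(-((separation_um / 2) / sigma) ** 2 / 2)  # intensity midway
    return 1 - mid / peak

sep = 15.0  # spot separation in microns (illustrative)
for colour, wavelength in (("red", 0.65), ("blue", 0.45)):
    width = 2 * 11 * wavelength  # f-11 diffraction width, as per the rule of thumb
    resolved = midpoint_contrast(sep, width) > 0
    print(colour, "resolved" if resolved else "blurred together")
```

With these numbers the red spots blur together while the blue spots remain resolved, which is exactly the red fill-in seen in the reconstructed image.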
In these examples we are diffraction limited and the finite pixel size is not playing a role. If we had a very small f-number and relatively large pixels we might start to see the effects of pixelation. The first clue that we are pixel limited is the false colour produced by the Bayer filter. If our feature width is less than the separation of the R, G or B pixels, as in Figures 4 and 5, then the Bayer filter produces false colour, as illustrated in the image of three white squares below. The bottom left image is what the sensor measures; the bottom middle is the output after averaging.
Figure 4: Image of three white squares where the diffraction limit is smaller than the pixel size.
Figure 5: Similar to Figure 4 but now showing three white lines and zooming in to only 20×20 pixels. The Bayer discoloration (bottom middle) is particularly pronounced in this case.
We saw these colour artifacts in the bee photo shown in Figure 1. The averaging algorithm constructs inappropriate RGB values on particular pixels. Essentially, the algorithm has to infer the colour from the values on nearby pixels, and if the colours are changing rapidly it gets this wrong.
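The geometry behind these artifacts is easy to see in a toy sketch: a white feature narrower than the mosaic period only ever lands on some of the colour filters, so no interpolation can recover its true colour (again assuming an RGGB layout; all names are ours):

```python
import numpy as np

N = 8
scene = np.zeros((N, N))
scene[:, 3] = 1.0  # a sharp white vertical line, narrower than the 2x2 mosaic

# RGGB Bayer mask: which colour filter sits on each pixel.
mask = np.empty((N, N), dtype="<U1")
mask[0::2, 0::2] = "R"
mask[0::2, 1::2] = "G"
mask[1::2, 0::2] = "G"
mask[1::2, 1::2] = "B"

# Column 3 is odd, so it holds only G (even rows) and B (odd rows) filters:
# no red pixel ever sees the line, and the reconstruction tints it.
colours_hit = set(mask[:, 3])
print(sorted(colours_hit))  # -> ['B', 'G']
```

A white line one column to the left or right would instead hit R and G pixels, tinting the other way, which is why the false colours alternate in Figures 4 and 5.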
To summarise, your camera images are almost certainly diffraction limited rather than pixel limited, so buying a camera with more pixels is not going to help. It is better to invest in a better lens with a lower f-number than in a sensor with more pixels. If you have a good lens, then the first clue that you might be reaching the pixel limit is colour artifacts due to the Bayer filter.
 In contemporary microscopy there are some ways to beat the diffraction limit, such as STED, the topic of the 2014 Nobel Prize in Chemistry.
 The exact formulas are given in Chapter 9 of Optics f2f, where we also learn that the resolution limit is also a question of signal-to-noise, see e.g. Fig. 9.6. At best we should think of concepts such as the Rayleigh criterion as a very rough rule of thumb.