Diffraction-limited imaging: the resolution limit in photography

A key question for many photographers (and microscopists) is: what is the smallest detail I can see? This was the question that Carl Zeiss asked of the local physics professor, Ernst Abbe, in 1868. The answer surprised them both (see p. 147 in Optics f2f). Abbe initially thought that the answer was to reduce the lens diameter in order to reduce aberrations, but when Zeiss found that this made things worse, Abbe realised it was diffraction, not aberration, that limits the resolution of an image. The larger the effective diameter of the lens, i.e. the larger the numerical aperture, the finer the detail we can see.

Next time you think about investing in a camera with more pixels, spend some time thinking about diffraction, and whether those extra pixels are going to help, because the fundamental limit to the quality of any image is usually diffraction [1]. In this post, we consider the question of pixel size and diffraction in the formation of a colour image.

As a gentle introduction, let’s look at a typical image taken with a digital camera, Figure 1. As you zoom in on a part of the image you can start to see the individual pixels. The questions you might ask are: if I had a better camera would I see more detail, e.g. could I resolve the hairs on the bee’s leg, and what does ‘better’ mean: more pixels, smaller pixels, or a more expensive lens? You might also wonder where the strange colours (the yellows and purples in the zoomed image) come from. We will also discuss this.


Figure 1: Image of a bee (top left). If we zoom in (bottom left) we can see how well we resolve the detail. If we zoom in more (right) eventually we see the individual pixels, in this case they are 4 microns across.

To start to answer these questions we need a bit of theory. The diffraction limit of a lens, by which we mean the smallest spot we can see on the sensor, Δx, is roughly two times the f-number times the wavelength,

Δx ~ 2 f# λ

where the f-number is simply the ratio of the lens focal length to the lens diameter (f# = f/D). This is a rough estimate as a diffraction-limited spot does not have a hard edge, but it is a good rule of thumb [2]. Using it, we can estimate that with an f-2 lens the minimum feature size we can expect to resolve using red or blue light (wavelengths of 0.65 and 0.45 microns, respectively) is approximately 3 and 2 microns, respectively. If we stop the lens down to f-22, the minimum feature size for the same red or blue light is 28 and 20 microns, respectively. These latter values are much larger than the typical pixel size of most cameras (often 4–6 microns on camera sensors, although smartphones often have pixels as small as 1 micron), so it is important to remember that when using a small or medium aperture (high or mid-range f-number) the quality of an image is limited by diffraction, not pixel size.
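The rule of thumb is easy to turn into a few lines of python (a sketch only; the function name is ours, and the numbers are the rough estimates quoted above):

```python
def spot_size(f_number, wavelength_um):
    """Rough diffraction-limited spot size: two times f-number times wavelength."""
    return 2.0 * f_number * wavelength_um

# red (0.65 micron) and blue (0.45 micron) light at f-2 and at f-22
for f_num in (2, 22):
    for lam in (0.65, 0.45):
        print('f-%d, %.2f microns: spot ~ %.1f microns' % (f_num, lam, spot_size(f_num, lam)))
```

Rounding these to the nearest micron reproduces the estimates in the text.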

For colour images, the fact that diffraction is wavelength dependent becomes important. The equation above tells us that the focal spot size depends on the wavelength, i.e. it is harder to focus red light than blue, so even if we have a perfect lens with no chromatic aberration we could still find that a focused white spot (such as the image of a star) has a reddish tinge at the edge (we will see this in some simulated images later). We could correct for diffraction effects in post-processing, but we would have to make some assumptions that may compromise the image in other ways, and in practice other aberrations or motion blur are often just as important.

Another complication is that the sensors used for imaging are not sensitive to colour, so to construct a colour image they are coated with a mosaic of colour filters such that each pixel is sensitive to only one colour. The most common type of filter is known as the Bayer filter, where each 2×2 array of 4 pixels contains two green, one red and one blue. We can see how the Bayer array works in the image below, which shows how the image of a white ‘point-like’ object such as a distant star is recorded by the red (R), green (G) and blue (B) pixels (three images in the top row), and then below how the image is reconstructed from the individual pixel data. The middle image in the bottom row shows how the white spot is reconstructed from the RGB pixel data by interpolating between pixels. The image on the right shows what we would get with a monochrome sensor. The size of the image is diffraction limited. The example shown is for, say, 1 micron pixels with an f-22 lens (we will look at lower f-numbers later). The key point of this image is to show how the red image is larger, which leads to the reddish tinge around the spot.


Figure 2: Top row: From left to right. The R, G, and B response of a colour sensor to a focused white-light ‘spot’. Bottom row: From left to right. The combined RGB response. The combined RGB response with spatial averaging. The ideal white-light image.

Now that we have a model of a colour sensor, it is interesting to look at more complex images. The simplest question is whether we can resolve two bright spots such as two nearby stars (or two nearby hairs on the leg of a bumble bee as in Figure 1). The image below shows the case of two white spots. By clicking on the image you can access an interactive plot which allows you to vary the spacing between the spots and the f-number of the lens.



Figure 3: Similar to Figure 2 except now we are trying to image two white spots. If you click on the figure you get an interactive version.

Try setting the f-number to f-11 and varying the separation. Again we can see the effects of diffraction: in the reconstructed image (in the middle of the bottom row) the less well focused red fills the space between the two spots.

In these examples we are diffraction limited and the finite pixel size is not playing a role. If we had a very small f-number and relatively large pixels we might start to see the effects of pixelation. The first clue that we are pixel limited is the false colour produced by the Bayer filter. If our feature width is less than the separation of the R, G or B pixels, as in Figures 4 and 5, then the Bayer filter produces false colour, as illustrated in the image of three white squares below. The bottom left image is what the sensor measures; the bottom middle is the output after averaging.



Figure 4: Image of three white squares where the diffraction limit is smaller than the pixel size.



Figure 5: Similar to Figure 4 but now showing three white lines and zooming in to only 20×20 pixels. The Bayer discoloration (bottom middle) is particularly pronounced in this case.

We saw these colour artifacts in the bee photo shown in Figure 1. The averaging algorithm constructs inappropriate RGB values on particular pixels: it has to infer the colour of each pixel from the values on nearby pixels, and if the colours are changing rapidly it gets this wrong.
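We can mimic this with a toy model (everything here is illustrative: a 6×6 scene, an RGGB layout, and a crude 3×3 averaging in place of a real demosaicing algorithm). A one-pixel-wide white line sampled through the mosaic comes back with unequal R, G and B values, i.e. false colour:

```python
import numpy as np

# toy scene: a one-pixel-wide white vertical line on a black background
scene = np.zeros((6, 6))
scene[:, 3] = 1.0

# RGGB Bayer mosaic: each pixel records only one colour channel
mask = np.zeros((6, 6, 3))
mask[0::2, 0::2, 0] = 1  # red pixels
mask[0::2, 1::2, 1] = 1  # green pixels
mask[1::2, 0::2, 1] = 1  # green pixels
mask[1::2, 1::2, 2] = 1  # blue pixels
mosaic = mask * scene[:, :, None]

def box_sum(a):
    """Sum over the 3x3 neighbourhood of every pixel."""
    p = np.pad(a, 1)
    return sum(p[i:i+6, j:j+6] for i in range(3) for j in range(3))

# crude demosaic: each channel is the local average of the pixels that recorded it
rgb = np.zeros((6, 6, 3))
for c in range(3):
    rgb[:, :, c] = box_sum(mosaic[:, :, c]) / box_sum(mask[:, :, c])

print(np.round(rgb[2, 3], 2))  # the 'white' line reconstructs with R = 0: false colour
```

The line happens to fall on a column with no red-filtered pixels, so the reconstructed red channel is zero along it and the line comes out strongly discoloured, just as in the zoomed bee image.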

To summarise, your camera images are almost certainly diffraction limited rather than pixel limited, so buying a camera with more pixels is not going to help. It is better to invest in a lens with a lower f-number than in a sensor with more pixels. If you have a good lens, the first clue that you might be reaching the pixel limit is colour artifacts due to the Bayer filter.

[1] In contemporary microscopy there are some ways to beat the diffraction limit, such as STED, the topic of the 2014 Nobel Prize in Chemistry.

[2] The exact formulas are given in Chapter 9 of Optics f2f, where we also learn that resolution is also a question of signal-to-noise, see e.g. Fig. 9.6. At best we should think of concepts such as the Rayleigh criterion as a very rough rule of thumb.

Climate Data: Fourier analysis

The natural language of any signal, periodic in space or time or both, is Fourier. A large part of Optics f2f (all about light waves that are periodic in space and time) is devoted to Fourier analysis, and many of the figures are made using Fourier transforms. In our view Fourier is the key to understanding light propagation, and many other topics in physics. To get more insight into Fourier techniques it is worth exploring other contexts. This article illustrates a data analysis example where Fourier techniques provide useful insight. Not optics, but still interesting and very important.

One important dataset (both historically and looking ahead) is the CO2 record collected at the Mauna Loa Observatory first by Charles Keeling, then Ralph Keeling, and others. Everyone should analyse the CO2 record and share their conclusions. This exercise highlights some of the important choices we have to make when we analyse data. Would you make the same choices?

The data is available via FTP (link to .txt file) from the NOAA. The code below (partly inspired by this post) is written in python and run using anaconda. You can cut and paste the code into your own python notebook or download the full notebook here (right click and save target). First some preliminaries.

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dpi=[196.0/255.0,59.0/255.0,142.0/255.0] #  Durham pink
dr=[170.0/255.0,43.0/255.0,74.0/255.0] #  Durham red
dy=[232.0/255.0,227.0/255.0,145.0/255.0] #  Durham yellow
dg=[159.0/255.0,161.0/255.0,97.0/255.0] # Durham green
db=[0,99.0/255.0,136.0/255.0] # Durham blue
dp=[126.0/255.0,49.0/255.0,123.0/255.0] # Durham purple
dv=[216.0/255.0,172.0/255.0,244.0/255.0] # Durham violet

Now import the data and plot.

co2_file = "ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_weekly_mlo.txt"
df_co2 = pd.read_csv(co2_file, sep='\s+', comment='#', header=None,
                     names=['yr','mon','day','time','CO2_ppm', # column names follow
                            'n_days','1_yr_ago','10_yr_ago','since_1800']) # the file's header comment

x = pd.Series(df_co2['time']).values
y = pd.Series(df_co2['CO2_ppm']).values
num = np.size(y) # number of weekly data points

for ii in range (0,num): #set missing data to previous value
    if y[ii]<0:
        y[ii]=y[ii-1]

plt.rcParams.update({'font.size': 18})
fig = plt.figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.3, 0.8, 1.0])
plt.plot(x,y,color=db)
plt.xlabel('Year')
plt.ylabel('CO$_2$ (ppm)')

This outputs a plot of the complete data set from 1974 to May 2019.


As the CO2 data is periodic, we can fit it using Fourier analysis. To see which harmonics contribute we perform a numerical Fourier transform using numpy's built-in fft module.

from numpy.fft import fft, ifft, fftshift

num=np.size(y) # number of weekly data points
n_yrs=x[num-1]-x[0] # no. of years spanned by the data

spec=np.abs(fft(y-np.mean(y))) # subtract the mean to suppress the zero-frequency peak
spec=spec/np.max(spec) # normalise

freq=np.arange(0,num/n_yrs,1/n_yrs) # lowest freq, i.e. freq step, is 1/n_yrs

plt.rcParams.update({'font.size': 24})
fig = plt.figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.3, 0.8, 1.0])
plt.plot(freq[0:num//2],spec[0:num//2],color=db)
plt.xlabel('Frequency (yr$^{-1}$)')
plt.ylabel('Normalised spectrum')

The plot shows the power spectrum, see p. 134 in Optics f2f.


A crucial part of this code is the frequency scale. The trick (see p. 252 in Optics f2f for how the fft algorithm works) is that we know that the smallest frequency, i.e. the frequency increment, is one over the total time of the whole data series (n_yrs), so the increments are 1/(n_yrs). The number of points along the frequency axis must be the same as the number of points in the time series (num), so the maximum frequency value is num/(n_yrs). We are lucky with this dataset because we know the fundamental is once per year, so if we get the scaling wrong it is pretty obvious.
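As an aside, the hand-built scale can be cross-checked against numpy's built-in helper: for an N-point series spanning n_yrs years, the positive-frequency half of np.fft.fftfreq should agree with our construction (the numbers below are illustrative, roughly the size of the weekly record):

```python
import numpy as np

num, n_yrs = 2350, 45.0  # illustrative values

freq = np.arange(0, num/n_yrs, 1/n_yrs)     # the scale constructed by hand
freq_np = np.fft.fftfreq(num, d=n_yrs/num)  # numpy's version (second half is negative)

print(np.allclose(freq[:num//2], freq_np[:num//2]))
```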

The Fourier spectrum suggests that there is a significant second harmonic (a twice-yearly oscillation) and some third and fourth. Let’s choose to fit the data with the fundamental and second harmonic only. The next piece of code does the curve fit and makes some plots. We can call our fit the Business-As-Usual (BAU) model and in principle we can use it to predict the future, assuming that the patterns of the last 45 years continue to be followed. The fit function has seven parameters: A, B and C characterise the secular trend, D and E the amplitude and phase of the seasonal variation, and F and G the second harmonic. We can either fit the whole data or choose a cut-off date to see how well the model predicted the recent history. Below we omit the last two years and then see how the model is doing over that period.

from scipy.optimize import leastsq, curve_fit

def fit3(x,A,B,C,D,E,F,G):
    return A+B*x+C*x*x+D*np.cos(2*np.pi*x+E)+F*np.cos(2*2*np.pi*x+G)

date_index=2250 #cut-off date for checking progress
A,B,C,D,E,F,G=curve_fit(fit3,x[0:date_index],y[0:date_index])[0] # exclude last two years, take only 2250 points

plt.rcParams.update({'font.size': 18})
fig = plt.figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.3, 0.8, 1.0]) # data and fit up to the cut-off
plt.plot(x[0:date_index],y[0:date_index],color=db)
plt.plot(x[0:date_index],fit3(x[0:date_index],A,B,C,D,E,F,G),color=dr)
plt.ylabel('CO$_2$ (ppm)')

ax = fig.add_axes([0.25, 0.75, 0.5, 0.5]) # inset: zoom on the seasonal cycle
plt.plot(x[0:200],y[0:200],color=db)
plt.plot(x[0:200],fit3(x[0:200],A,B,C,D,E,F,G),color=dr)

ax = fig.add_axes([1.0, 0.3, 0.8, 1.0]) # the last two years: data vs prediction
plt.plot(x[date_index:],y[date_index:],color=db)
plt.plot(x[date_index:],fit3(x[date_index:],A,B,C,D,E,F,G),color=dr)

ax = fig.add_axes([1.2, 0.4, 0.5, 0.5]) # inset: zoom on the most recent months
plt.plot(x[-52:],y[-52:],color=db)
plt.plot(x[-52:],fit3(x[-52:],A,B,C,D,E,F,G),color=dr)



Although the model reproduces the seasonal variation, the secular variation sometimes drifts lower (as in the 90s) or higher (after 2015) than the model value. We shall come back to this.

As an aside, we compare the BAU model secular variation to the temperature anomaly (HadCRUT4; you need to copy the data file and save it locally in the same directory).

temp_file = "HadCRUT_2019_06_02.txt"
df_temp = pd.read_csv(temp_file, sep='\s+', comment='#', header=None)

time = pd.Series(df_temp[0]).values
temp = pd.Series(df_temp[1]).values

def fit1(x,A1,B1,C1):
    return A1+B1*x+C1*x*x

A1,B1,C1=curve_fit(fit1,time[1400:2031],temp[1400:2031])[0] # data range to match CO2

fig = plt.figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.3, 0.8, 1.0])
plt.plot(time[1400:2031],temp[1400:2031],color=db)
plt.plot(time[1400:2031],fit1(time[1400:2031],A1,B1,C1),color=dr)
plt.ylabel('Temp. anomaly $^\circ$C')

ax = fig.add_axes([1.2, 0.3, 0.8, 1.0]) # CO2 secular trend for comparison
plt.plot(x,fit1(x,A,B,C),color=dg)
plt.ylabel('CO$_2$ (ppm)')


This plot illustrates the strong correlation between the level of CO2 and temperature (at least for the period 1975-2020). In principle we can use this to predict the future, and this is how we arrive at the +3 to +4 degree rise by 2100 under the BAU scenario. But if we reduce our emissions, how soon might we know if we are doing better?

To answer this we can look at the residuals (the difference between the actual measurement and the BAU model prediction). If the recent trend is below the statistical uncertainty we can start to say that we are making progress. The plot below shows the residuals for the two time periods considered above. We have included 5-week averaged data with error bars.


# residuals: data minus the BAU model, plotted as 5-week averages
new_y = y - fit3(x,A,B,C,D,E,F,G)
nbins=120 # 0.05 ppm bins over the range -3 to +3 ppm

fig = plt.figure(1,facecolor='white')
ms=25 # marker size
ax = fig.add_axes([0.05, 0.3, 0.75, 1.0])
for ii in range (0,470): # 5-week averages with standard-error bars
    ax.errorbar(np.mean(x[5*ii:5*ii+5]),np.mean(new_y[5*ii:5*ii+5]),
                yerr=np.std(new_y[5*ii:5*ii+5])/np.sqrt(5),fmt='.',ms=ms/5,color=db)
plt.ylabel('Residuals: Data $-$ BAU Model (ppm)')

ax = fig.add_axes([0.85, 0.3, 0.75, 1.0]) # same again, last two years in green
for ii in range (0,470):
    col = db if 5*ii < date_index else dg
    ax.errorbar(np.mean(x[5*ii:5*ii+5]),np.mean(new_y[5*ii:5*ii+5]),
                yerr=np.std(new_y[5*ii:5*ii+5])/np.sqrt(5),fmt='.',ms=ms/5,color=col)

ax = fig.add_axes([1.65, 0.3, 0.55, 1.0])
n, bins, patches=ax.hist(new_y[0:date_index],nbins,range=(-3.0,3.0),orientation='horizontal',facecolor=db, edgecolor=db, alpha=0.25)
n1, bins1, patches=ax.hist(new_y[0:date_index],40,range=(-1,1),orientation='horizontal',facecolor=db,edgecolor=db, alpha=0.25)
n2, bins2, patches=ax.hist(new_y[date_index:2350],nbins,range=(-3.0,3.0),orientation='horizontal',facecolor=dg,edgecolor=dg, alpha=0.75)
plt.xlabel('No. of weeks')

print(np.array2string(100*np.sum(n[40:80])/np.size(x[0:date_index]),precision=1),'% within 1 ppm of BAU prediction')

What these plots show is that the BAU model is not perfect, because of a long-term drift in the secular level that is not included. This drift is around 1 to 2 ppm, and in the short term it makes it hard to say that any policy change is having an effect until we see an undershoot of more than 2 ppm.

On the right we show a histogram of the residuals. Over 75% of the data are within +/- 1 ppm of the BAU prediction, but only 11% are within the standard error of +/- 0.15 ppm. So we cannot say that the BAU model is a ‘good fit’ [1], because it does not capture this long-term drift. The data for the last two years are shown in green. Clearly, the recent trend is worse than predicted by the BAU model for the previous 4 decades. My conclusion so far: no progress; rather, things seem to be getting worse, although it is too early to say whether this is something more than the type of drift that we have seen before.


Finally, we can repeat the Fourier transform on the residuals to see if there is any structure left. This time we have chosen a log-log scale so we can look at the long-term drift.


spec=np.abs(fft(new_y)) # Fourier transform of the residuals
spec=spec/np.max(spec)

plt.rcParams.update({'font.size': 24})
fig = plt.figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.3, 0.8, 1.0])
plt.loglog(freq[1:num//2],spec[1:num//2],color=db) # skip the zero-frequency point
plt.xlabel('Frequency (yr$^{-1}$)')
plt.ylabel('Normalised spectrum')

The result looks like 1/f noise, as we might expect.



[1] Hughes and Hase, Measurements and their uncertainties, OUP 2010.

Experimental tests of diffraction theory

In our undergraduate laboratory teaching we encourage students to think about their data and how to communicate what it means. Some years ago we introduced a prize for graphical excellence named after Florence Nightingale, an early pioneer in the creative presentation of data. The gold standard in the scientific method often means experiment and theory on the same graph. Odd, then, that many optics textbooks show lots of theory curves and photos of interference or diffraction, but rarely both on the same graph.

This article discusses a simple example where we take a photo of a diffraction pattern and compare it to the prediction of Fraunhofer diffraction theory (see p. 85 in Optics f2f). All you need is a light source, a diffracting obstacle, and a camera. However, the trade-off is that to make the theory easier you need a more `idealised’ experiment.

First, we show a photo of the far-field diffraction pattern recorded on a camera when an obstacle consisting of 5 slits is illuminated by a laser.


The experiment was performed by Sarah Bunton, an Ogden Trust Intern at Durham in 2016. The image is in black and white and saved as a .png file. The image is read using the python imread command and ‘replotted’ with axes.

from numpy import shape
from matplotlib.pyplot import close, figure, show, cm
from matplotlib.image import imread

img = imread('five_slit.png') # filename is illustrative; use your own image
[ypix, xpix]=shape(img)
print(ypix, xpix)

fig = figure(1,facecolor='white')
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.axison = True
ax.imshow(img,cmap=cm.gray)

The code outputs the image size in terms of horizontal (x) and vertical (y) pixels, and allows us to identify a region of interest (rows 500 to 600, say). Now we want to analyse the data. First, for illustration purposes only, we replot it as a colormap and reduce noise by binning the pixel data into superpixels made up of 4×4 real pixels.

from numpy import shape, arange, sin, pi
from matplotlib import rc
rc('font',**{'family':'serif','serif':['Times New Roman']})
from matplotlib.pyplot import close, figure, show, pcolormesh, xlabel, ylabel, rcParams, cm
from matplotlib.image import imread

img = imread('five_slit.png') # filename is illustrative
[ypix, xpix]=shape(img) # find size of image

fig = figure(1,facecolor='white')
ax = fig.add_axes([0.15, 0.3, 0.8, 1.0])
ax.axison = False # Start with True to find region of interest
ax.imshow(img[500:600,:],cmap=cm.gray) #Show region

small=img.reshape([ypix//4, 4, xpix//4, 4]).mean(3).mean(1) # create 4x4 super pixels
ax = fig.add_axes([0.15, 0.875, 0.8, 0.1])
ax.axison = False
ax.pcolormesh(small) #plot image of super pixels

cut=557 #Select horizontal line to compare to theory
data=img[cut,:] # intensity along the selected row
data=data**1.5 #Gamma correction

ax = fig.add_axes([0.15, 0.1, 0.8, 0.6])
ax.axison = True
ax.plot(data,'.',color='gray')

x=arange(0.0001,xpix,1.0) # x values for theory line
N=5 # number of slits
xoff=xpix/2.0 # centre of the pattern: by-eye 'fit'
flamd=100.0 # fringe-spacing scale in pixels: by-eye 'fit'
theory=(sin(N*(x-xoff)/flamd)/(N*sin((x-xoff)/flamd)))**2 # N-slit term (single-slit envelope omitted)
ax.plot(x,theory,color='red')
xlabel('Position in the $x$ direction')
ylabel('${\cal I}/(N^2{\cal I}_0a^2/\lambda z)$')
ax.set_xticks([xoff-pi*flamd, xoff,(xoff+pi*flamd)])
ax.set_xticklabels(['$-(\lambda/d)z$', '0','$(\lambda/d)z$'])


Second, we select a row through the intensity maximum so that we can compare the intensity along the horizontal axis with the prediction of theory. For the theory, we can measure the distances in the experiment (slit width, separation, distance to detector, pixel size and laser wavelength) or we can fit the data. Here we have only done a by-eye `fit’.


One thing you might notice in the code is that we need to perform a scaling for the Gamma correction used by the camera. Some cameras may allow you to turn this off; otherwise you need to correct for it in post-processing.

It is worth mentioning that the imread command is very versatile and can read many file formats, but if the image is colour then the data array may have three or four layers (RGB + transparency) as well as horizontal and vertical position. Here is an example where we read in a jpg, direct from a camera, of Young’s double slit experiment using sunlight (more on that later). This is much more difficult to analyse unless we know the exact spectral response of the RGB filters used in the camera, but you can still see some fun stuff, like red being diffracted more than blue!

from numpy import shape
from matplotlib import rc
rc('font',**{'family':'serif','serif':['Times New Roman']})
from matplotlib.pyplot import close, figure, show, rcParams, cm
from matplotlib.image import imread

img = imread('young_sunlight.jpg') # filename is illustrative
[ypix, xpix, dim]=shape(img) # colour image: extra dimension for the RGB layers

fig = figure(1,facecolor='white')
ax = fig.add_axes([0.15, 0.3, 0.8, 1.0])
ax.axison = False
ax.imshow(img) # show the colour photo

ax = fig.add_axes([0.15, 0.35, 0.8, 0.25]) # red channel along a central row
ax.axison = True
ax.plot(img[ypix//2,:,0],color='red')

ax = fig.add_axes([0.15, 0.075, 0.8, 0.25]) # blue channel along the same row
ax.axison = True
ax.plot(img[ypix//2,:,2],color='blue')


Finally, here is an example for Biot’s sugar experiment,


where we see the effect of the overlap between the RGB channels, e.g. the scattered blue laser light even shows up in the red channel. The unsaturated data on the right (obtained through the horizontal polarizer) gives a better indication of the relative light levels.

Polarization: linear in the circular basis

We can describe the polarization of light (Chapter 4 in Optics f2f) in terms of a superposition (or statistical mixture) of left- and right-circularly polarized components (see p. 55 in Optics f2f). For example, linearly-polarized light consists of a superposition of equal amounts of left- and right-circular as in the Figure below. If you click on the interactive version you can watch the time evolution (use the left and right buttons on a keyboard to move forwards and backwards in time).


Click here to see interactive figure.

The field (shown in the centre) propagates from left to right and we have shown the left and right components displaced towards the back and front, respectively. Notice how, as we move forward in time and look into the field, the left-hand and right-hand components (at a particular position, as indicated by the red vectors) rotate anti-clockwise and clockwise, respectively.

An interesting effect occurs when we change the relative phase of the left and right components. Such a phase shift arises whenever light propagates through a circularly birefringent medium (p. 61 in Optics f2f), such as a solution of chiral molecules (e.g. sugar as discussed here) or an optical medium in a magnetic field. The effect of changing the relative phase (with time fixed) is illustrated in the next interactive figure.


Click here to see interactive figure.

We find the superposition remains linearly polarized but the plane of polarization rotates. This effect is the basis of optical activity and the Faraday effect (see p. 62 in Optics f2f).
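We can verify this rotation with a few lines of python using Jones vectors (a sketch only; handedness and sign conventions vary between texts, and the function name is ours):

```python
import numpy as np

# Jones vectors for the circular basis (one common convention)
L = np.array([1,  1j]) / np.sqrt(2)  # left-circular
R = np.array([1, -1j]) / np.sqrt(2)  # right-circular

def plane_angle(phi):
    """Orientation (degrees) of the linear polarization formed by
    equal amounts of L and R with relative phase phi."""
    E = np.exp(1j*phi)*L + R
    E = E * np.exp(-1j*np.angle(E[0]))  # remove the overall phase
    return np.degrees(np.arctan2(E[1].real, E[0].real))

# zero relative phase gives horizontal linear polarization;
# a relative phase rotates the plane by half the phase shift
print(plane_angle(0.0), plane_angle(np.pi/2))
```

The superposition stays linear for any relative phase; only its orientation changes, which is exactly the optical activity and Faraday rotation discussed above.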

From Fresnel to the lighthouse

From bees to Vikings (see p. 51 in Optics f2f), from Harrison’s chronometer to atomic clocks and GPS, navigation has always been a matter of life and death. Often, what begins as blue-skies scientific curiosity may later save lives. A beautiful example is the work of Augustin-Jean Fresnel.

One early success of Young’s experiments was to inspire Fresnel to study shadows using a honey-drop lens in 1815. To explain his observations, Fresnel developed the theory of diffraction that we still use today, see Chapter 5 in Optics f2f. Fresnel’s experiments and theory firmly established the wave theory of light, and began a new way of thinking about optical devices such as the humble lens. For Fresnel, a lens was less about bending light than about ensuring that all parts of the wave arrive with their peaks and troughs synchronised (in phase) at the focus.

Fresnel realised that if we observe the light amplitude at a point then it is made up of contributions from the input plane that sometimes add in phase and sometimes out of phase due to differences in the optical path. He separated the regions that add in and out of phase into zones, where all the odd zones (white in the image below) have, say, a positive phase and all the even zones (grey in the image below) have a negative phase.


Fresnel zones: The white and grey regions contribute with positive and negative phase respectively and therefore interfere destructively with one another.

He showed that the light contributions from neighbouring zones were equal and opposite. So if you use a circular aperture to block all the light except for the two central zones, then you observe zero light intensity (on axis). If you increase the radius of the aperture to allow the third zone to contribute, then you observe high intensity again. This is illustrated in the two images below, which show the light intensity pattern downstream of a circular hole. In both cases, the top image is the light intensity observed in a particular plane and the bottom image is how the light distribution changes as it propagates (the green line marks the observation plane). If you click on the interactive plot you can vary the diameter of the hole and see how the pattern changes. Note in particular how the observed intensity on axis oscillates between zero and a maximum as more and more Fresnel zones are allowed to contribute.
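The on-axis argument is easy to check numerically: for plane-wave illumination the nth zone boundary sits at a radius sqrt(n λ z), and the on-axis intensity behind a hole of radius a, relative to the unobstructed wave, is 4 sin²(π a²/(2 λ z)). A quick sketch (the wavelength and distance are illustrative):

```python
import numpy as np

lam = 0.5e-6  # wavelength: 500 nm (illustrative)
z = 1.0       # aperture-to-observation distance in metres (illustrative)

r_n = np.sqrt(np.arange(1, 4) * lam * z)  # radii of the first three zone boundaries

def on_axis_intensity(a):
    """On-axis intensity behind a circular hole of radius a, relative to no aperture."""
    return 4 * np.sin(np.pi * a**2 / (2 * lam * z))**2

# one zone open: bright (4x); two zones: dark; three zones: bright again
print([round(on_axis_intensity(a), 6) for a in r_n])
```

Opening an extra zone flips the on-axis intensity between zero and four times the unobstructed value, exactly the oscillation seen in the interactive figure.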


Click here to see interactive figure.

Once you have this concept of Fresnel zones, you realise that to make all the light contribute with the same (or nearly the same) phase you just need a small adjustment to the phase within each zone so that each contribution interferes constructively at the focus. Consequently, to make a lens, instead of a big thick piece of glass we can use segments of glass that produce the desired phase shift for each zone. This idea, illustrated on the right below, is known as the Fresnel lens.
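The idea can be stated in two lines of python: a conventional thin lens imprints a quadratic phase, and the Fresnel lens keeps only that phase modulo 2π, so each segment needs at most one wavelength of extra glass (a sketch with illustrative numbers):

```python
import numpy as np

lam = 0.5e-6  # wavelength (illustrative)
f = 0.1       # focal length in metres (illustrative)
r = np.linspace(0, 0.01, 2000)  # radial coordinate across the lens

phase = -np.pi * r**2 / (lam * f)   # quadratic phase of a thin lens
wrapped = np.mod(phase, 2 * np.pi)  # Fresnel lens: the same phase modulo 2 pi

# both profiles focus identically, since only the phase modulo 2 pi matters
print(wrapped.min() >= 0.0, wrapped.max() < 2 * np.pi)
```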


The Fresnel lens completely revolutionised lighthouse design, as it was now possible to make much larger lenses constructed from smaller, lighter segments. The first Fresnel lens was installed in the Cordouan lighthouse only 8 years after Fresnel began his blue-skies experiment using a honey-drop lens.

Postscript: The lighthouse story continues with another giant of nineteenth-century physics, Michael Faraday. If you have the chance, go and visit his workshop below the Trinity Buoy Lighthouse in London.


Young’s double-slit in colour

Young’s double slit has become the standard paradigm to explore two-path interference. Not that it matters that Young did a different experiment that requires a different explanation. [1]

In the far-field, the ‘classic’ two-slit experiment produces fringes whose spacing is inversely proportional to the slit spacing. You can see this by varying the slit separation in the interactive figure linked below (credit to Nikola Sibalic).


Click here to see interactive figure.

Most textbooks only tell us about the far field, where the effect of diffraction dominates over the transverse size of the light distribution and we can make the Fraunhofer approximation, see e.g. p. 39 in Optics f2f. But now using computers it is easy to calculate the field everywhere. The code below uses Fresnel integrals, see p. 85 in Optics f2f, but we could also perform the calculation using Fourier methods, p. 99 in Optics f2f.

from numpy import dstack, linspace, size, sqrt, pi, clip
from pylab import figure, clf, show, imshow, meshgrid
from scipy.special import fresnel
from time import perf_counter
from matplotlib.pyplot import close
startTime = perf_counter()

Z, X = meshgrid(linspace(1,10001,480), linspace(-120,120,480))
R=0*X; G=0*X; B=0*X #initialise RGB arrays

wavelengths=linspace(0.4, 0.675, 12) # define 12 wavelengths + RGB conversion
rw=[0.1, 0.05, 0.0, 0.0,   0.0,   0.0, 0.0, 0.1, 0.13, 0.2, 0.2, 0.05]
gw=[0.0,  0.0,   0.0, 0.067, 0.133, 0.2, 0.2, 0.1, 0.067, 0.0, 0.0, 0.0]
bw=[0.1, 0.15, 0.2, 0.133, 0.067, 0.0, 0.0, 0.0,  0.0,   0.0, 0.0, 0.0]

a=12.5 #slit half width
off1=25.0 #slit offset

for ii in range (0,12): #add intensity patterns into the RGB channels
    s = sqrt(2.0/(wavelengths[ii]*Z)) # Fresnel scaling for this wavelength
    S1, C1 = fresnel((X+off1+a)*s) # edges of the first slit
    S2, C2 = fresnel((X+off1-a)*s)
    S3, C3 = fresnel((X-off1+a)*s) # edges of the second slit
    S4, C4 = fresnel((X-off1-a)*s)
    I = 0.5*((C1-C2+C3-C4)**2+(S1-S2+S3-S4)**2) # two-slit Fresnel intensity
    R += rw[ii]*I
    G += gw[ii]*I
    B += bw[ii]*I

brightness=4.0 #saturate
RGB=dstack((clip(brightness*R,0,1), clip(brightness*G,0,1), clip(brightness*B,0,1)))

fig = figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.05, 0.9, 0.9])
ax.axison = False
ax.imshow(RGB)
print('elapsed time: ', (perf_counter() - startTime))

The code is so fast that it is easy to add colour as in the above example, where we have used the ideas from Interferometry in colour to create a wavelength to RGB mapping. By modifying the spectrum of the input light it is possible to explore coherence, see Chapter 8 in Optics f2f. By reducing the range of wavelengths, e.g. using a green filter as in the image below (in the code above, take the wavelength index from 4 to 8 instead of 0 to 12), we see higher fringe visibility off axis, see also Fig. 8.15 in Optics f2f, and we say that the light is more coherent.


Click here to see interactive figure.

[1] In his 1804 paper Young describes an experiment where a small hole is made in a screen to create a beam of sunlight. A thin card is placed in this beam to block the central portion such that the light can pass on either side and `interfere’ downstream. Such an arrangement is not easily explained by two-path interference. The diffraction pattern shown below is described by Fresnel diffraction in the near field and is a good example of Babinet’s principle in the far field, see p. 106 in Optics f2f.


Optical Fourier Transforms (of letters)

In Optics f2f we frequently use the example of letters to illustrate Fraunhofer diffraction (Chapter 6), convolution (Chapter 10 and Appendix B), spatial filtering (Chapter 10) and the properties of Fourier transforms in general (Chapters 6, 10 and Appendix B). Below, we share a python code, based on eqns (6.35) and (9.9) in Optics f2f, that calculates the intensity patterns associated with the optical Fourier transforms of letters.

The code works by printing a letter as an image, e.g. an E


then digitising this image as an array, calculating the Fourier transform, and finally plotting the modulus squared as a colormap.


The code (last updated 2019-02-14 [1]) evaluates eqns (6.35) and (9.9) in Optics f2f for cases where the input aperture function has the shape of a letter.

from numpy import frombuffer, uint8, mean, shape, zeros, log, flipud
from numpy.fft import fft2, fftshift, ifftshift
from matplotlib import cm
from matplotlib.pyplot import close, show, figure, pcolormesh
fig = figure(1,facecolor='white')
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) #add axis for letter
ax.axison = False
ax.text(-0.5,-1.0, 'Z', fontsize=460,family='sans-serif') #print letter
fig.canvas.draw() #extract data from figure
data = frombuffer(fig.canvas.tostring_rgb(), dtype=uint8)
data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
bw=mean(data,2)/255.0 #convert RGB to greyscale in the range 0 to 1
bw-=1.0 #invert: background to 0, letter to -1
[xpix, ypix]=shape(bw)
mul=4 # add zeros to increase resolution
H=zeros([mul*xpix, mul*ypix]) #zero-padded aperture function
H[0:xpix, 0:ypix]=bw
F=flipud(fftshift(fft2(H))) #perform Fourier transform and centre
Fr=F.real
Fi=F.imag
F2=Fr*Fr+Fi*Fi #calculate modulus squared
fig = figure(2,facecolor='white')
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.axison = False
p=ax.pcolormesh(log(F2),cmap=cm.afmhot) # plot log(intensity) pattern
p.set_clim(12,20) #choose limits to highlight desired region

[1] Note: the code works with python 2 and 3. Cut and paste the text into a jupyter notebook. Last tested using jupyter with python 3.7.1, numpy 1.15.4 and matplotlib 3.0.2.

Biot’s sugar experiment

Of all the experiments in optics, one can make a case that none packs more pedagogical punch than optical rotation in sugar. In one beautiful and simple experiment we learn about the physics of light scattering, polarization, chiral chemistry, and fluorescence.


Optical rotation was first discovered by Jean-Baptiste Biot in 1815, and had an almost immediate impact by providing a quantitative measure of the purity of sugar. [1] In 1848 Louis Pasteur established chiral chemistry by using Biot’s optical rotation technique to distinguish between left- and right-handed tartrate crystals.


The modern variant of the experiment is to send a laser beam through a sugar solution, for example corn syrup, which is liquid at room temperature and about one third glucose by weight.

For the laser we use the Hexa-beam laser, which has the advantage of six colours in one box. Below we show photographs of first a green laser and second a blue laser passing through a tube of corn syrup.



In both cases we can see the laser beam as it propagates through the sugar, but oddly the green laser seems to oscillate between bright green and a fainter orange, whereas the blue laser oscillates between blue and green. What is going on?

Light scattering

Given that we should never look directly at a laser beam, we only ever see laser light because it is scattered. This light scattering effect is the same as we see in the sky: we are not looking at the sunlight directly, only sunlight that is scattered by molecules in the atmosphere (which scatter more blue than red, see p. 223 in Optics f2f).

If the light is propagating across our field of view and we are observing from the side, then only vertically polarized light is scattered towards us. This is illustrated below for the case of unpolarized light propagating from left to right, where we observe the vertically polarized component.
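The rule here is just dipole radiation: an oscillating charge cannot radiate along its own axis, so the intensity scattered towards an observer varies as sin^2 of the angle between the dipole axis and the line of sight. A minimal sketch of this rule (the function name and example vectors are just for illustration):

```python
import numpy as np

def scattered_intensity(dipole_axis, obs_dir):
    """Relative dipole radiation intensity towards obs_dir: sin^2(chi),
    where chi is the angle between the dipole axis and the line of sight."""
    d = np.asarray(dipole_axis, dtype=float)
    n = np.asarray(obs_dir, dtype=float)
    cos_chi = np.dot(d, n) / (np.linalg.norm(d) * np.linalg.norm(n))
    return 1.0 - cos_chi**2

# Beam along x, observer looking along y (i.e. from the side):
print(scattered_intensity([0, 0, 1], [0, 1, 0]))  # vertical dipole: bright, 1.0
print(scattered_intensity([0, 1, 0], [0, 1, 0]))  # dipole along the line of sight: dark, 0.0
```

A dipole driven by horizontally polarized light (here, along y) sends nothing towards a side-on observer, which is exactly why the sky test below works.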


This effect is well known to landscape photographers, who exploit it using a polarizing filter. The images on the left and right below are taken with the polarizer aligned vertically and horizontally, respectively, allowing first vertically and second horizontally polarized light to pass. When we detect horizontal light (right image) the sky looks very dark, because the scattered light is mostly vertically polarized and does not pass through the polarizer.


The difference in the sugar experiment is that the input is linearly polarized, and we will only see scattered light from the side if the polarization is vertical and remains vertical. The fact that the scattered green and blue light in the experiment sometimes disappears suggests that the polarization might be changing as the light propagates.

Optical rotation

In fact, what happens is that the axis of linear polarization rotates, as shown in this figure.


The light starts out vertically polarized and we can see vertically polarized scattered light, but then the polarization rotates to horizontal and we see nothing (in microscopic terms, the electric charges in the sugar cannot emit light in the direction of their oscillation, i.e. in the direction of the observer). As the light continues it rotates back to vertical and we see scattered light again. This explains the bands of light and dark observed in the experiment.
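As a rough sketch, the side-scattered brightness along the tube goes as cos^2 of the accumulated rotation angle. The rotation rate below is an arbitrary illustrative number, not a measured value for corn syrup; the real rate depends on the sugar concentration and on the wavelength, which is why the bands are spaced differently for the green and blue lasers.

```python
import numpy as np

# Assumed rotation rate for illustration: 45 degrees of rotation per 5 cm.
alpha = np.deg2rad(45) / 0.05          # rotation rate in rad per metre

z = np.linspace(0.0, 0.2, 5)           # positions along the tube (m)
brightness = np.cos(alpha * z)**2      # vertical component seen from the side

for zi, bi in zip(z, brightness):
    print(f"z = {zi:.2f} m: relative side-scattered brightness {bi:.2f}")
```

With this made-up rate the beam appears bright at the input, fades to dark at 10 cm, and is bright again at 20 cm, i.e. one full band.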

The reason for this rotation is that the sugar molecules are chiral (see p. 62 in Optics f2f).

As in landscape photography, we can verify the polarization of the scattered light by placing a linear polarizer between the object and the detector. Below we show what happens if we view the sugar experiment through both vertical (on the left) and horizontal (on the right) polarizing sheets. We have overlapped the two sheets slightly to show that through crossed polarizers we really do see nothing.


Through the vertical polarizers there is not much change, whereas through the horizontal polarizer all the scattered light (which is vertically polarized) is blocked and we see nothing. Well, not quite nothing. In fact we see some light of a different colour. This is fluorescence.


In fluorescence, the incident light is absorbed (not scattered), and both the polarization and the wavelength of the emitted light can be different to the input. In this example the fluorescence is unpolarized, which is why we can see it through both the vertical and horizontal polarizers. By energy conservation the wavelength of the emission cannot be shorter than that of the input, which explains why the green and blue lasers produce orange and green fluorescence, respectively.

[1] According to Theresa Levitt, The Shadow of Enlightenment: Optical and Political Transparency in France 1789-1848, “the price of a batch of sugar was determined directly by its polarimeter reading”.

Laser design using the complex beam parameter

Three decades ago I began my PhD with the great Allister Ferguson. My goal would become to design and build the world’s first single-frequency Ti:sapphire laser. In January 1989 I began by designing a Nd:YAG ring laser. Below is a photograph of the first page of my PhD notebook.


To design a laser cavity, you start with the equations that describe the propagation of laser beams, and on the left (in the image) you can see a matrix equation that describes how the complex beam parameter q of a laser beam changes when it passes through a lens with focal length f (or is reflected by a curved mirror). I had learnt this equation from Geoff Brooker, except that his equation (see here) included an odd factor called “cancel me”. At the time I did not understand why I needed to “cancel me”, but as long as the equations gave the right answers (which they do) I could start building lasers.

The Nd:YAG laser worked and we published a paper, then I started work on the Ti:sapphire laser. Using the same equations, this is the design I came up with.


Using an early version of the laser we published a paper on Doppler-free spectroscopy of rubidium atoms. Later this laser was developed into a commercial product by Microlase, who subsequently became Coherent Scotland. Hundreds of these lasers (perhaps one thousand by now, costing around 100k each) have been sold worldwide. I mention this because it helps us to appreciate the utility of both maths and physics.

Thirty years on, while writing a textbook on optics, I found the motivation to revisit those old equations on the complex beam parameter and rewrite them in a more elegant form. As I suspected, there is no need for “cancel me” or even to write the equation in the form of a matrix times a vector. The point is that a Gaussian beam is simply a paraxial spherical wave with a complex argument (see p. 178 in Optics f2f) and an ideal lens or curved mirror is simply an element which modifies the curvature of a spherical wave (see p. 27-8 of Optics f2f).
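To sketch how little machinery is needed: free space just adds a real distance to q, and a thin lens (or curved mirror) changes the curvature, 1/q' = 1/q - 1/f. The waist, focal length, and wavelength below are illustrative numbers, not the values from the actual laser.

```python
import numpy as np

wavelength = 800e-9  # m; illustrative (Ti:sapphire runs around 700-1000 nm)

def q_from_waist(w0, z=0.0):
    """Complex beam parameter q = z + i z_R for a waist w0 located at z = 0."""
    return z + 1j * np.pi * w0**2 / wavelength

def through_lens(q, f):
    """Thin lens or curved mirror: only the curvature changes, 1/q' = 1/q - 1/f."""
    return 1.0 / (1.0 / q - 1.0 / f)

def propagate(q, d):
    """Free space: the paraxial spherical wave just moves on, q' = q + d."""
    return q + d

def spot_size(q):
    """Beam radius w, from Im(1/q) = -lambda/(pi w^2)."""
    return np.sqrt(-wavelength / (np.pi * (1.0 / q).imag))

# A collimated 1 mm beam focused by a 100 mm lens:
q = q_from_waist(1e-3)                    # waist at the lens
q = propagate(through_lens(q, 0.1), 0.1)  # one focal length downstream
print(f"spot size at the focus: {spot_size(q) * 1e6:.1f} microns")
```

The printed spot size agrees with the familiar estimate w = f lambda / (pi w0), about 25 microns here.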

This beautiful piece of mathematics (p. 181 in Optics f2f), with real applications, has been putting `bread on the table’ for many, over many years.

Interferometry in colour

Optics textbooks tend to be rather monochrome. When we plot Young’s interference fringes or diffraction patterns we tend to assume monochromatic light. However, some of the most interesting interference effects, like soap films (see e.g. here and below), are observed using ‘white light’. In Chapter 8 of Optics f2f we discuss white light, and in the Exercises include a plot of Young’s double-slit experiment using more than one wavelength (p. 146 in Optics f2f). Making colour plots of diffraction or interference phenomena raises some interesting issues that are not widely discussed. The purpose of this post is to illustrate the issues involved using another example: Newton’s rings.

The image below (by James Keaveney) is an award-winning photograph of a Newton’s ring pattern produced when two pieces of quartz (one with a slight curvature) are placed almost touching and illuminated with white light. This amazing optical component was made by the brilliant David Sarkisyan and colleagues in Armenia, and is used to perform experiments on light-matter interactions in nanoscale geometries (see e.g. here).


In the middle, where the separation between the two reflecting surfaces is less than the wavelength, we see darkness surrounded by a white fringe, which in turn is surrounded by coloured fringes. Moving away from the centre we only see one more darkish fringe before the interference pattern ‘washes out’, as expected for white light (see e.g. Figures 8.5, 8.6 and 8.15 in Optics f2f).

We can model this assuming an intensity pattern of the form (see p. 41 in Optics f2f)

I(rho) ∝ sin^2( pi rho^2 / (lambda R) ),

where rho is the distance from the centre, lambda is the wavelength and R is the radius of curvature of one surface. To model white light we simply add together the intensity patterns for each wavelength. But this is where it gets tricky.
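A minimal python sketch of the single-wavelength pattern, taking I proportional to sin^2(pi rho^2 / (lambda R)); the wavelength and radius of curvature are illustrative numbers, not those of the actual quartz component:

```python
import numpy as np

wavelength = 550e-9  # m, green light (illustrative)
Rcurv = 1.0          # m, radius of curvature of the curved surface (illustrative)

x = np.linspace(-2e-3, 2e-3, 400)   # a 4 mm field of view
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
I = np.sin(np.pi * rho2 / (wavelength * Rcurv))**2

# Dark at the centre (the pi phase shift on reflection), with the first
# bright ring where rho^2 = lambda*Rcurv/2, i.e. rho of about 0.5 mm here.
```

Plotting `I` with `imshow` gives the familiar monochrome bull's-eye; the white-light patterns below are sums of arrays like this one, weighted into the R, G, and B channels.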

The spectrum of sunlight (shown below, data from here) and most other ‘white-light’ sources is complex (e.g. the narrow dips in the solar spectrum are the famous Fraunhofer lines).


In principle, we could simulate sunlight interference by sampling the intensity at, say, 1 nm intervals between 400 and 650 nm and then summing the resulting 250 intensity patterns, but this is computationally rather intensive. Alternatively, we could try to exploit the fact that the eye and colour image sensors only measure 3 colour channels, R, G, and B. So, is it sufficient to reproduce the pattern by summing only 3 intensity patterns corresponding to the red, green, and blue channels?

The answer is no. Although at first this may seem surprising, the reason why should be apparent to students of Optics f2f. Adding the intensity patterns for different colours means adding waves with different spatial frequencies, and we learn from Fourier that a discrete spectrum is not great at reproducing a continuous spectrum. We see this precisely in e.g. Figure 8.5 of Optics f2f, where the interference pattern for 5 colour channels on row (iv) does not give the same result as the continuous spectrum on row (v), although the approximation is not bad for small path differences.

To you and me the right mix of RGB (what photographers would call white balance) looks like white light and we cannot tell the difference between a discrete and a continuous spectrum until we do interferometry (oil films or soap bubbles are probably the most familiar examples).

So, coming back to Newton’s rings. First, I show the pattern calculated for three discrete colours (an R, G, and B channel only). As a source this looks like white light, but if we look at the Newton’s ring pattern it looks very different to the James Keaveney photo.


In particular, we see different colours (yellow) and a repeat of the dark fringes towards the edges of the pattern.

Now if we increase the number of wavelengths to 12 we get this.


This is much closer to the photo, and in my subjective view is a reasonable approximation to the case of continuum white-light illumination. In particular, we see how the intensity fringes ‘wash out’ as we move away from the centre, like in the photo. However, as we are still using a discrete spectrum, if we extended the image further we would again see a repeat of the dark fringes. As Fourier taught us, only a continuous spectrum will suppress all repeats of the pattern.

Now, to make the 12-wavelength image we have to decide on a wavelength-to-RGB mapping. Wavelength-to-RGB conversion is used to create the colour in the solar spectrum plot above. Exactly how this is done is somewhat subjective (an example is given here). Basically, we are free to play around with the map until we like the result (similar to how a photographer adjusts white balance in post-processing). Colour is more of an art than a science, which is perhaps why it is seldom covered in optics textbooks.
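To make the subjectivity concrete, here is one crude piecewise-linear map from wavelength in nanometres to RGB. It is in the spirit of, but not identical to, the example linked above; every break point and slope is a choice, not physics.

```python
def wavelength_to_rgb(wl):
    """Crude piecewise-linear map from wavelength (nm) to an (R, G, B) triple.
    The break points and slopes are subjective choices, not physics."""
    if 380 <= wl < 440:
        return ((440 - wl) / 60, 0.0, 1.0)   # violet
    if 440 <= wl < 490:
        return (0.0, (wl - 440) / 50, 1.0)   # blue to cyan
    if 490 <= wl < 510:
        return (0.0, 1.0, (510 - wl) / 20)   # cyan to green
    if 510 <= wl < 580:
        return ((wl - 510) / 70, 1.0, 0.0)   # green to yellow
    if 580 <= wl < 645:
        return (1.0, (645 - wl) / 65, 0.0)   # yellow to red
    if 645 <= wl <= 750:
        return (1.0, 0.0, 0.0)               # red
    return (0.0, 0.0, 0.0)                   # outside the visible range

print(wavelength_to_rgb(550))  # a green with a touch of red
print(wavelength_to_rgb(700))  # pure red
```

The rw, gw and bw weights in the code below play exactly this role, with an overall scaling chosen so the summed channels stay in range.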

Here is the python code to make the Newton’s rings pictures.

from numpy import linspace, sin, pi, size, dstack, meshgrid
from matplotlib.pyplot import figure, show

X, Y = meshgrid(linspace(-0.85*pi,0.85*pi,500), linspace(-0.85*pi,0.85*pi,500))
R=0*X; G=0*X; B=0*X #initialise RGB arrays

wavelengths=linspace(0.4, 0.675, 12) # define 12 wavelengths + RGB conversion
rw=[0.1, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.13, 0.2, 0.2, 0.05]
gw=[0.0, 0.0, 0.0, 0.067, 0.133, 0.2, 0.2, 0.1, 0.067, 0.0, 0.0, 0.0]
bw=[0.1, 0.15, 0.2, 0.133, 0.067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# comment out the next four lines to keep the 12-wavelength version
wavelengths=linspace(0.45, 0.6, 3) # define 3 wavelengths
rw=[0.0, 0.0, 0.5]
gw=[0.0, 0.5, 0.0]
bw=[0.5, 0.0, 0.0]

num=size(wavelengths)
for ii in range(0,num): #add intensity patterns into the RGB channels
    I=sin(0.5*(X*X+Y*Y)/wavelengths[ii])**2 # Newton's rings for one wavelength
    R=R+rw[ii]*I
    G=G+gw[ii]*I
    B=B+bw[ii]*I

RGB=dstack((R, G, B))

fig = figure(1,facecolor='white')
ax = fig.add_axes([0.05, 0.05, 0.9, 0.9])
ax.axison = False
ax.imshow(RGB)
show()

Exercise: how does the thickness vary with position in the two soap film pictures below?

