# Experimental tests of diffraction theory

In our undergraduate laboratory teaching we encourage students to think about their data and how to communicate what it means. Some years ago we introduced a prize for graphical excellence named after Florence Nightingale, an early pioneer in the creative presentation of data. The gold standard in the scientific method often means experiment and theory on the same graph. It is odd, then, that many optics textbooks show plenty of theory curves and photos of interference or diffraction patterns, but rarely both on the same graph.

This article discusses a simple example where we take a photo of a diffraction pattern and compare it to the prediction of Fraunhofer diffraction theory (see p. 85 in Optics f2f). All you need is a light source, a diffracting obstacle, and a camera. However, the trade-off is that to make the theory easier you need a more ‘idealised’ experiment.

First, we show a photo of the far-field diffraction pattern recorded on a camera when an obstacle consisting of five slits is illuminated by a laser. The experiment was performed by Sarah Bunton, an Ogden Trust intern at Durham, in 2016. The image is in black and white and saved as a .png file. The image is read using the Python imread command and ‘replotted’ with axes.

from numpy import shape
from matplotlib.pyplot import close, figure, show, imread, cm

img = imread('diffraction.png')  # replace with the path to your own image

ypix, xpix = shape(img)  # image size in pixels
print(ypix, xpix)

close("all")
fig = figure(1, facecolor='white')
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.axison = True
ax.imshow(img, cmap=cm.gray)
show()

The code outputs the image size in terms of horizontal (x) and vertical (y) pixels, and allows us to identify a region of interest (rows 500 to 600, say). Now we want to analyse the data. First, for illustration purposes only, we replot the region as a colormap and reduce noise by binning the pixel data into superpixels, each made up of 4×4 real pixels.
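The reshape-and-mean binning trick used in the listing below is worth checking on a tiny array first. This is a minimal sketch with a made-up 8×8 test image (not data from the experiment): reshaping to (rows//4, 4, cols//4, 4) groups the pixels into 4×4 blocks, and averaging over the two size-4 axes leaves one value per block.

```python
import numpy as np

# Hypothetical 8x8 test image: pixel values 0..63 in reading order.
img = np.arange(64, dtype=float).reshape(8, 8)

# Group into 4x4 blocks, then average within each block:
# axis 3 averages over columns inside a block, axis 1 over rows.
small = img.reshape(8 // 4, 4, 8 // 4, 4).mean(3).mean(1)

print(small.shape)  # one superpixel per 4x4 block: (2, 2)
print(small)
```

Each superpixel is simply the mean of its 16 constituent pixels, so noise that is uncorrelated between pixels is reduced by a factor of four.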

from numpy import shape, arange, sin, pi
from matplotlib import rc
rc('font', **{'family': 'serif', 'serif': ['Times New Roman']})
from matplotlib.pyplot import close, figure, show, imread, xlabel, ylabel, rcParams, cm

img = imread('diffraction.png')  # replace with the path to your own image

fs = 32
params = {'axes.labelsize': fs, 'xtick.labelsize': fs, 'ytick.labelsize': fs, 'figure.figsize': (12.8, 10.24)}
rcParams.update(params)

ypix, xpix = shape(img)  # find size of image

close("all")
fig = figure(1, facecolor='white')
ax = fig.add_axes([0.15, 0.3, 0.8, 1.0])
ax.axison = False  # start with True to find the region of interest
ax.imshow(img[500:600, :], cmap=cm.gray)  # show region

small = img.reshape([ypix//4, 4, xpix//4, 4]).mean(3).mean(1)  # create 4x4 superpixels
ax = fig.add_axes([0.15, 0.875, 0.8, 0.1])
ax.axison = False
ax.pcolormesh(small)  # plot image of superpixels
ax.set_xlim(0, xpix//4)
ax.set_ylim(138-6, 138+6)

cut = 557  # select horizontal line to compare to theory
data = img[cut, :]
data = data**1.5  # undo the camera's gamma correction

ax = fig.add_axes([0.15, 0.1, 0.8, 0.6])
ax.axison = True
p = ax.plot(data, '.', markersize=10, color=[0.2, 0.2, 0.2], alpha=0.5)
ax.set_xlim(0, xpix)
ax.set_ylim(-0.05, 1.05)

x = arange(0.0001, xpix, 1.0)  # x values for theory line; offset avoids 0/0 at the centre
xoff = 780  # centre of the pattern in pixels
xd = x - xoff
flamd = 252/pi  # slit-separation scale in pixels, fitted by eye
env = 3.5*flamd  # single-slit envelope scale
y = 1.0*sin(xd/env)/(xd/env)*sin(5*xd/flamd)/(5*sin(xd/flamd))  # N=5 slit amplitude
y = y*y  # intensity
p = ax.plot(x, y, linewidth=2, color=[0.0, 0.0, 0.0], alpha=0.75)
xlabel('Position in the $x$ direction')
ylabel(r'${\cal I}/(N^2{\cal I}_0a^2/\lambda z)$')
ax.set_xticks([xoff - pi*flamd, xoff, xoff + pi*flamd])
ax.set_xticklabels([r'$-(\lambda/d)z$', '0', r'$(\lambda/d)z$'])

show()

Second, we select a row through the intensity maximum so that we can compare the intensity along the horizontal axis with the prediction of theory. For the theory curve, we can measure the distances in the experiment (slit width, separation, distance to detector, pixel size and laser wavelength) or we can fit the data. Here we have only done a ‘by-eye’ fit. One thing you might notice in the code is that we need to rescale the data to undo the gamma correction applied by the camera. Some cameras allow you to turn this off; otherwise you need to correct for it in post-processing.
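The gamma step can be illustrated on its own. This is a hedged sketch with made-up pixel values: the camera stores something close to intensity raised to a power 1/gamma, so raising the recorded values back to the power gamma recovers a quantity proportional to the true intensity. The exponent 1.5 is the by-eye value used in the listing above, not a universal constant.

```python
import numpy as np

gamma = 1.5  # decoding exponent, matched by eye to this camera

# Hypothetical recorded pixel values, normalised to the range 0..1.
recorded = np.array([0.0, 0.25, 0.5, 1.0])

# Undo the camera's gamma encoding: I_linear ~ I_recorded**gamma.
linear = recorded**gamma

print(linear)
```

Note that the endpoints 0 and 1 are unchanged; only the mid-range values are pushed down, which is exactly what brings the weak secondary maxima into line with the theory curve.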

It is worth mentioning that the imread command is very versatile and can read many file formats, but if the image is colour then the data array may have three or four layers (RGB plus transparency) as well as horizontal and vertical position. Here is an example where we read in a jpg, direct from a camera, of Young’s double-slit experiment using sunlight (more on that later). This is much more difficult to analyse unless we know the exact spectral response of the RGB filters used in the camera, but you can still see some fun stuff, like red being diffracted more than blue!
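Before diving into the full listing, here is a minimal sketch (using a synthetic array in place of imread’s output, so the helper name and shapes are illustrative only) of how the number of layers depends on the file type: a greyscale image gives a 2-D array, an RGB jpg gives shape (ypix, xpix, 3), and a png with transparency gives (ypix, xpix, 4).

```python
import numpy as np

def split_channels(img):
    """Return a list of 2-D channel arrays, whatever imread handed back."""
    if img.ndim == 2:
        return [img]  # greyscale: the array itself is the only channel
    # Colour: one 2-D slice per layer (RGB, or RGBA with transparency).
    return [img[:, :, k] for k in range(img.shape[2])]

grey = np.zeros((10, 20))      # stand-in for a greyscale image
rgba = np.zeros((10, 20, 4))   # stand-in for a colour image with transparency

print(len(split_channels(grey)))  # 1
print(len(split_channels(rgba)))  # 4
```

Checking `ndim` like this before indexing avoids the common mistake of treating a greyscale array as if it had a channel axis.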

from numpy import shape
from matplotlib import rc
rc('font', **{'family': 'serif', 'serif': ['Times New Roman']})
from matplotlib.pyplot import close, figure, show, imread, rcParams

img = imread('double_slit.jpg')  # replace with the path to your own image

fs = 32
params = {'axes.labelsize': fs, 'xtick.labelsize': fs, 'ytick.labelsize': fs, 'figure.figsize': (12.8, 10.24)}
rcParams.update(params)

ypix, xpix, dim = shape(img)  # colour images have a third (channel) dimension

close("all")
fig = figure(1, facecolor='white')
ax = fig.add_axes([0.15, 0.3, 0.8, 1.0])
ax.imshow(img)
ax.axison = False

cut = 40  # row through the upper pattern
data1 = img[cut, :, 0].astype(float) - 25  # red channel; cast to float so the background subtraction cannot wrap the uint8 data
data2 = img[cut, :, 1].astype(float) - 10  # green channel, background offset removed
data3 = img[cut, :, 2].astype(float)       # blue channel

ms=10

ax = fig.add_axes([0.15, 0.35, 0.8, 0.25])
ax.axison = True
p=ax.plot(data1,'.',markersize=ms,color=[0.5,0.0,0.0],alpha=0.5)
p=ax.plot(data2,'.',markersize=ms,color=[0.0,0.5,0.0],alpha=0.5)
p=ax.plot(data3,'.',markersize=ms,color=[0.0,0.0,0.5],alpha=0.5)
p=ax.plot(data1,color=[0.5,0.0,0.0],alpha=0.5)
p=ax.plot(data2,color=[0.0,0.5,0.0],alpha=0.5)
p=ax.plot(data3,color=[0.0,0.0,0.5],alpha=0.5)
ax.set_xlim(0,xpix)
ax.set_ylim(-10,265)
ax.set_xticklabels([])

ax = fig.add_axes([0.15, 0.075, 0.8, 0.25])
ax.axison = True

cut = 65  # row through the lower pattern
data1 = img[cut, :, 0].astype(float)  # red channel
data2 = img[cut, :, 1].astype(float)  # green channel
data3 = img[cut, :, 2].astype(float)  # blue channel
p=ax.plot(data1,'.',markersize=ms,color=[0.5,0.0,0.0],alpha=0.5)
p=ax.plot(data2,'.',markersize=ms,color=[0.0,0.5,0.0],alpha=0.5)
p=ax.plot(data3,'.',markersize=ms,color=[0.0,0.0,0.5],alpha=0.5)
p=ax.plot(data1,color=[0.5,0.0,0.0],alpha=0.5)
p=ax.plot(data2,color=[0.0,0.5,0.0],alpha=0.5)
p=ax.plot(data3,color=[0.0,0.0,0.5],alpha=0.5)
ax.set_xlim(0,xpix)
ax.set_ylim(-10,275)

show()

Finally, here is an example for Biot’s sugar experiment, where we see the effect of the overlap between the RGB channels; e.g. the scattered blue laser light even shows up in the red channel. The unsaturated data on the right (obtained through the horizontal polarizer) gives a better indication of the relative light levels.