Optics Bootcamp

From Course Wiki
Revision as of 19:22, 3 February 2016 by Josephinebagnall (Talk | contribs)

20.309: Biological Instrumentation and Measurement


Overview

You are going to build a microscope next week. The goal of the Optics Bootcamp exercise is to make that task seem less intimidating. The bootcamp mixes a little bit of mens with a little bit of manus: it starts with a few written problems, followed by some exercises in the lab. The written problems will help you with the lab work. After you do the problems, come on by the lab. (Or do the problems in lab if you would like some help.) You will build an imaging apparatus from some of the same optical components you will use in the microscopy lab, including an LED illuminator, an object with precisely spaced markings, a lens, and a CCD camera. You will compare measurements of object distance, image distance, and magnification to the values predicted by the lens makers' formula that was covered in class.

Problems

Problem 1: Snell's law

A laser beam shines onto a rectangular piece of glass of thickness $ T $ at an angle $ \theta $ of 45° from the surface normal, as shown in the diagram below. The index of refraction of the glass, $ n_g $, is 1.41 ≈ √2. The index of refraction of air is 1.00.

Optics bootcamp snells law problem.png


(a) At what angle does the beam emerge from the back of the glass?

(b) When the beam emerges, in what direction (up or down) is it displaced?

Optional:

(c) By how much will the beam be displaced from its original axis of propagation?

Problem 2: chelonian size estimation

In the diagram below, an observer at height $ S $ above the surface of the water looks straight down at a turtle swimming in a pool. The turtle has length $ L $, height $ H $, and swims at depth $ D $.

Turtle problem.png

(a) Use ray tracing to locate the image of the turtle. Show your work.
(b) Is the image real or virtual?
(c) Is the image of the turtle deeper, shallower, or the same depth as its true depth, $ D $?
(d) Is the image of the turtle longer, shorter, or the same length as its true length, $ L $?
(e) Is the image of the turtle taller, squatter, or the same height as its true height, $ H $?

Problem 3: ray tracing with thin, ideal lenses

Lenses L1 and L2 have focal lengths of f1 = 1 cm and f2 = 2 cm. The distance between the two lenses is 7 cm. Assume that the lenses are thin. The diagram is drawn to scale. (The gridlines are spaced at 0.5 cm.) Note: Feel free to print out this diagram so you can trace the rays directly onto it. Or maybe use one of those fancy tablet thingies that the kids seem to like so much these days.

RayTracing1.png


(a) Use ray tracing to determine the location of the image. Indicate the location on the diagram.
(b) Is the image upright or inverted? Is the image real or virtual?
(c) What is the magnification of this system?

Optional:

(d) Lens L1 is made of BK7 glass with a refractive index n1 of 1.5. Lens L2 is made of fluorite glass with a refractive index n2 of 1.4. Compute the focal lengths of L1 and L2 if they are submerged in microscope oil (refractive index no = 1.5).

Problem 4: this will be really useful later

In the two-lens system shown in the figure below, the rectangle on the left represents an unspecified lens L1 of focal length $ f_1 $ separated by 0.5 cm from another lens L2 with focal length $ f_2 $ of 1 cm.

UnknownLens.png

Find the value of $ f_1 $ such that all rays entering the system parallel to the optical axis are focused at the observation plane, located at a distance d of 2 cm from L2.

Lab exercises

Welcome to the '309 Lab. Before you get started, take a little time to learn your way around. This page gives an overview of all the wonderful resources in the lab.

Lab exercise 1: measure focal length of lenses

Measure the focal length of the four lenses marked A, B, C, and D located near the lens measuring station.

Figure 1: Imaging apparatus with illuminator, object, lens, and CCD camera mounted with cage rods and optical posts.

Lab exercise 2: imaging with a lens

Figure 1 is a picture of the thing you are about to build. From left to right, the apparatus includes an illuminator, an object (a glass slide with a precision microruler pattern on it), an imaging lens, and a camera to capture images. All of the components are mounted in an optical cage made out of cage rods and cage plates. The cage plates are held up by optical posts inserted into post holders that are mounted on an optical table. You will be able to adjust the positions of the object and lens by sliding them along the cage rods.

Gather materials

You can spend a huge amount of time walking around the lab just gathering parts, so it makes sense to grab as many of them as possible in one trip. Figure 1 should give you some idea of what the parts look like.

The materials lists below include part numbers and descriptive names of all the components. It is likely that you will find some of the terms not-all-that-self-explanatory. Most of the parts are manufactured by a company called ThorLabs. If you have a question about any of the components, the ThorLabs website can be very helpful. For example, if the procedure calls for an SPW602 spanner wrench and you have no idea what such a thing might look like, try googling the term: "thorlabs SPW602". You will find your virtual self just a click or two away from a handsome photo and detailed specifications.

Screw sizes are specified as <diameter>-<thread pitch> x <length> <type>. The diameter specification is confusing. Diameters ¼" and larger are measured in fractional inches, whereas diameters smaller than ¼" are expressed as an integer number that is defined in the Unified Thread Standard. The thread pitch is measured in threads per inch, and the length of the screw is also measured in fractional inch units. So an example screw specification is: ¼-20 x 3/4. Watch this video to see how to use a screw gauge to measure screws. (There is a white, plastic screw gauge located near the screw bins.) The type tells you what kind of head the screw has on it. We mostly use stainless steel socket head cap screws (SHCS) and set screws. If you are unfamiliar with screw types, take a look at the main screw page on the McMaster-Carr website. Notice the useful about ... links on the left side of the page. Click these links for more information about screw sizes and attributes. This link will take you to an awesome chart of SHCS sizes.

Optomechanics Screws and Posts

located in plastic bins on top of the center parts cabinet:

  • 2 x 1" Lens tube (SM1L10)
  • 2 x Lens tube slip ring (SM1RC)
  • 2 x 2" Cage plate (LCP01, looks like an "O" in a square)
  • 4 x Cage plate adapter (LCP02, looks like an "X")
  • 2 x 2" Retaining rings (SM2RR)

located on the counter above the west drawers:

  • 3 x ER8 cage assembly rod (The last digit of the part number is the length in inches.)
  • 2 x 1" Retaining rings (SM1RR)

located on top of the west parts cabinet:

  • 3 x Post holders (PH2)
  • 3 x Optical posts (TR2)
  • 3 x Mounting base (BA1)
  • 3 x 8-32 set screws
  • 3 x ¼-20 x 5/16" socket head cap screws
  • 1 x ¼-20 set screw
  • 4 washers
  • 4 x ¼-20 x ½" socket head cap screws
Optics Other

located in the west drawers:

  • 1 x LA1951 plano-convex, f = 25 mm lens (this will be used as a condenser for your illuminator)
  • 1 x LB1811 biconvex, f = 35 mm lens (this will be used to form an image of your object)

located on top of the east cabinet

  • 1 x ND filter
  • 1 x red, super-bright LED (mounted in heatsink)
  • Microruler calibration slide mounted to a cage plate adapter

Most of the tools you will need are located in the drawers next to your lab station. Hex keys (also called Allen wrenches) are used to operate SHCSs. Some hex keys have a flat end and others have a ball on the end, called balldrivers. The ball makes it possible to use the driver at an angle to the screw axis, which is very useful in tight spaces. You can get things tighter (and tight things looser) with a flat driver. Here is a list of the tools you will need:

  • 1 x 3/16 hex balldriver for 1/4-20 cap screws
  • 1 x 9/64 hex balldriver
  • 1 x 0.050" hex balldriver for 4-40 set screws (tiny)
  • 1 x SPW602 spanner wrench

You will also need to use an adjustable spanner wrench. The adjustable spanner resides at the lens cleaning station. There are only one or two of these in the lab. It is likely that one of your classmates neglected to return it to the proper place. This situation can frequently be remedied by yelling, "who has the adjustable spanner wrench?" at the top of your lungs. Try not to use any expletives. And please return the adjustable spanner wrench to the lens cleaning station when you are done.

  • 1 x SPW801 adjustable spanner wrench
Things that should already be (and stay at) your lab station
  • 1 x Manta CCD camera
  • 1 x Calrad 45-601 power adapter for CCD
  • 1 x ethernet cable connected to the lab station computer

Build the apparatus

Use the image of the apparatus to think about how to put your system together. The following guidelines should help get you started. If at any point you have questions, do not hesitate to ask an instructor for help.

140729 OpticsBootcamp 05.jpg 140729 OpticsBootcamp 07.jpg Mount the LED light source in a cage plate
  • In the LCP01 cage plate, the LED gets sandwiched between two SM2RR retaining rings. First, screw in one SM2RR only about 1 mm deep.
  • Next, place the LED on top of it.
  • Finally, tighten down the second SM2RR using the SPW801 adjustable spanner wrench. Open the SPW801 until the spacing of its tips matches the separation between the notches of the SM2RR.
AttachBA1.JPG MountedLEDonTable.JPG Mount the LED on a base

Optical posts and post holders are used to mount components to an optical table (or breadboard) and position them at a certain height. Use a post to secure the cage plate that holds the LED.

  • Affix a TR2 optical post to the LCP01 cage plate (holding the LED) using an 8-32 set screw.
  • Use a ¼-20x¼" SHCS to connect a BA1 mounting base to a PH2 post holder.
  • Use one or two washers and one or two ¼-20 x ½" cap screws to secure the base to the table. (One fastener is fine.)
  • Insert the post into the holder and tighten the thumbscrew to fix its height.
LensInLensTube.JPG LensTubeLCP02.JPG Mount a lens in a lens tube

We typically mount our lenses in lens tubes so that we can easily add them to or remove them from cage plates in our optical systems.

  • Carefully place the 25 mm plano-convex lens into a 1" lens tube with the hemisphere facing up. (Use lens paper to protect the lens surface.)
  • Carefully tighten down a retaining ring (SM1RR) using the adjustable spanner (SPW801) so as not to scratch the lens. (The red SPW602 spanner will likely scratch the 25 mm lens.)
  • Screw this lens tube into an LCP02 cage plate adapter.
  • Mount the 35 mm lens in another 1" lens tube; for this one, you may use the red SPW602 spanner to tighten the retaining ring.
TightenSetScrew.JPG Set up the cage rods

We will use cage rods to frame our optical system so that we can adjust the position of cage plates by sliding them along the rods. There are 4-40 set screws that you can tighten to set the cage plate positions. Cage rods also help us to make sure our optical components are aligned to one another.

  • Place three cage rods (ER8) through the holes in the LCP01 cage plate holding the LED. We will use three cage rods instead of four; when we use 1" cage plates, the open side will let us switch out lens tubes easily.
  • Once the cage rods are flush against the back of the LCP01 cage plate, set their position by tightening the 4-40 set screws (see picture).
  • Now you can slide on your other cage plates (25mm lens, object, 35mm lens)
  • Secure the open end of your cage rods with another LCP01 cage plate mounted to an optical post, and placed into a post holder. Mount that post holder to the table, as done previously with the LED.
  • Add an additional LCP02 cage plate adapter at the end as a mount for the camera. Tighten the set screws of the final two cage plates.
  • Note: you may extend the ER8 rods by screwing on additional cage rods. Just make sure that the cage plates can slide across them.
140729 OpticsBootcamp 17.jpg Mount the CCD camera (if it isn't already mounted)
  • Affix a TR2 optical post to the CCD camera's mounting plate using a ¼-20 set screw.
  • Place optical post into post holder with BA1 mounting base.
  • Connect the CCD to the computer using a red ethernet cable.
  • Power up the CCD using the Calrad 45-601 power adapter.
  • Adjust the heights of the components so they are aligned by repositioning the optical posts in their post holders.
140730 OpticsBootcamp 1.jpg 140730 OpticsBootcamp 2.jpg Power the LED light source
  • Turn the power supply on.
  • Make sure the power supply is not enabled (green LED below the OUTPUT button is not lit).
  • Use the righthand set of knobs to set the current and voltage
    • Adjust the CH1/MASTER VOLTAGE knob so the display reads about 5 Volts.
    • Adjust the CH1/MASTER CURRENT knob so the display reads 0.02 Amps.
    • IMPORTANT: Never set the CURRENT to a value greater than 0.5A, as this will burn out the LED.
  • Connect the + (red) terminal of channel CH1 on the power supply to the red wire of the LED.
  • Connect the - (black) terminal of channel CH1 on the power supply to the black wire of the LED.
  • Press the OUTPUT button to enable the power supply and light the LED.
  • Adjust the LED brightness using the power supply's CURRENT knob.

Visualize, capture, and save images in Matlab

20.309 130909 ImagingWithLens.png

Now that you've learned the basics of mounting, aligning, and adjusting optical components, you will work through this lab exercise to:

  • Verify the lens maker and the magnification formulae:
$ {1 \over S_o} + {1 \over S_i} = {1 \over f} $
$ M = {h_i \over h_o} = {S_i \over S_o} $
  • Become familiar with image acquisition and distance measurement using the Matlab software.
Imaqtool.PNG
  • Log on to the computer, launch Matlab, and run imaqtool.
  • Select the Manta_G-032B(gentl-1) Mono 12" hardware in the left bar.
  • The Start Preview button will bring up a window of the live image from the CCD camera.
  • Move the lens and the micrometer calibration object to produce a focused image.
    • Start with the object at the 2$ f $ position, i.e., 70 mm from the lens.
    • Capture additional images with the object placed both closer than and farther than 2$ f $ from the lens, in roughly equal numbers.
  • Under the Device Properties tab, optimize the Exposure Time Abs field for good contrast without pixel saturation.
  • You may need to reduce the current in your LED and/or add an ND filter after your light source to decrease its intensity.
  • Measure the distance $ S_o $ from the target object to the lens and the distance $ S_i $ from the lens to the CCD active imaging plane.
    • Does the lens maker formula $ {1 \over S_o} + {1 \over S_i} = {1 \over f} $ apply as it should when the image focus is optimized?
140730 Matlab 02.png
  • Save images in Matlab:
    • Make sure the number of Frames Per Trigger is set to 1 in the General tab of the Acquisition Parameters;
    • Use the Start Acquisition and Export Data buttons;
    • Navigate to the CourseMaterials\StudentData\Spring 2016\ directory accessible from the computer desktop to save your data files remotely on a server you'll be able to browse from your home computer.
      • Recall from one of the initial Stellar announcements that you must use your kerberos ID, preceded by win\, as your username. For example, Professor Nagle would enter "win\sfnagle". Use your kerberos password as well. Remember to disconnect the mapped drive when you are done at your lab station, or log out of the Windows session entirely.
    • The file extension will be .MAT (e.g. object_01.mat), although this extension will not be visible in the Windows Explorer. The variable within this file (e.g. im01) will represent the image as a 492x656 matrix of 16-bit integers.

Examine images in Matlab

140730 Matlab 03.png
  • To display the image in Matlab, use the imshow command:
    • In Matlab, open your saved image file ('object_01.mat') from the CourseMaterials\StudentData\Spring 2016\ directory.
    • Its contents 'im01' now appear in your workspace.
    • When the 12-bit numbers from the camera get transferred to the computer, they are converted to 16-bit numbers. 16-bit numbers can represent a range of values from 0-65535. This leaves a considerable portion of the number range unoccupied. Because of this, if you type imshow( im01 ), you will see an image that looks almost completely black.
    • Adjust the image to fill the full range by typing imshow( 16.0037 * im01 ).
Note: 16.0037 equals 65535 / 4095. This factor maps values in the range 0-4095 to 0-65535.
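A quick check of that factor, sketched in Python/NumPy for illustration (the pixel values below are made up, not from a real capture):

```python
import numpy as np

# The camera's 12-bit values span 0-4095; a 16-bit image spans 0-65535.
scale = 65535 / 4095
print(round(scale, 4))  # 16.0037

# Multiplying stretches hypothetical 12-bit pixel values to the 16-bit range.
im = np.array([0, 2048, 4095], dtype=np.uint16)
stretched = np.round(scale * im).astype(np.uint16)  # 0, ~32776, 65535
```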
So 70mm imdistline.PNG
  • Determine the distance (in pixels) between two specific points in the image:
    • Either use the Data Cursor Tool to read the X and Y coordinates of your mouse pointer, then compute the distance with a little trigonometry;
    • or type imdistline on the console to make a very useful measuring tool appear on the image (recommended);
    • or use the interactive improfile function from the Matlab command window, which lets you trace a segment across the active figure (visualized as a dotted line) and generates a plot of pixel intensity vs. pixel position along the segment in a new figure.
    • This manipulation allows you to calculate the image size $ h_i $, taking into account the CCD pixel size: 7.4 μm x 7.4 μm.
  • Confirm the corresponding object size $ h_o $:
    • Small tick marks are spaced 10 μm apart.
    • Larger tick marks are spaced 100 μm apart.
  • Do both magnification relationships $ M = {h_i \over h_o} = {S_i \over S_o} $ match?
140730 Matlab 05.png
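To turn a pixel distance into a magnification, combine the CCD pixel pitch with the microruler tick spacing. A minimal sketch (the 27-pixel span below is a made-up example, not a measurement):

```python
# Convert a distance measured in pixels (e.g., with imdistline) into a
# physical size on the CCD, using the Manta's 7.4 um pixel pitch.
PIXEL_PITCH_UM = 7.4

def image_size_um(distance_px):
    """Physical size on the sensor corresponding to a pixel distance."""
    return distance_px * PIXEL_PITCH_UM

# Suppose one large tick interval (h_o = 100 um on the microruler) spans
# 27 pixels in the captured image:
h_i = image_size_um(27)  # image size in um
h_o = 100.0              # object size in um
M = h_i / h_o            # magnification from image and object heights
print(M)
```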

Plot and discuss your results

  • Repeat these measurements of $ S_o $, $ S_i $, $ h_o $, and $ h_i $ for several values of $ S_o $.
  • Plot $ {1 \over S_i} $ as a function of $ {1 \over f} - {1 \over S_o} $.
  • Plot $ {h_i \over h_o} $ as a function of $ {S_i \over S_o} $.
  • What sources of error affect your measurements?
  • Given the sources of error, how far off could your measurements of magnification be?

Don't take your apparatus apart just yet. You will use it in the next section.
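Before moving on, it can help to sanity-check your measured $ S_o $, $ S_i $ pairs against the predicted values. Here is a minimal sketch (Python for illustration; the object distances are example values, and f = 35 mm is the LB1811 imaging lens from the materials list):

```python
# Predicted image distance and magnification from the lens maker formula
# 1/S_o + 1/S_i = 1/f and M = S_i / S_o. All distances in millimeters.

def image_distance(s_o, f):
    """Solve 1/S_o + 1/S_i = 1/f for S_i."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def magnification(s_o, f):
    """M = S_i / S_o for an object in focus at distance S_o."""
    return image_distance(s_o, f) / s_o

f = 35.0  # mm, the LB1811 biconvex imaging lens
for s_o in (70.0, 105.0, 52.5):  # example object distances (2f, 3f, 1.5f)
    s_i = image_distance(s_o, f)
    print(f"S_o = {s_o} mm -> S_i = {s_i:.1f} mm, M = {s_i / s_o:.2f}")
```

Note the symmetry: at $ S_o = 2f $ the magnification is 1, and the 1.5f and 3f positions simply swap the roles of $ S_o $ and $ S_i $.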

Lab exercise 3: noise in images

Almost all measurements of the physical world suffer from some kind of noise. Capturing an image involves measuring light intensity at numerous locations in space. The Manta G-032 CCD cameras in the lab measure about 320,000 unique locations for each image. Every one of those measurements is subject to noise. In this part of the lab, you will quantify the random noise in images made with the Manta cameras, which are the same ones that you will use in the microscopy lab.

Figure 2: Noise measurement experiment. The cameras in the lab produce images with 656 horizontal by 492 vertical picture elements, or pixels. At regular intervals, the camera measures the intensity of light falling on each pixel and returns an array of pixel values $ P_{x,y}[t] $. The pixel values are in units of analog-to-digital units (ADU).

So what is noise in an image? Imagine that you pointed a camera at a perfectly static scene in which nothing at all is changing. Then you made a movie of, say, 100 frames without moving the camera or anything in the scene, and without changing the lighting at all. In this ideal scenario, you might expect that every frame of the movie would be exactly the same as all the others. Figure 2 depicts the dataset generated by this thought experiment as a mathematical function $ P_{x,y}[t] $. If there were no noise at all, the numerical value of each pixel would be the same in every frame:

$ P_{x,y}[t]=P_{x,y}[0] $,

where $ P_{x,y}[t] $ is the pixel value reported by the camera at pixel $ x,y $ in the frame that was captured at time $ t $. The square braces indicate that $ P_{x,y} $ is a discrete-time function. It is only defined at certain values of time $ t=n\tau $, where $ n $ is the frame number and $ \tau $ is the interval between frame captures. $ \tau $ is equal to the inverse of the frame rate, which is the frequency at which the images were captured.

You probably can guess that IRL, the frames will not be perfectly identical. We will talk in class about why this is so. For now, let's just measure the phenomenon and see what we get. A good way to make the measurement is to go ahead and actually do our thought experiment: make a 100 frame movie of a static scene and then see how much each pixel varies over the course of the movie. Any variation in a particular pixel's value over time must be caused by random noise of one kind or another. Simple enough.

(An alternate way to do this experiment would be to simultaneously capture the same image in 100 identical, parallel universes. This will obviously reduce the time needed to acquire the data. You are welcome to use this alternative approach in the lab.)

Figure 3: Pixel variance versus mean mystery plot. Can you stand the suspense?

We need a quantitative measure of noise. Variance is a good, simple metric that specifies exactly how unsteady a quantity is, so let's use that. In case it's been a while, variance is defined as $ \operatorname{Var}(P)=\mathbf{E}\left((P-\bar{P})^2\right) $.
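If it has been a while, here is the definition applied to a small set of made-up numbers:

```python
# Var(P) = E((P - mean(P))^2), computed directly from the definition.
P = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # arbitrary sample values
mean_P = sum(P) / len(P)
var_P = sum((p - mean_P) ** 2 for p in P) / len(P)
print(mean_P, var_P)  # 5.0 4.0
```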

So here's the plan:

  • Point your camera at a static scene that has a range of light intensities from bright to dark.
  • Make a movie.
  • Compute the variance of each pixel over time.
  • Make a scatter plot of each pixel's variance on the vertical axis versus its mean value on the horizontal axis, as shown in Figure 3.

Plotting the data this way will reveal whether or not the quantity of noise depends on intensity. With zero noise, every pixel's variance would be zero, so the plot would be a flat line along the horizontal axis. But you know that's not going to happen. How do you think the plot will look?
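To see the mechanics of the computation before touching the camera, you can run the plan on synthetic data. The sketch below (Python/NumPy for illustration; the in-lab analysis uses the MATLAB snippets in this section) films a fake static scene with purely additive Gaussian noise of known variance. The scene, noise level, and frame count are all made up, and real camera noise need not behave this way; that is exactly what the experiment will tell you.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical static scene: a smooth intensity gradient (shrunk from the
# camera's 492 x 656 pixels to keep the demo fast).
scene = np.tile(np.linspace(100.0, 2000.0, 64), (48, 1))

# "Film" 100 frames of the unchanging scene, adding Gaussian noise with a
# known standard deviation so the recovered variance can be checked.
sigma = 5.0
movie = scene[:, :, None] + rng.normal(0.0, sigma, scene.shape + (100,))

# Per-pixel statistics over time (axis=2 indexes frames).
pixel_mean = movie.mean(axis=2)
pixel_variance = movie.var(axis=2)

print(round(pixel_variance.mean(), 1))  # close to sigma**2 = 25
```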

Set up the scene and adjust the exposure time

You can make the measurement with the same setup you put together for the previous lab exercise. Follow the detailed procedure below:

Figure 4: Example intensity histogram with approximately uniform distribution of pixel values over the range 10-2000 ADU.
  1. Set up the scene.
    1. Slide the lens as close to the camera as it gets.
    2. Slide the microruler slide to produce an in-focus image.
    3. Use the CURRENT knob on the power supply to set the LED current to 0.2 A. Make sure you have an ND filter in your illuminator.
    4. In the Device Properties tab, set the Exposure Time Abs property to 100.
  2. Check the exposure.
    1. Press the Start Acquisition button in the Acquire pane of the Preview window.
    2. Press the Export Data... button.
    3. Select MATLAB Workspace in the Data Destination popup menu and type exposureTest in the Variable Name edit box.
    4. Switch to the MATLAB console window. (Press alt-tab until the console appears.)
    5. Plot a histogram of your data on log-log axes (use the MATLAB code below).
  3. Use your histogram to obtain the correct exposure and current settings.
    1. Your histogram should show a roughly uniform distribution of pixel values between about 10 and 2000. Figure 4 shows a nice example.
    2. If necessary, change your camera exposure or LED current setting to get a good distribution of values.

MATLAB code for plotting a histogram:

[ counts, bins ] = hist( double( squeeze( exposureTest(:) ) ), 100);
loglog( bins, counts, 'LineWidth', 3 )
xlabel( 'Intensity (ADU)' )
ylabel( 'Counts' )
title( 'Image Intensity Histogram' )

Acquire movie and plot results

Once you are set up correctly, make a movie and plot the results using the procedure below:

  1. Capture a 100 frame movie.
    1. Go to the General tab of the Acquisition Parameters pane and change the Frames per trigger property from 1 to 100.
    2. Press Start Acquisition.
    3. Press the Export Data... button.
    4. Select MATLAB Workspace in the Data Destination popup menu and type noiseMovie in the Variable Name edit box.
  2. Plot pixel variance versus mean.
    1. Switch to the MATLAB console window. (Press alt-tab until the console appears.)
    2. Use the code below to make your plot.
pixelMean = mean( double( squeeze( noiseMovie) ), 3 );
pixelVariance = var( double( squeeze( noiseMovie) ), 0, 3 );
[ counts, binValues, binIndexes ] = histcounts( pixelMean(:), 250 );
binnedVariances = accumarray( binIndexes(:), pixelVariance(:), [], @mean );
binnedMeans = accumarray( binIndexes(:), pixelMean(:), [], @mean );
loglog( pixelMean(:), pixelVariance(:), 'x' );
hold on
loglog( binnedMeans, binnedVariances, 'LineWidth', 3 )
xlabel( 'Mean (ADU)' )
ylabel( 'Variance (ADU^2)')
title( 'Image Noise Versus Intensity' )

Questions

  • Did the plot look the way you expected?
  • How does noise vary as a function of light intensity?
  • Include the plot in your writeup.