User:Darrell Bonn/Notebook/307L Lab book/lab 5 Summary


Speed of Light Lab

  • Lab Partner: Boleszek
  • Beginning with lab procedure number 10 in Gold's [lab manual]

Summary

We set out to measure the speed of light using a pulsed LED as the light source and a photomultiplier tube (PMT) as the receiver, which produced a voltage signal in response to each incoming light pulse. The trigger pulse from the LED and the resulting pulse from the PMT were fed together into a Time-to-Amplitude Converter (TAC), which converts the delay between the two pulses into a proportional voltage that we then read on an oscilloscope. This gives a direct measurement of the time delay between the LED pulse and its reception at the PMT. By varying the distance between the LED and the PMT, the time light takes to travel that distance can be measured directly.
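
As a minimal sketch of the arithmetic involved, in Matlab (the numbers below are assumed for illustration only, not our data):

  % Sketch of the basic calculation (assumed numbers, not our data):
  % the TAC output is proportional to the LED-to-PMT delay, so a
  % calibration slope k (volts per nanosecond) converts volts to time.
  k  = 0.10;             % TAC calibration, V/ns (assumed)
  dV = 0.50;             % change in TAC voltage between two positions, V
  dx = 1.5;              % change in LED-PMT separation, m
  dt = (dV / k) * 1e-9;  % time delay, s
  c  = dx / dt           % speed estimate, m/s (3.0e8 for these numbers)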


There is one important measurement trick to acquiring data with this setup. The TAC is very susceptible to varying voltage levels on its inputs; its output voltage is a direct function of the amplitudes of the pulses it is measuring. Since one of these inputs is the PMT output, its level varies greatly with the PMT's distance from the LED. To compensate, there is a polarizing filter at the input to the PMT, so rotating the PMT in its stand provides a crude output level control. By splitting off a sample of the PMT signal and feeding it into the oscilloscope, it is a fairly straightforward task to keep the output of the PMT roughly level.


The first data required was for calibration of the TAC. This was accomplished by running the PMT output through a series of delay cables and measuring the resulting voltage offset, which gives the calibration in volts per nanosecond. Calibration data were acquired twice on each day of the lab, once at the start and once at the finish. Distance data were then acquired in volts versus meters, and the two together were used to calculate the speed of light (see the calibration sketch below).

Our first experiments with the equipment suggested that two error sources would most likely dominate: quantization noise from the digital oscilloscope, and operator error in getting everything set exactly right for each measurement. With that in mind we planned to acquire two basic sets of distance data, one emphasizing small, regular changes and the other repeated large changes. For the first set, a zero point is established and data are acquired in 10 cm steps over a 1.5 m range; this was to be repeated 5 times with the zero point shifted by about 5 cm each time. The second set would consist of up to 10 data points taken only at a larger separation of 1 to 1.5 m. We expected the larger steps to provide greater accuracy relative to the operator error, and the large number of smaller measurements (oversampling) to overcome the errors inherent in our instruments.

Unfortunately, due to our inexperience with the wiki editing process, we weren't aware that we had been automatically logged out, so our changes to our wiki page were not accepted and our final save lost most of our data. What remained was only the first two of the data sets and our four calibrations. A data set from the first day was also already recorded in the analysis Matlab file I had begun, and that is used here as well.
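
A minimal sketch of the calibration fit, with assumed delay-cable values and TAC readings standing in for the real data (the actual analysis is in the attached Matlab file):

  % Sketch of the TAC calibration step (assumed values for illustration):
  % fit measured TAC voltage against known delay-cable delays to get
  % the slope in volts per nanosecond.
  delay_ns = [0 4 8 16 32];               % delay-cable delays, ns (assumed)
  V_cal    = [0.02 0.42 0.83 1.61 3.22];  % TAC readings, V (assumed)
  p = polyfit(delay_ns, V_cal, 1);        % linear fit: V = p(1)*t + p(2)
  k = p(1)                                % calibration slope, V/ns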


Once the data were acquired, the speed of light was calculated directly from them. As each data set is measured from a relative starting point, the speed of light is calculated independently for each point, and all of these values were then averaged together for the final result.

SJK 01:38, 20 October 2008 (EDT): I don't understand this special treatment of the first data point. You should have the same amount of error on that point as you do all the rest. As I ask below, why not a linear fit?
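
A sketch of this per-point calculation, assuming hypothetical vectors x (LED-PMT separations, m) and V (TAC voltages, V) and the calibration slope k from the calibration fit above:

  % Sketch of the per-point analysis described above (hypothetical names):
  % each TAC reading V(i) is compared with the zero-point reading V(1),
  % converted to a time delay with the calibration slope k, and a speed
  % is computed independently for each point before averaging.
  dt    = (V(2:end) - V(1)) / k * 1e-9;  % delays relative to point 1, s
  dx    = x(2:end) - x(1);               % distances relative to point 1, m
  c_all = dx ./ dt;                      % one speed estimate per point
  c_avg = mean(c_all)                    % averaged final value, m/s
  c_unc = std(c_all)                     % spread quoted as the uncertainty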


Speed of light = 2.93(47) × 10^8 m/s

SJK 01:45, 20 October 2008 (EDT): Good job presenting the result with uncertainty and units. I do think you used "standard deviation", though you are implying a "standard error of the mean" by writing it that way. Now the next step: what does it mean? Isn't it worth mentioning what the accepted value of the speed of light is, and then comparing with your value to see if it's consistent? And you have no comments on how close it is or anything about how to do better. I guess you were still broken-hearted about losing the data?

Further comments on data and analysis

All calculations were performed in the attached Matlab file [1].

The four time (calibration) data sets and three range data sets are shown below, along with the final speed of light calculation data.

Image:solScatterCal.jpg Image:solScatterData.jpg

SJK 01:32, 20 October 2008 (EDT): OK, so it took me a while to figure out what you were doing. But it seems to me that what you did was take each data point (i=2 and above) and compare it with the first, calculating a time delay and distance, and thus speed. Then you average all of these speeds. I tried to look at it on paper a bit, and I think what this does is weight the data points non-optimally...I think it weights the second data point more than the 3rd, which is weighted more than the 4th, etc. (actually I'm not sure...now I think it accentuates the error of your first point). I could be wrong, but in any case, why not a linear fit? We definitely have looked at this a couple times by now and linear regression is built into Matlab. I think if you develop your own algorithm (which you have to do sometimes), then you need to explain it and justify its use.
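
For reference, a sketch of the linear-fit alternative SJK suggests, using the same hypothetical names as above:

  % Sketch of the linear-fit alternative (hypothetical names): fit TAC
  % voltage against distance; the fitted slope p(1) is in V/m, so dividing
  % the calibration slope k (V/ns) by it gives the speed in m/ns.
  p = polyfit(x, V, 1);     % linear fit: V = p(1)*x + p(2)
  c_fit = k / p(1) * 1e9    % speed of light, m/s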

Image:solData.jpg
