Physics307L:People/Gibson/Notebook/071119

From OpenWetWare



Objective

Using the Multichannel Analyzer (MCA), we attempted to record a series of random events, which Dr. Koch told us would ideally be cosmic events. The distribution that best describes these events is the Poisson distribution. The data was recorded over various windows of time, one such instance being 256 bins of 100 milliseconds each. Varying the dwell time, as well as the number of bins, gives different approximations, but we still expect the data to remain close to the Poisson distribution.

Experiment

  • Setup

Our setup consists of a photomultiplier tube attached to a NaI scintillator, both housed in a structure of lead bricks. The arrangement is wired to a high-voltage power supply (1000 volts) and then run through some sort of bridge module to a data acquisition board on a computer. The computer runs a program called PCAIII, which handles the data acquisition process. The photomultiplier tube and scintillator were connected by coaxial cables to the power supply, which we connected to the bridge, and from the bridge into the computer. There were some extraneous cables coming from the data acquisition board that we had no need to touch. Taking data was as simple as choosing a new memory window on the computer and then choosing our dwell time and number of bins. From there it was a painstaking process of waiting for the run to finish.

Theory

Collecting large amounts of data is a good way to get good precision on a single measured value, but in this experiment we are more concerned with the probability distribution of large data sets.

The Binomial Distribution

When analyzing a randomly distributed situation, the binomial distribution


B(x)=\frac{N!}{x!(N-x)!}p^xq^{N-x}

with a standard deviation of

\sigma=\sqrt{pN(1-p)}

and a mean of

a = pN

is used, with N the number of trials, x the number of counts, p the probability of a count occurring, and q the probability of a count not occurring. In all instances p + q = 1: since something either happens or it doesn't, p and q must sum to 1. In the context of our experiment we have a very large N with a very small p. After several manipulations, the binomial distribution can be approximated by the Poisson distribution.
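The notebook does not include the analysis code, but the large-N, small-p approximation above can be checked numerically. The sketch below (illustrative only; N and p are hypothetical values chosen so that a = pN = 10) compares the binomial and Poisson probabilities directly:

```python
import math

def binomial_pmf(x, N, p):
    """Binomial probability of x counts in N trials, each with probability p."""
    return math.comb(N, x) * p**x * (1 - p)**(N - x)

def poisson_pmf(x, a):
    """Poisson probability of x counts given mean a = p*N."""
    return math.exp(-a) * a**x / math.factorial(x)

# Hypothetical values: large N, small p, so the two distributions nearly coincide.
N, p = 100_000, 0.0001
a = p * N  # a = 10
for x in (5, 10, 15):
    print(f"x={x}: binomial={binomial_pmf(x, N, p):.6f}, poisson={poisson_pmf(x, a):.6f}")
```

For these values the two probabilities agree to about four decimal places, which is the sense in which the Poisson distribution approximates the binomial.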


The Gaussian Distribution

When analyzing a situation in which there is a high probability of occurrence (large p) we use the Gaussian (or normal) distribution. The Gaussian distribution is given by

G(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{\left(x-a\right)^2}{2\sigma^2}},

with a = the mean and σ = the standard deviation. The Gaussian distribution is often used to model probabilities, and it is useful because when the fitted mean and standard deviation are chosen well, they match the mean and standard deviation of the actual data. A Gaussian shape is what we hope to see in our data sets.
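As a quick illustration of the formula above, here is a minimal sketch (using the mean and standard deviation reported later for data set 1; the function name is our own):

```python
import math

def gaussian_pdf(x, a, sigma):
    """Gaussian density with mean a and standard deviation sigma."""
    return math.exp(-(x - a)**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

# Values from data set 1: a = 10.53, sigma = 3.55.
# The peak of the curve sits at x = a with height 1/(sqrt(2*pi)*sigma).
print(gaussian_pdf(10.53, 10.53, 3.55))
print(gaussian_pdf(0.0, 10.53, 3.55))  # far from the mean: much smaller
```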

The Poisson Distribution

The Poisson is a "discrete probability distribution" for the probability of a number of events occurring in a fixed period of time, such that the events occur at a known average rate. When analyzing a random situation in which there is a very low probability of occurrence (large N and small p), we use the Poisson distribution. The standard form is given by:

P(x)=e^{-a}\frac{a^x}{x!}

with a standard deviation of

\sigma=\sqrt{a},

with a = the mean. The Poisson distribution is defined only for non-negative integers and, unlike the Gaussian or binomial distributions, assigns probability only to counts of zero or more. You could think of it as a Gaussian-like distribution restricted to non-negative values, with a mean greater than zero.
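The claim that σ = √a can be verified numerically by summing over the distribution. This sketch (our own check, using the mean later reported for data set 2) computes the mean and standard deviation of a Poisson distribution directly from its PMF:

```python
import math

def poisson_pmf(x, a):
    """Poisson probability of x counts given mean a."""
    return math.exp(-a) * a**x / math.factorial(x)

a = 1.36  # mean from data set 2
xs = range(50)  # the tail beyond x = 50 is negligible for a this small
mean = sum(x * poisson_pmf(x, a) for x in xs)
var = sum((x - mean)**2 * poisson_pmf(x, a) for x in xs)
print(mean, math.sqrt(var))  # both the mean and the variance come out equal to a
```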

Data and Results

We took four sets of data corresponding to different combinations of bin number and dwell time. The dwell time is how long measurements/counts were accumulated in each bin, and the number of bins is essentially how many mini-experiments were done for each data set. Using the MCA we were able to repeat the background-radiation counting experiment many, many times with relative ease. The four data sets break down as follows: for data set 1 we used 512 bins and a dwell time of 800 ms; for data set 2, 512 bins and 100 ms; for data set 3, 256 bins and 10 s; and finally for data set 4 we used 4096 bins with a dwell time of 40 s, which took approximately 45 hours to run.

Figure 1:Plot of Counts vs Bin # for 512 Bins and 800ms time delay

This is data set 1, for which, as mentioned above, we used 512 bins and a dwell time of 800 ms. For this data set we found a mean of a = 10.53 and a standard deviation of σ = 3.55.


Figure 2:Plot of Counts vs Bin # for 512 Bins with 100ms time delay

This is data set 2, for which, as mentioned above, we used 512 bins and a dwell time of 100 ms. For this data set we found a mean of a = 1.36 and a standard deviation of σ = 1.37.

Figure 3:Plot of Counts vs Bin # for 256 Bins with 10s time delay

This is data set 3, for which, as mentioned above, we used 256 bins and a dwell time of 10 s. For this data set we found a mean of a = 135.72 and a standard deviation of σ = 14.25.

Figure 4:Plot of Counts vs Bin # for 4096 Bins with a time delay of 40s

This is data set 4, for which, as mentioned above, we used 4096 bins and a dwell time of 40 s. For this data set we found a mean of a = 543.60 and a standard deviation of σ = 34.63.





  • Plot 1: The data from data set 1 (800 ms), transformed into a probability by counting how many times each count value occurred in the data set and dividing by the number of bins. The data is plotted in red, the Poisson fit in green, and the Gaussian fit in blue.

  • Plot 2: The same transformation applied to data set 2 (100 ms); data in red, Poisson in green, Gaussian in blue.

  • Plot 3: The same transformation applied to data set 3 (10 s); data in red, Poisson in green, Gaussian in blue.

  • Plot 4: The same transformation applied to data set 4 (40 s); data in red, Poisson in green, Gaussian in blue.
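The counts-to-probability transformation described in the plot captions above can be sketched as follows (a minimal illustration with made-up toy data, not the actual bin record):

```python
from collections import Counter

def empirical_pmf(counts_per_bin):
    """Turn the raw counts-per-bin record into an empirical probability
    distribution: P(x) = (# bins that recorded x counts) / (total # bins)."""
    n_bins = len(counts_per_bin)
    freq = Counter(counts_per_bin)
    return {x: k / n_bins for x, k in sorted(freq.items())}

# Hypothetical toy record of counts from 8 bins:
data = [1, 2, 1, 0, 3, 1, 2, 1]
print(empirical_pmf(data))  # {0: 0.125, 1: 0.5, 2: 0.25, 3: 0.125}
```

The resulting probabilities can then be plotted against the Poisson and Gaussian curves evaluated at the same count values.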


Analysis

In this experiment we used a detector that picked up random events energetic enough to penetrate the lead brick casing that houses the detector. The purpose of taking the data was then to compare it to the Poisson distribution, even though our data might show no such trend whatsoever.

In data set 1, the data fits well with the Poisson distribution, from what seemed like a completely random sequence of events; the Gaussian also works well with this set. In data set 2, however, there is some discrepancy between the two fits: the Gaussian appears in this case to fit better than the Poisson, so we took steps to better approximate this fit to the Poisson. In data set 3, the two fits are practically the same, but the data is fairly scattered and does not conform closely to either distribution. In data set 4, evaluating the Poisson directly would not work for us: because the number of counts is so large (around 600), the x! term in the Poisson grows factorially, and both Matt and I were hard pressed to find a computer that could carry out such a large computation. The Gaussian fits well here, though.
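The factorial problem mentioned above for data set 4 can be sidestepped by evaluating the Poisson PMF in log space, since ln P(x) = -a + x·ln(a) - ln(x!) and ln(x!) is available as `lgamma(x + 1)`. A minimal sketch of this workaround (our own suggestion, not what was done in the notebook):

```python
import math

def log_poisson_pmf(x, a):
    """ln P(x) = -a + x*ln(a) - ln(x!), using lgamma to avoid huge factorials."""
    return -a + x * math.log(a) - math.lgamma(x + 1)

def poisson_pmf(x, a):
    """Poisson probability, numerically stable even for large x and a."""
    return math.exp(log_poisson_pmf(x, a))

# Data set 4: a = 543.60, x near 600. Computing a**x / x! directly overflows
# floating point, but the log-space form evaluates without trouble.
print(poisson_pmf(600, 543.60))
```

This keeps every intermediate quantity at a modest magnitude, so the curve for data set 4 could be computed on any machine.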
