# Chapter 12 Monte Carlo Radiation Transport

## 12.1  Transport and Collisions

Consider the passage of uncharged particles through matter. The particles might be neutrons, or photons such as gamma rays. The matter might be solid, liquid, or gas, and contain multiple species with which the particles can interact in different ways. We might be interested in the penetration of the particles into the matter from the source, for example the particle flux at a certain place; and we might want to know the extent to which certain types of interaction with the matter have taken place, for example radiation damage or ionization. This sort of problem lends itself to modelling by Monte Carlo methods.
Since the particles are uncharged, they travel in straight lines at constant speed between collisions with the matter. Actually the technique can be generalized to treat particles that experience significant forces, so that their tracks are curved. However, charged particles generally experience many collisions that perturb their velocity only very slightly. Those small-angle collisions are not so easily or efficiently treated by Monte Carlo techniques, so we simplify the treatment by ignoring particle acceleration between collisions.

Figure 12.1: A random walk is executed by a particle with variable-length steps between individual scatterings or collisions. Eventually it is permanently absorbed. The statistical distribution of the angle of scattering is determined by the details of the collision process.
A particle executes a random walk through the matter, illustrated in Fig. 12.1. It travels a certain distance in a straight line, then collides. After the collision it has a different direction and speed. It takes another step, generally of a different length, in the new direction to the next collision. Eventually the particle has an absorbing collision, or leaves the system, or becomes so degraded (for example in energy) that it need no longer be tracked. The walk ends.

### 12.1.1  Random walk step length

The length of any of the steps between collisions is random. For any collision process, the average number of collisions a particle has per unit length corresponding to that process, which we'll label $j$, is ${n}_{j}{\mathit{\sigma }}_{j}$, where ${\mathit{\sigma }}_{j}$ is the cross-section, and ${n}_{j}$ is the density in the matter of the target species that has collisions of type $j$. For example, if the particles are neutrons and the process is uranium nuclear fission, ${n}_{j}$ would be the density of uranium nuclei. The average total number of collisions of all types per unit path length is therefore
 ${\mathit{\Sigma }}_{t}=\sum _{j}{n}_{j}{\mathit{\sigma }}_{j},$ $\left(12.1\right)$
where the sum is over all possible collision processes. Once again ${\mathit{\Sigma }}_{t}$ is an inverse attenuation-length.
How far does a particle go before having a collision? Well, even for particles with identical position and velocity, it varies statistically in an unpredictable way. However, the probability of having a collision in the small distance $l\to l+dl$, given that the particle started at $l$ (in other words, earlier collisions are excluded), is ${\mathit{\Sigma }}_{t}\mathit{dl}$. This incremental collision probability ${\mathit{\Sigma }}_{t}dl$ is independent of prior events.

Figure 12.2: The probability, $\stackrel{‾}{P}\left(l\right)$, of survival without a collision to position $l$ decreases by a multiplicative factor $1-{\mathit{\Sigma }}_{t}\mathit{dl}$ in an increment $\mathit{dl}$.
Therefore (refer to Fig. 12.2) if the probability that the particle survives at least as far as $l$ without a collision is $\stackrel{‾}{P}\left(l\right)$, then the probability that it survives at least as far as $l+dl$ is the product of $\stackrel{‾}{P}\left(l\right)$ times the probability $1-{\mathit{\Sigma }}_{t}dl$ that it does not have a collision in $dl$:
 $\stackrel{‾}{P}\left(l+dl\right)=\stackrel{‾}{P}\left(l\right)\left(1-{\mathit{\Sigma }}_{t}dl\right)$ $\left(12.2\right)$
Hence
 $\frac{\stackrel{‾}{P}\left(l+dl\right)-\stackrel{‾}{P}\left(l\right)}{dl} \underset{\mathit{dl}\to 0}{\to } \frac{d\stackrel{‾}{P}}{dl}=-{\mathit{\Sigma }}_{t}\stackrel{‾}{P}.$ $\left(12.3\right)$
The solution of this differential equation is
 $\stackrel{‾}{P}\left(l\right)=\mathrm{exp}\left(-{\int }_{0}^{l}{\mathit{\Sigma }}_{t}dl\right),$ $\left(12.4\right)$
where $l$ is measured from the starting position, so $\stackrel{‾}{P}\left(0\right)=1$. Another equivalent way to define $\stackrel{‾}{P}\left(l\right)$ is: the probability that the first collision is not in the interval $0\to l$. The complement $P\left(l\right)=1-\stackrel{‾}{P}\left(l\right)$ is therefore the probability that the first collision is in $0\to l$. In other words, $P\left(l\right)$ is a cumulative probability. It is the integral of the probability distribution $p\left(l\right)$, where $p\left(l\right)dl$ is the probability that the (first) collision takes place in $dl$ at $l$; so:
 $p\left(l\right)=\frac{\mathit{dP}}{dl}=-\frac{d\stackrel{‾}{P}}{dl}={\mathit{\Sigma }}_{t}\stackrel{‾}{P}={\mathit{\Sigma }}_{t}\mathrm{exp}\left(-{\int }_{0}^{l}{\mathit{\Sigma }}_{t}dl\right).$ $\left(12.5\right)$
To decide statistically where the first collision takes place for any specific particle, therefore, we simply draw a random number (uniform variate) and select $l$ from the cumulative distribution $P\left(l\right)$ as explained in section 11.2 and illustrated in Fig. 11.1. If we are dealing with a step in which ${\mathit{\Sigma }}_{t}$ can be taken as uniform, then $P\left(l\right)=1-\mathrm{exp}\left(-{\mathit{\Sigma }}_{t}l\right)$, and the cumulative function can be inverted analytically as $l\left(P\right)=-\mathrm{ln}\left(1-P\right)/{\mathit{\Sigma }}_{t}$.
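This inversion can be sketched in a few lines (a minimal illustration assuming Python with NumPy; the value of ${\mathit{\Sigma }}_{t}$ and the sample size are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_free_path(Sigma_t, size):
    # Invert the cumulative distribution P(l) = 1 - exp(-Sigma_t*l):
    # l = -ln(1 - P)/Sigma_t, with P a uniform variate in [0, 1).
    P = rng.random(size)
    return -np.log(1.0 - P) / Sigma_t

Sigma_t = 2.0                     # assumed uniform inverse attenuation length
l = sample_free_path(Sigma_t, 100000)
print(l.mean())                   # approaches the mean free path 1/Sigma_t = 0.5
```

The sample mean approaches the mean free path $1/{\mathit{\Sigma }}_{t}$, as expected for an exponential distribution of step lengths.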

### 12.1.2  Collision type and parameters

Having decided where the next collision happens, we need to decide what sort of collision it is. The local rate at which each type $j$ of collision is occurring is ${n}_{j}{\mathit{\sigma }}_{j}$ and the sum of all ${n}_{j}{\mathit{\sigma }}_{j}$ is ${\mathit{\Sigma }}_{t}$. Therefore the fraction of collisions that is of type $j$ is ${n}_{j}{\mathit{\sigma }}_{j}/{\mathit{\Sigma }}_{t}$.

Figure 12.3: Deciding the collision type based upon a random number $x$.
If we regard all the possible types of collisions as arrayed in a long list, illustrated in Fig. 12.3, and to each type is assigned a real number ${x}_{j}$ such that ${x}_{j+1}={x}_{j}+{n}_{j}{\mathit{\sigma }}_{j}/{\mathit{\Sigma }}_{t}$, i.e.
 ${x}_{j}=\sum _{i=1}^{j-1}{n}_{i}{\mathit{\sigma }}_{i}/{\mathit{\Sigma }}_{t},$ $\left(12.6\right)$
then a single random draw $x$ determines $j$ by the condition ${x}_{j}\le x<{x}_{j+1}$. This is just the process of drawing from a discrete probability distribution. Our previous examples were the Poisson distribution, and the weighting of discrete particles for flux injection.
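A sketch of this discrete selection (assuming NumPy; the process rates are made-up numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-process collision rates n_j*sigma_j (arbitrary units).
n_sigma = np.array([0.5, 1.5, 1.0])
Sigma_t = n_sigma.sum()
x_edges = np.cumsum(n_sigma) / Sigma_t   # the x_{j+1} boundaries of Fig. 12.3

def choose_type(size):
    # One uniform draw x selects type j with probability n_j*sigma_j/Sigma_t.
    return np.searchsorted(x_edges, rng.random(size), side='right')

j = choose_type(200000)
frac = np.bincount(j, minlength=n_sigma.size) / j.size   # observed fractions
```

`np.searchsorted` performs the interval lookup ${x}_{j}\le x<{x}_{j+1}$ in one vectorized call, so the observed fractions converge to ${n}_{j}{\mathit{\sigma }}_{j}/{\mathit{\Sigma }}_{t}$.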
Once having decided upon the collision process $j$, there are (usually) other parameters of the collision to be decided. For example, if the collision is a scattering event, what is the scattering angle and the corresponding new scattered velocity $\mathbf{v}$, which serves${}^{78}$ to determine the starting condition of the next step? These random parameters are also determined by statistical choice from the appropriate probability distributions, for example the normalized differential cross-section per unit scattering angle.

### 12.1.3  Iteration and New Particles

Once the collision parameters have been decided, unless the event was an absorption, the particle is restarted from the position of collision with the new velocity and the next step is calculated. If the particle is absorbed, or escapes from the modelled region, then instead we start a completely new particle. Where and how it is started depends upon the sort of radiation transport problem we are solving. If it is transport from a localized known source, then that determines the initialization position, and its direction is appropriately chosen. If the matter is emitting, new particles can be generated spread throughout the volume. Spontaneous emission, arising independent of the level of radiation in the medium, is simply a distributed source. It determines the distribution of brand new particles, which are launched once all active particles have been followed till absorbed or escaped. Examples of spontaneous emission include radiation from excited atoms, spontaneous radioactive decays giving gammas, or spontaneous fissions giving neutrons. Often, however, the new particles are "stimulated" by collisions of the transporting particles: stimulated emission arises as a product of a certain type of collision. Fig. 12.4 illustrates this process. The classic nuclear example is induced fission producing additional neutrons.

Figure 12.4: "Stimulated" emission is a source of new particles caused by the collisions of the old ones. It can multiply the particles through succeeding generations, as the new particles (different line styles) give rise themselves to additional emissions.
Any additional particles produced by collisions must be tracked by the same process as used for the original particle. They become new tracks. In a serial code, the secondary particles must be initialized and then set aside during the collision step. Then when the original track ends by absorption or escape, the code must take up the job of tracking the secondary particles. The secondary particles may generate tertiary particles, which will also eventually be tracked. And so on. Parallel code might offload the secondary and tertiary tracks to other processors (or computational threads).
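The stack-based bookkeeping in a serial code can be sketched as follows (a schematic Python illustration; the collision probabilities `p_abs` and `p_sec` and the secondary multiplicity `nu` are invented, not from the text):

```python
import random

random.seed(3)

# Schematic serial tracking loop with a stack of pending tracks. The
# "physics" is illustrative only: each collision absorbs the particle
# with probability p_abs, and with probability p_sec spawns nu
# secondaries, which are set aside until the current track ends.
p_abs, p_sec, nu = 0.4, 0.1, 2

def track_all(n_source):
    stack = [0] * n_source          # one entry per pending track
    collisions = 0
    while stack:                    # take up the next pending track
        stack.pop()
        while True:                 # follow it collision by collision
            collisions += 1
            if random.random() < p_sec:
                stack.extend([0] * nu)   # secondaries tracked later
            if random.random() < p_abs:
                break               # absorbed; this track ends
    return collisions

total = track_all(100)
```

With these numbers each track averages $1/0.4=2.5$ collisions and hence $2.5\times 0.2=0.5<1$ secondaries, so the population is subcritical and the loop terminates; in the supercritical case the stack would grow without bound, the computational counterpart of the physical runaway discussed in the text.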
One might be concerned that we'll never get to the end if particles are generated faster than they are lost. That's true; we won't. But particle generation faster than loss corresponds to a runaway physical situation, for example a prompt supercritical reactor. So the real world will have problems much more troublesome than our computational ones!

## 12.2  Tracking, Tallying and Statistical Uncertainty

The purpose of a Monte Carlo simulation is generally to determine some averaged bulk parameters of the radiation particles or the matter itself. The sorts of parameters include radiation flux as a function of position, the spectrum of particle energies, the rate of occurrence of certain types of collision, or the resulting state of the matter through which the radiation passes. To determine these sorts of quantities requires keeping track of the passage of particles through the relevant regions of space, and of the collisions that occur there. The computational task of keeping track of events and contributions to statistical quantities is often called "tallying".
What is generally done is to divide the region of interest into a manageable number of discrete volumes, in which the tallies of interest are going to be accumulated. Each time an event occurs in one of these volumes, it is added to the tally. Then, provided a reasonably large number of such events has occurred, the average rate of occurrence of the events of interest can be obtained. For example, if we wish to know the fission power distribution in a nuclear reactor, we would tally the number of fission collisions occurring in each volume. Fission reactions release on average a known amount of total energy $E$. So if the number occurring in a particular volume $V$, during a time $T$, is $N$, the fission power density is $NE/\mathit{VT}$.
The number of computational random walks that we can afford to track is usually far less than the number of events that would actually happen in the physical system being modelled. Each computational particle can be considered to represent a large number of actual particles all doing the same thing. The much smaller computational number leads to important statistical fluctuations, uncertainty, or noise, that is a purely computational limitation.
The statistical uncertainty of a Monte Carlo calculation is generally determined by the observation that a sample of $N$ random choices from a distribution (population) of standard deviation $S$ has a sample mean ${\mathit{\mu }}_{N}$ whose standard deviation is equal to the standard error $S/\sqrt{N}$. Each tally event acts like a single random choice. Therefore for $N$ tally events the uncertainty in the quantity to be determined is smaller than its intrinsic uncertainty or statistical range by a factor $1/\sqrt{N}$. Put differently, suppose the average rate of a certain type of statistically determined event is constant, giving on average $N$ events in a time $T$; then the number $n$ of events in any particular time interval of length $T$ is governed by a Poisson probability, eq. (11.15): $p\left(n\right)=\mathrm{exp}\left(-N\right){N}^{n}/n!$. The standard deviation of this Poisson distribution is $\sqrt{N}$. Therefore we obtain a precise estimate of the average rate of events, $N$, from the number actually observed in a particular case, $n$, only if $N$ (and therefore $n$) is a large number.
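The $\sqrt{N}$ scatter is easy to check numerically (a quick sketch assuming NumPy; the mean count and number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Counts of events in 200000 repeated intervals, each with mean N = 400.
N = 400
counts = rng.poisson(N, size=200000)
print(counts.mean(), counts.std())   # close to N = 400 and sqrt(N) = 20
```

The relative fluctuation $\sqrt{N}/N=1/\sqrt{N}$ here is 5%, and shrinks only as the square root of the number of events, which is why small tally volumes give noisy estimates.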
The first conclusion is that we must not choose our discrete tallying volumes too small. The smaller they are, the fewer events will occur in them, and, therefore, the less statistically accurate will be the estimate of the event rate.
When tallying collisions, the first thought one might have is simply to put a check mark against each type of collision for each volume element, whenever that exact type of event occurs in it. The difficulty with this approach is that it will give very poor statistics, because there are going to be too few check marks. To get, say, 1% statistical uncertainty, we would need ${10}^{4}$ check marks for each discrete volume. If the events are spread over, say ${100}^{3}$ volumes, we would need ${10}^{10}$ total collisions of each relevant type. For, say, 100 collision types we are already up to ${10}^{12}$ collisions. The cost is very great. But we can do better than this naïve check-box approach.
One thing we can do with negligible extra effort is to add to the tally for every type of collision, whenever any collision happens. To make this work, we must add a variable amount to the tally, not just 1 (not just a check mark). The amount we should add to each collision-type tally is just the fraction of all collisions that is of that type, namely ${n}_{j}{\mathit{\sigma }}_{j}/{\mathit{\Sigma }}_{t}$. Of course these fractional values should be evaluated with values of ${n}_{j}$ corresponding to the local volume, and at the velocity of the particle being considered (which affects ${\mathit{\sigma }}_{j}$). This process will produce, on average, the same correct collision tally value, but will do so with a total number of contributions bigger by a factor equal to the number of collision types. That substantially improves the statistics.

Figure 12.5: Tallying is done in discrete volumes. One can tally every collision (green), but if the steps are larger than the volumes, it gives better statistics to tally every volume the particle passes through (blue).
A further statistical advantage is obtained by tallying more than just collisions. If we want particle flux properties, as well as the effects on the matter, we need to tally not just collisions, but also the average flux density in all the discrete volumes. That requires us to determine, for every volume and every passage $i$ of a particle through it, the time $\mathit{\Delta }{t}_{i}$ that the particle spends in the volume, and the speed ${v}_{i}$ at which it passes through. See Fig. 12.5. When the simulation has been tracked for a sufficient number of particles, the density of particles in the volume is proportional to the sum $\sum _{i}\mathit{\Delta }{t}_{i}$ and the scalar flux density${}^{79}$ to $\sum _{i}{v}_{i}\mathit{\Delta }{t}_{i}$. If collisional lengths (the length of the random walk step) are substantially larger than the size of the discrete volumes, then there are more contributions to the flux property tallies than there are walk steps, i.e. modelled collisions. The statistical accuracy of the flux density measurement is then better than the collision tallies (even when all collision types are tallied for every collision) by a factor approximately equal to the ratio of collision step length to discrete volume side-length. Therefore it may be worth the effort to keep a tally of the total contribution to collisions of type $j$ that we care about from every track that passes through every volume. In other words, for each volume, to obtain the sum over all passages $i$: $\sum _{i}\mathit{\Delta }{t}_{i}{v}_{i}{n}_{j}{\mathit{\sigma }}_{j}$. The extra cost of this process is the geometric task of determining the length of a line during which it is inside a particular volume. But if that task is necessary anyway, because flux properties are desired, then it is no more work to tally the collisional probabilities in this way.
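For a one-dimensional grid, the geometric task might look like this (an illustrative sketch assuming NumPy; the cell layout and the particular transit are invented, not the text's geometry):

```python
import numpy as np

# Sketch of a track-length tally in a 1-D grid of cell width h. A transit
# from x0 to x1 at speed v adds the chord length (= v * dt_i) to the flux
# tally of every cell crossed, and the transit time dt_i to the density
# tally.
def tally_track(x0, x1, v, h, flux, dens):
    a, b = min(x0, x1), max(x0, x1)
    for k in range(int(a // h), int(b // h) + 1):
        chord = min(b, (k + 1) * h) - max(a, k * h)  # length inside cell k
        if 0 <= k < flux.size and chord > 0:
            flux[k] += chord          # contribution v_i * dt_i
            dens[k] += chord / v      # contribution dt_i

flux, dens = np.zeros(10), np.zeros(10)
tally_track(0.25, 2.6, 2.0, 1.0, flux, dens)   # one transit crossing cells 0, 1, 2
```

A single walk step here contributes to three cells, illustrating how track-length tallies accumulate several contributions per modelled collision.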
**Importance weighting.** Another aspect of statistical accuracy relates to the choice of initial distribution of particles, especially in velocity space. In some transport problems, it might be that the particles that penetrate a large distance away from their source are predominantly the high-energy particles, perhaps those quite far out on the tail of a birth velocity distribution that has the bulk of its particles at low energy. The most straightforward way to represent the transport problem is to launch particles in proportion to their physical birth distribution. Each computational particle then represents the same number of physical particles. But this choice might mean there are very few of the high-energy particles that determine the flux far from the source. If so, then the statistical uncertainty of the far flux is going to be great.

Figure 12.6: If all particles have the same weight, there are many more representing bulk velocities (${v}_{1}$) than tail velocities (${v}_{2}$). Improved statistical representation of the tail is obtained by giving the tail velocities lower weight ${w}_{2}$. If the weights are proportional to $f$ then equal numbers of particles are assigned to equal velocity ranges.
It is possible to compensate for this problem by weighting the particles, illustrated by Fig. 12.6. One can obtain better far-flux statistics by launching more of the high energy particles than would be implied by their probability distribution, but compensating for the statistical excess by an importance weighting $w$ that is inversely proportional to the factor by which the particle number has exceeded the probability distribution. For a weighted particle's subsequent random walk, and for any additional particles emitted in response to collisions of the particle, the contribution made to any flux or reaction rate is then multiplied by $w$. Consequently the total flux calculated anywhere has the correct average contribution from all types of initial particles. But there are more actual contributions (of lower weight) from high-initial-energy particles than there otherwise would be. This has the effect of reducing the statistical uncertainty of flux that happens to depend upon high energy birth particles by transferring some of the computational effort to them from lower energy birth particles.
Importance weighting is immediately accommodated during the tallying process by adding contributions to the various flux tallies that are the same as given previously, except that each contribution is multiplied by the weight ${w}_{i}$ of the particular particle. Thus density becomes $\propto \sum _{i}{w}_{i}\mathit{\Delta }{t}_{i}$ and scalar flux density $\propto \sum _{i}{w}_{i}{v}_{i}\mathit{\Delta }{t}_{i}$, and so on.
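A minimal sketch of launch weighting (assuming NumPy; the birth distribution $f\left(v\right)\propto {e}^{-v}$ and the uniform launch density are invented for this illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(5)

# Importance-weighted launch: the physical birth pdf is f(v) ~ exp(-v)
# on 0 <= v < vmax, but we launch uniformly in v to oversample the tail,
# and compensate with weight w = f(v)/q(v).
vmax = 10.0
M = 200000
v = rng.uniform(0.0, vmax, M)             # launch density q(v) = 1/vmax
f = np.exp(-v) / (1.0 - np.exp(-vmax))    # normalized physical birth pdf
w = f * vmax                              # weight w = f(v)/q(v)

# Weighted tallies reproduce physical averages; e.g. the physical
# fraction of particles born with v > 5 (deep in the tail):
tail = np.sum(w * (v > 5.0)) / np.sum(w)
```

Because each weight is the ratio of the physical to the actual launch density, the weighted tallies have the correct averages, while half of the computational particles sample the tail that direct launching would barely populate.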
The weight of a particle that is launched as a result of stimulated emission is inherited from its parent. A newly launched particle of the $n$th generation inherently acquires the weight ${w}_{n-1}$ of the particular previous generation particle whose collision caused the launch. However, in the process of deciding the $n$th generation's initial velocity an extra launch weighting is accorded to it, corresponding to its launch parameters; for example it might be weighted proportional to the launch speed (or energy) probability distribution $p\left({v}_{n}\right)$. If so, then the weight of the newly launched particle is
 ${w}_{n}={w}_{n-1}p\left({v}_{n}\right).$ $\left(12.7\right)$
This process of weight multiplication between generations causes a spreading of the weights. The ratio of the maximum to minimum weight of each succeeding generation is multiplied by the birth weight-ratio; so the weight range ratio grows geometrically. When there is a very large range between the minimum and the maximum weight, it becomes a waste of time to track particles whose weight is negligibly small. So after the simulation has run for some time, special procedures must be applied to limit the range of weights by removing particles of irrelevantly small weight.

## 12.3  Worked Example: Density and Flux Tallies

In a Monte Carlo calculation of radiation transport from a source emitting at a fixed rate $R$ particles per second, tallies in the surrounding region are kept for every transit of a cell by every particle. The tally for density in a particular cell consists of adding the time interval $\mathit{\Delta }{t}_{i}$ during which a particle is in it, every time the cell is crossed. The tally for flux density adds up the time interval multiplied by speed: ${v}_{i}\mathit{\Delta }{t}_{i}$. After launching and tracking to extinction a large number $N$ of randomly emitted particles, the sums are
 ${S}_{n}=\sum _{i}\mathit{\Delta }{t}_{i},\mathrm{ }{S}_{\mathit{\phi }}=\sum _{i}{v}_{i}\mathit{\Delta }{t}_{i}.$

Deduce quantitative physical formulas for what the particle and flux densities are (not just that they are proportional to these sums), and the uncertainties in them, giving careful rationales.

A total of $N$ particles tracked from the source (not including the other particles arising from collisions that we also had to track) is the number of particles that would be emitted in a time $T=N/R$. Suppose the typical physical duration of a particle "flight" (the time from launch to absorption/loss of the particle and all of its descendants) is $\mathit{\tau }$.
If $T>>\mathit{\tau }$, then it is clear that the calculation is equivalent to treating a physical duration $T$. In that case almost all particle flights are completed within the same physical duration $T$. Only those that start within $\mathit{\tau }$ of the end of $T$ would be physically uncompleted; and only those that started a time less than $\mathit{\tau }$ before the start of $T$ would still be in flight when $T$ starts. The fraction affected is $\sim \mathit{\tau }/T$, which is small. But in fact, even if $T$ is not much longer than $\mathit{\tau }$, the calculation is still equivalent to simulating a duration $T$. In an actual duration $T$ there would then be many flights that are unfinished at the end of $T$, and others that are part way through at its start. But on average the different types of partial flights add up to represent a number of total flights equal to $N$. The fact that in the physical case the partial flights are of different particles, whereas in the Monte Carlo calculation they are of the same particle, does not affect the average tally.
Given, then, that the calculation is effectively simulating a duration $T$, the physical formulas for particle and flux densities in a cell of volume $V$ are
 $n={S}_{n}/\mathit{TV}={S}_{n}R/\mathit{NV},\mathrm{ }\text{and}\mathrm{ }\mathit{\phi }={S}_{\mathit{\phi }}/\mathit{TV}={S}_{\mathit{\phi }}R/\mathit{NV}.$

To obtain the uncertainty, we require additional sample sums. The total number of tallies in the cell of interest we'll call ${S}_{1}=\sum _{i}1$. We may also need the sums of squared tally contributions ${S}_{{n}^{2}}=\sum _{i}\mathit{\Delta }{t}_{i}^{2}$ and ${S}_{{\mathit{\phi }}^{2}}=\sum _{i}\left({v}_{i}\mathit{\Delta }{t}_{i}{\right)}^{2}$. Then finding ${S}_{n}$ and ${S}_{\mathit{\phi }}$ can be considered to be a process of making ${S}_{1}$ random selections of typical transits of the cell from probability distributions whose mean contributions per transit are ${S}_{n}/{S}_{1}$ and ${S}_{\mathit{\phi }}/{S}_{1}$ respectively. Of course the probability distributions aren't actually known; they are indirectly represented by the Monte Carlo calculation. But we don't need to know what the distributions are, because we can invoke the fact that the variance of the mean of a large number (${S}_{1}$) of samples from a population of variance ${\mathit{\sigma }}^{2}$ is just ${\mathit{\sigma }}^{2}/{S}_{1}$. We need an estimate of the population variance of the per-transit contributions to density and flux density. That estimate is provided by the standard expressions for sample variance:

 ${\mathit{\sigma }}_{n}^{2}=\frac{1}{{S}_{1}-1}\left[{S}_{{n}^{2}}-{S}_{n}^{2}/{S}_{1}\right],\mathrm{ }{\mathit{\sigma }}_{\mathit{\phi }}^{2}=\frac{1}{{S}_{1}-1}\left[{S}_{{\mathit{\phi }}^{2}}-{S}_{\mathit{\phi }}^{2}/{S}_{1}\right].$

So when a fixed number ${S}_{1}$ of tallies occurs, the uncertainties in the mean contributions ${S}_{n}/{S}_{1}$ and ${S}_{\mathit{\phi }}/{S}_{1}$ are ${\mathit{\sigma }}_{n}/\sqrt{{S}_{1}}$ and ${\mathit{\sigma }}_{\mathit{\phi }}/\sqrt{{S}_{1}}$; the corresponding uncertainties in the sums themselves are ${S}_{1}$ times larger:

 $\sqrt{{S}_{1}}{\mathit{\sigma }}_{n},\mathrm{ }\text{and}\mathrm{ }\sqrt{{S}_{1}}{\mathit{\sigma }}_{\mathit{\phi }}.$

However, there is also uncertainty arising from the fact that the number of tallies ${S}_{1}$ is not exactly fixed; it varies from case to case in different Monte Carlo trials, each of $N$ flights. Generally ${S}_{1}$ is Poisson distributed, so its variance is equal to its mean: ${\mathit{\sigma }}_{{S}_{1}}^{2}={S}_{1}$. Variation $\mathit{\delta }{S}_{1}$ in ${S}_{1}$ gives rise to a variation approximately $\left({S}_{n}/{S}_{1}\right)\mathit{\delta }{S}_{1}$ in ${S}_{n}$, contributing a variance $\left({S}_{n}/{S}_{1}{\right)}^{2}{\mathit{\sigma }}_{{S}_{1}}^{2}={S}_{n}^{2}/{S}_{1}$. It is reasonable to suppose that the variance arising from contribution size is uncorrelated with the variance arising from the number of samples, so we can simply add them and arrive at the total uncertainties in ${S}_{n}$ and ${S}_{\mathit{\phi }}$ (which we write $\mathit{\delta }{S}_{n}$ and $\mathit{\delta }{S}_{\mathit{\phi }}$):

 $\mathit{\delta }{S}_{n}=\sqrt{{S}_{1}{\mathit{\sigma }}_{n}^{2}+{S}_{n}^{2}/{S}_{1}},\mathrm{ }\mathit{\delta }{S}_{\mathit{\phi }}=\sqrt{{S}_{1}{\mathit{\sigma }}_{\mathit{\phi }}^{2}+{S}_{\mathit{\phi }}^{2}/{S}_{1}}.$

Usually the spread of the per-transit contributions is at most comparable to their mean, ${\mathit{\sigma }}_{n}\lesssim {S}_{n}/{S}_{1}$ and ${\mathit{\sigma }}_{\mathit{\phi }}\lesssim {S}_{\mathit{\phi }}/{S}_{1}$, in which case we can (within a factor of $\sqrt{2}$) ignore the ${\mathit{\sigma }}_{n}$ and ${\mathit{\sigma }}_{\mathit{\phi }}$ contributions, and approximate the fractional uncertainty in both density and flux as $1/\sqrt{{S}_{1}}$. In this approximation, the squared sums ${S}_{{n}^{2}}$ and ${S}_{{\mathit{\phi }}^{2}}$ are unneeded.
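As a numeric illustration of the worked example's tallies (synthetic per-transit data, assuming NumPy; the values of $R$, $N$, $V$, and the transit distributions are made up):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic tallies for one cell: per-transit times dt_i and speeds v_i,
# with assumed source rate R, launched-particle count N, and volume V.
dt = rng.exponential(0.2, size=5000)    # Delta t_i for each transit
v = rng.uniform(1.0, 3.0, size=5000)    # v_i for each transit
R, N, V = 1.0e6, 5.0e4, 1.0

S1 = dt.size
S_n, S_phi = dt.sum(), (v * dt).sum()
n = S_n * R / (N * V)                   # particle density, n = S_n R/(N V)
phi = S_phi * R / (N * V)               # scalar flux density, phi = S_phi R/(N V)

# Standard sample variance of the per-transit contributions, and the
# simple fractional-uncertainty approximation 1/sqrt(S1).
sigma_n = np.sqrt((np.sum(dt**2) - S_n**2 / S1) / (S1 - 1))
frac = 1.0 / np.sqrt(S1)
```

With $5000$ transits the fractional uncertainty estimate is $1/\sqrt{5000}\approx 1.4$%, illustrating how many tallies a single cell needs for percent-level accuracy.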

## Exercise 12. Monte Carlo Statistics

1. Suppose in a Monte Carlo transport problem there are ${N}_{j}$ different types of collision each of which is equally likely. Approximately what is the statistical uncertainty in the estimate of the rate of collisions of a specific type $i$ when a total ${N}_{t}>>{N}_{j}$ of collisions has occurred
(a) if for each collision just one contribution to a specific collision type is tallied,
(b) if for each collision a proportionate ($1/{N}_{j}$) contribution to every collision type tally is made?
If adding a tally contribution for any individual collision type has a computational cost that is a multiple $f$ times the rest of the cost of the simulation,
(c) how large can $f$ be before it becomes counter-productive to add proportions to all collision type tallies for each collision?

2. Consider a model transport problem represented by two particle ranges, low and high energy: $1,2$. Suppose on average there are ${n}_{1}$ and ${n}_{2}$ particles in each range and ${n}_{1}+{n}_{2}=n$ is fixed. The particles in these ranges react with the material background at overall average rates ${r}_{1}$ and ${r}_{2}$.
(a) A Monte Carlo determination of the reaction rate is based on a random draw for each particle determining whether or not it has reacted during a time $T$ (chosen such that ${r}_{1}T,{r}_{2}T<<1$). Estimate the fractional statistical uncertainty in the reaction rate determination after drawing ${n}_{1}$ and ${n}_{2}$ times respectively.
(b) Now consider a determination using the same ${r}_{1,2}$, $T$, and total number of particles, $n$, but distributed differently so that the numbers of particles (and hence number of random draws) in the two ranges are artificially adjusted to ${n}_{1}\text{'}$, ${n}_{2}\text{'}$ (keeping ${n}_{1}\text{'}+{n}_{2}\text{'}=n$), and the reaction rate contributions are appropriately scaled by ${n}_{1}/{n}_{1}\text{'}$ and ${n}_{2}/{n}_{2}\text{'}$. What is now the fractional statistical uncertainty in reaction rate determination? What is the optimum value of ${n}_{1}\text{'}$ (and ${n}_{2}\text{'}$) to minimize the uncertainty?

3. Build a program to sample randomly from an exponential probability distribution $p\left(x\right)=\mathrm{exp}\left(-x\right)$, using a built-in or library uniform random deviate routine. Code the ability to form a series of $M$ independent samples labelled $j$, each sample consisting of $N$ independent random values ${x}_{i}$ from the distribution $p\left(x\right)$. The samples are to be assigned to $K$ bins depending on the sample mean ${\mathit{\mu }}_{j}=\sum _{i=1}^{N}{x}_{i}/N$. Bin $k$ contains all samples for which ${x}_{k-1}\le {\mathit{\mu }}_{j}<{x}_{k}$, where ${x}_{k}=k\mathit{\Delta }$. Together they form a distribution ${n}_{k}$, for $k=1,\dots ,K$ that is the number of samples with ${\mathit{\mu }}_{j}$ in the bin $k$. Find the mean ${\mathit{\mu }}_{n}=\sum _{k=1}^{K}{n}_{k}\left(k-1/2\right)\mathit{\Delta }/M$ and variance ${\mathit{\sigma }}_{n}^{2}=\sum _{k=1}^{K}{n}_{k}\left[\left(k-1/2\right)\mathit{\Delta }-{\mathit{\mu }}_{n}{\right]}^{2}/\left(M-1\right)$ of this distribution ${n}_{k}$, and compare them with the prediction of the Central Limit Theorem, for $N=100$, $K=30$, $\mathit{\Delta }=0.2$, and two cases: $M=100$ and $M=1000$. Submit the following as your solution:
1. Your code in a computer format that is capable of being executed.
2. Plots of ${n}_{k}$ versus ${x}_{k}$, for the two cases.
3. Your numerical comparison of the mean and variance, and comments as to whether it is consistent with theoretical expectations.