3 Random variables and their distributions
Upon completion of this chapter, you should be able to:
- distinguish between discrete, continuous and mixed random variables.
- determine the probability function of random variables defined for a random process.
- determine the distribution function of a random variable from its probability function.
- apply probability functions and distribution functions to compute probabilities for defined events.
- plot the probability function and distribution function of a random variable.
3.1 Random variables
Chapter 2 introduced the language and tools of probability to describe uncertainty. The concept of the sample space was introduced, which describes the possible outcomes of a random process. Often, however, the individual elements of the sample space are not directly of interest, especially if the sample space is large or infinite. Subsets of these sample space elements are usually of greater interest and more convenient to work with.
For example, the sample space for observing the rolls of two dice (Example 2.17) contains \(36\) elements. We may be interested in the sum of the two rolls, rather than which elements in the sample space produce a given sum. That is, we may be more interested in whether we roll a sum of \(5\) than how that sum of \(5\) was obtained, or which elements of the sample space give rise to a sum of \(5\). The various elements of the sample space that are of interest can be collected together, and treated as a single entity.
More generally, we can assign a real number to each element of the sample space, so that outcomes of interest are grouped by the value they are assigned. This leads to the idea of a random variable (‘random variable’ is often abbreviated to ‘rv’ or ‘RV’).
Definition 3.1 (Random variable) A random variable is a function that assigns a real number to each outcome \(s\) in the sample space \(S\). A random variable \(X\) maps \(S\to\mathbb{R}\), and the value assigned to an outcome \(s\in S\) is written \(X(s)\).
Many random variables take only integer values, such as the number of heads in three coin tosses; these are called discrete random variables. Many random variables take values in an interval, such as the height of a randomly chosen person; these are continuous random variables. Mixed random variables combine both types of behaviour in a single variable. In all cases, \(X(s)\) is a real number, but the set of possible values of \(X\) may be a finite set, a countable set (like the integers), or an interval in \(\mathbb{R}\).
Random variables are different from variables used in algebra. In algebra, a variable typically represents an unknown but fixed quantity. In contrast, a random variable represents a quantity whose value depends on the outcome of a random process.
Definition 3.2 (Domain and range space) The domain of a random variable is the sample space \(S\), and the range (or range space, or value set) is the set of real numbers taken by the random variable.
The range space for a random variable \(X\) is often denoted \(\mathcal{R}_X\), where \(\mathcal{R}_X\subseteq\mathbb{R}\). The domain of \(X\) is the set \(S\), and the range space is the set \(\{X(s)\mid s\in S\}\).
Since \(X\) is a function, each \(s\in S\) is assigned to exactly one value \(X(s)\); however, multiple values of \(s\in S\) may be assigned to the same value of \(X(s)\). The variable is random since its value depends upon the outcome of the random process.
A capital letter (such as \(X\) or \(Y\)) is usually used to denote the random variable (its description), while lower-case letters (such as \(x\) or \(y\)) represent the values that the random variable can take. For example, consider rolling two dice and observing the sum of the two rolls. Writing \(X = 3\) means:
- ‘The random variable \(X\)’ (e.g., the description ‘the sum of the roll of two dice’)…
- ‘… takes the value \(3\) in some outcome of the random process’.
Example 3.1 (Rolling a die twice) Consider rolling a fair die twice. The sample space \(S\) contains \(36\) elements shown in Table 2.3: \[ S = \{ (1, 1), (1, 2), (1, 3), \dots (6, 6)\}. \] However, we may be interested in the random variable \(X\), the product of the two numbers rolled. Each element \(s_i\) of \(S\) can be assigned to exactly one real number (in this case, to exactly one integer): \[\begin{align*} s_1 &= (1, 1) \mapsto X = 1;\\ s_2 &= (1, 2) \mapsto X = 2;\\ s_3 &= (1, 3) \mapsto X = 3;\\ \vdots &\qquad \vdots\\ s_{36} &= (6, 6) \mapsto X = 36. \end{align*}\] Each element of the sample space is mapped to exactly one value of \(X\). However, multiple elements of the sample space can be mapped to the same value of \(X\): \[\begin{align*} (3, 4) &\mapsto X = 12; \quad\text{and}\\ (6, 2) &\mapsto X = 12. \end{align*}\] Writing \(X = 12\) means ‘the product of the numbers on the two rolls is \(12\)’.
Once a random variable has been defined, events can be defined in terms of the values of the random variable. Random variables provide a convenient way to express events numerically; rather than listing specific outcomes, events can be described using inequalities or equations involving the random variable.
Example 3.2 (Rolling a die twice: range) In Example 3.1, the range of the random variable \(X\) is \[ \mathcal{R}_X = \{1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 30, 36\}. \] The domain is the sample space, the set of all ordered pairs \[ S = \{ (1, 1), (1, 2), (1, 3), \dots (3, 2), (3, 3), (3, 4),\dots (6, 5), (6, 6)\}. \]
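The range space can also be enumerated by brute force. A short R sketch (an illustration, not part of the original example) computes all \(36\) products and extracts the distinct values:

products <- outer(1:6, 1:6)             ### X(s) for every sample point (r1, r2)
sort( unique( as.vector(products) ) )   ### The range space: 18 distinct values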
Example 3.3 (Rolling a die twice: events) The random variable \(X\) in Example 3.1 can be used to define different events. For example, we could define Event \(A_1\) as \[ A_1 = \{s \in S \mid X(s) > 10\}, \] corresponding to the values \(\{ 12, 15, 16, 18, 20, 24, 25, 30, 36\}\) of \(X\); this is usually written more succinctly as \(X > 10\). Other events can be defined also: \[\begin{align*} A_2 &= \{\text{$4 \le X < 10$}\} = \{ 4, 5, 6, 8, 9 \};\\ A_3 &= \{\text{$X < 0$ }\} = \varnothing;\\ A_4 &= \{\text{$X$ is prime}\} = \{2, 3, 5 \};\\ A_5 &= \{\text{$X$ is evenly divisible by $8$}\} = \{ 8, 16, 24\}. \end{align*}\]
Example 3.4 (Sum of two die rolls) Consider observing the rolls of two dice. The sample space contains \(36\) elements (Example 2.17), and can be denoted using the ordered pairs \((r_1, r_2)\), where \(r_1\) and \(r_2\) are the results of roll \(1\) and \(2\) respectively. The sample space is listed in Table 2.3. For example, we could define \(s_1\) as the sample point \((1, 1)\).
The random variable \(Y\) can be defined on this sample space as: \[ Y(s) = \text{the sum of the two rolls in $s$} = r_1 + r_2. \] This definition assigns a real number to each outcome in the sample space:
| Sample space elements | Value of random variable \(Y\) |
|---|---|
| (1, 1) | 2 |
| (1, 2), (2, 1) | 3 |
| (1, 3), (2, 2), (3, 1) | 4 |
| \(\vdots\) | \(\vdots\) |
| (6, 6) | 12 |
For example, the elements of \(S\) assigned to \(Y = 4\) are \[ (1, 3), (2, 2), (3, 1). \] Notice that many elements of the sample space can be assigned to the same value of the random variable (which is typical for random variables).
The domain of \(Y\) is the sample space \(S\); the range space is \[ \mathcal{R}_Y = \{ Y(s) \mid s\in S\} = \{2, 3, \dots, 12\}. \]
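A similar R sketch (again, not from the text itself) tabulates how many sample points are assigned to each value of \(Y\):

sums <- outer(1:6, 1:6, "+")   ### Y(s) for every sample point
table(sums)                    ### Number of sample points mapped to each value of Y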
Example 3.5 (Tossing a coin till a head appears) Consider the random process ‘tossing a coin until a Head is observed’. The sample space is \[ \Omega = \{(H), (TH), (TTH), (TTTH), \dots \}. \] We could then define the random variable \(N\) as ‘the number of tosses until the first head is observed’. Then each element of the sample space can be assigned to a real number: \[\begin{align*} (H)\quad &\text{is assigned to $N = 1$};\\ (TH)\quad &\text{is assigned to $N = 2$};\\ (TTH)\quad &\text{is assigned to $N = 3$};\\ \vdots\quad &\qquad \vdots \end{align*}\] and so on. Writing \(N = 2\) means ‘the number of tosses to observe the first head is two’.
Example 3.6 (Drawing two cards) Consider drawing two cards from a standard, well-shuffled pack of cards, and observing the colour of each card (B: Black; R: Red). The sample space \(\Omega\) is: \[ \Omega = \{ (BB), (BR), (RR), (RB)\}. \] Many random variables could be defined on this sample space; for example: \[\begin{align*} T&: \text{The number of black cards drawn};\\ M&: \text{The number of red cards on the first draw};\\ D&: \text{The number of black cards drawn,}\\ &\quad \text{minus the number of red cards drawn}. \end{align*}\] All of these assign a real number to each element of \(\Omega\). The random variable \(D\), for instance, is defined as:
| Sample space elements | Value of \(D\) |
|---|---|
| \((BR)\) and \((RB)\) | \(D = 0\) |
| \((BB)\) | \(D = 2\) |
| \((RR)\) | \(D = -2\) |
The domain is \(\Omega\), and the range space is \[ \mathcal{R}_D = \{ D(s) \mid s\in \Omega\} = \{-2, 0, 2\}. \]
3.2 Discrete, continuous and mixed random variables
As observed earlier, random variables can be discrete, continuous, or a mixture of both.
3.2.1 Discrete random variables
Definition 3.3 (Discrete random variable) A discrete random variable takes only a finite, or countably infinite, number of distinct values.
Example 3.7 In Example 3.1, exactly \(18\) values of the random variable \(X\) are possible: the values between \(1\) and \(36\) listed in Example 3.2.
In Example 3.4, exactly \(11\) values of the random variable \(Y\) are possible: \(2, 3, \dots 12\).
In Example 3.5, the random variable \(N\) takes a countably infinite number of possible values: \(1, 2, 3, \dots\)
In Example 3.6, the random variable \(D\) can take one of three possible values: \(-2\), \(0\) or \(2\).
The definition refers to the values of the random variable, not to the sample space (i.e., not to the inputs of the function).
Examples of discrete random variables include:
- The number of children aged under \(18\) living in a household.
- The number of errors per month.
- The number of incidents of lung cancer at a hospital.
- The number of cyclones per season.
- The number of wins by a football team.
- The number of kangaroos observed in a five-hectare transect.
3.2.2 Continuous random variables
For a continuous sample space, the random variable is usually the identity function \(Y(s) = s\). For example, in Example 1.19 the sample space that describes how far a cricket ball can be thrown is already defined on the positive reals. Hence, we can define the random variable as \(T(s) = s\), where \(s\) is the distance specified in the sample space.
Definition 3.4 (Continuous random variable) A continuous random variable can take on any value within any given interval of a given domain (at least in theory).
The value of a continuous random variable can never, even in principle, be measured exactly, so in practice measurements are rounded.
Example 3.8 (Heights) Height \(H\) is often recorded to the nearest centimetre (e.g., \(179\,\text{cm}\)) for convenience and practicality. Better measuring instruments may be able to record height to one or more decimal places of a centimetre. The range space is \(\mathcal{R}_H = (0, \infty)\); that is, the positive real numbers \(\mathbb{R}_{+}\).
Even though your height may not change, the notion of a random variable means that height varies from one realisation of the random process to another; that is, from one person to the next.
Examples of continuous random variables include:
- The volume of waste water treated at a sewage plant per day.
- The weight of hearts in normal rats.
- The lengths of the wings of butterflies.
- The yield of barley from a large paddock.
- The amount of rainfall recorded each year.
- The time taken to perform a psychological test.
- The percentage cloud cover.
3.2.3 Mixed random variables
Some random variables are not completely discrete or continuous; these are called mixed random variables. The most commonly occurring mixed random variable is continuous over the positive real numbers, with a discrete component at zero.
Definition 3.5 (Mixed random variable) A mixed random variable has subsets where the random variable is discrete and subsets where the random variable is continuous.
Example 3.9 (Vehicle wait times) Consider the time spent by vehicles waiting at a set of traffic lights before proceeding through the intersection.
If the light is green on arrival, the wait time is exactly zero (i.e., discrete): the vehicle can drive straight through the intersection. A wait time of zero seconds can be measured exactly. However, if the light is red on arrival, the vehicle needs to wait a continuous amount of time before it turns green.
The time spent waiting is a mixed random variable.
Examples of mixed random variables include:
- The amount of rainfall that falls in a month (exactly zero, or a continuous amount).
- The weight of fruit produced per tree (exactly zero if no fruit is produced, or a continuous amount).
- The mass of fish-catch per trawl (exactly zero if no fish are caught, or a continuous amount).
- The lifetime of computer components (exactly zero if the component fails immediately, or a continuous time).
3.3 Probability functions
The previous section introduced random variables: real values assigned to outcomes in the sample space. Often, many elements of the sample space were assigned to the same value of the random variable. Therefore, probabilities can be assigned to various values of the random variable, to develop a probability model for the random variable.
A model describes theoretical patterns over infinitely many trials. On any single roll of a die, a particular face (say, a one) may or may not appear, but theoretically (over infinitely many rolls) we expect each face to appear \(1/6\) of the time. A probability model describes the probability, in theory, that each value of the random variable appears on any one realisation. This probability model is called the probability function.
Example 3.10 (Tossing coin outcomes) Consider tossing a coin twice and observing the outcome of the two tosses. Since a random variable is a real-valued function, simply observing the outcome as \((H, T)\), for example, does not define a random variable.
We could define the random variable of interest, say \(H\), as the number of heads on the two tosses of the coin. The sample space for the experiment is \[ S = \{ (TT), (TH), (HT), (HH)\}. \] The connection between the sample space and \(H\) is shown in the table below. In this case, the range space of \(H\) is \(\mathcal{R}_H = \{0, 1, 2\}\). The probability of observing each value of \(H\) can be computed using classical probability:
| Element of \(S\) | Function \(H(s)\) | Value of \(H\) | \(\Pr(s_i)\) |
|---|---|---|---|
| \(s_1 = TT\) | \(H(s_1)\): Number of heads in \(s_1\) | 0 | \(1/4\) |
| \(s_2 = TH\) | \(H(s_2)\): Number of heads in \(s_2\) | 1 | \(1/4\) |
| \(s_3 = HT\) | \(H(s_3)\): Number of heads in \(s_3\) | 1 | \(1/4\) |
| \(s_4 = HH\) | \(H(s_4)\): Number of heads in \(s_4\) | 2 | \(1/4\) |
The probability function could be defined as \[\begin{align*} \Pr(H = 0):&\quad 1/4\\ \Pr(H = 1):&\quad 1/2\\ \Pr(H = 2):&\quad 1/4\\ \Pr(H = h):&\quad 0\quad \text{for all other values of $h$}. \end{align*}\]
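This probability function can be checked by simulation; a minimal R sketch (assuming a fair coin, simulated with sample()):

set.seed(1)
numReps <- 100000
### Each replicate: toss a coin twice and count the heads
toss1 <- sample( c("H", "T"), size = numReps, replace = TRUE)
toss2 <- sample( c("H", "T"), size = numReps, replace = TRUE)
H <- (toss1 == "H") + (toss2 == "H")   ### Number of heads in each pair of tosses
table(H) / numReps                     ### Should be close to 0.25, 0.5, 0.25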
Probability functions are written and interpreted differently, depending on whether the random variable is a discrete, continuous or mixed random variable.
3.3.1 Discrete random variables: probability mass functions
For a discrete random variable, the probability function indicates how probabilities are assigned to each value of the random variable; it is often called the probability mass function (or PMF).
Definition 3.6 (Probability function) Let the range space of the discrete random variable \(X\) be \(\mathcal{R}_X\). With each \(x\in \mathcal{R}_X\), associate a number \[ p_X(x) = \Pr(X = x). \] The function \(p_X(x)\) is called the probability mass function of \(X\).
The following properties of the probability function are implied by the definition and the rules of probability.
- \(p_X(t) \ge 0\) for all values of \(t\); that is, probabilities are never negative.
- \(\displaystyle \sum_{t \in \mathcal{R}_X} p_X(t) = 1\) where \(\mathcal{R}_X\) is the range space of \(X\); that is, the total probability over all possible values of \(X\) is one.
- \(p_X(t) = 0\) if \(t \notin \mathcal{R}_X\).
- For an event \(A\) defined on a sample space \(S\), the probability of event \(A\) is \[ \Pr(A) = \sum_{t\in A} p_X(t). \]
Definition 3.7 (Probability distribution) If \(\mathcal{R}_X =\{ x_1, x_2, \dots \}\), the set of pairs \[ \{ \big(x_i, p_X(x_i)\big) \mid i = 1, 2,\dots\} \] is called the probability distribution of the discrete random variable \(X\).
The probability distribution of a discrete random variable \(X\) can be represented by: listing each outcome with its probability; giving a formula; using a table; or using a graph which displays the probabilities \(p_X(x)\) corresponding to each \(x\in \mathcal{R}_X\).
Sometimes the probability function is denoted \(p(x)\) rather than \(p_X(x)\). Using the subscript avoids confusion in situations where many random variables are considered at once. The subscript is used throughout this book.
The probability distribution of a random variable is a description of the range of the variable and the associated assignment of probabilities.
Example 3.11 (Larger of two numbers) Five balls numbered \(1\), \(2\), \(3\), \(4\) and \(5\) are in an urn. Two balls are selected at random. Consider finding the probability distribution of the larger of the two numbers.
The sample space is: \[ S =\{ (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)\}, \] where all \(10\) elements are equally likely. Then, let \(X\) be the random variable ‘the larger of the two numbers chosen’, so that \(\mathcal{R}_X = \{2, 3, 4, 5\}\). Listing the probabilities: \[\begin{alignat*}{3} \Pr(X = 2) &= \Pr\big((1, 2)\big) &\quad &= 1/10;\\ \Pr(X = 3) &= \Pr\big((1, 3) \text{ or } (2, 3)\big) &\quad &= 2/10;\\ \Pr(X = 4) &= \Pr\big((1, 4) \text{ or } (2, 4) \text{ or } (3, 4)\big) &\quad &= 3/10;\\ \Pr(X = 5) &= \Pr\big((1, 5) \text{ or } (2, 5) \text{ or } (3, 5)\text{ or } (4, 5)\big) &\quad &= 4/10;\\ \Pr(\text{other values of } X) & \text{ } &\quad &= 0. \end{alignat*}\] This is the probability distribution of \(X\), which could also be given as a formula: \[ \Pr(X = x) = \begin{cases} (x - 1)/10 & \text{for $x = 2, 3, 4, 5$}\\ 0 & \text{elsewhere}. \end{cases} \] The probability function could also be given in a table (Table 3.1) or graph (Fig. 3.1), where the probability is assumed to be zero for all values not shown.
| \(x\) | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| Probability | 0.1 | 0.2 | 0.3 | 0.4 |

FIGURE 3.1: The probability function for the larger of two numbers drawn.
Show R code
plot( x = 1:6,          ### Values of X to plot (1 and 6 have zero probability)
      y = c(0, 0.1, 0.2, 0.3, 0.4, 0),
      xlim = c(0.5, 6.6), ylim = c(0, 0.45),
      type = "h",       ### type = "h": vertical lines
      las = 1, lty = 3, ### lty = 3: dotted lines
      axes = FALSE,     ### Suppress drawing labelled axes
      col = "grey",
      main = expression( paste("The probability distribution of ", italic(X)) ),
      xlab = expression( paste("Values of the random variable ", italic(X)) ),
      ylab = expression( paste("Probability function ", italic(p)[italic(X)](italic(x)) ) )
)
points( x = 1:6,        ### Add the points on top of the vertical lines
        y = c(0, 0.1, 0.2, 0.3, 0.4, 0),
        pch = 19)
axis(side = 1, at = 1:6)   ### Add axis on bottom (side = 1)
axis(side = 2,             ### Add axis at left (side = 2)
     at = seq(0, 0.4, by = 0.1),
     las = 1)
box()                      ### Surround plot with a box
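The distribution in Example 3.11 can also be checked by simulation; a sketch (not part of the original example):

### Draw two balls (without replacement) many times,
### recording the larger of the two numbers each time
set.seed(1)
larger <- replicate(100000, max( sample(1:5, size = 2) ) )
table(larger) / 100000     ### Should be close to 0.1, 0.2, 0.3, 0.4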
Example 3.12 (Tossing heads) Suppose a fair coin is tossed twice, and the uppermost face is noted. Then the sample space is \[ S = \{ (HH), (HT), (TH), (TT) \}. \] Let \(H\) be the number of heads observed. \(H\) is a (discrete) random variable, and the range of \(H\) is \(\mathcal{R}_H = \{0, 1, 2\}\), representing the values that \(H\) can take.
The probability function maps each of these values to the associated probability. Using techniques from Sect. 2.4, the probabilities are: \[\begin{align*} \Pr(H = 0) &= \Pr(\text{no heads}) = 0.25;\\ \Pr(H = 1) &= \Pr(\text{one head}) = 0.5;\\ \Pr(H = 2) &= \Pr(\text{two heads}) = 0.25. \end{align*}\] As a function, the probability function is \[ p_H(h) = \Pr(H = h) = \begin{cases} 0.25 & \text{if $h = 0$};\\ 0.5 & \text{if $h = 1$};\\ 0.25 & \text{if $h = 2$};\\ 0 & \text{otherwise}. \end{cases} \] More succinctly, \[ p_H(h) = \Pr(H = h) = \begin{cases} (0.5)0.5^{|h - 1|} & \text{for $h = 0$, $1$ or $2$};\\ 0 & \text{otherwise}. \end{cases} \] This information can also be presented as a table (Table 3.2) or graph (Fig. 3.2). Note that \(\sum_{t \in \{0, 1, 2\}} p_H(t) = 1\) and \(p_H(h)\ge0\) for all \(h\), as required of a probability function.
| \(h\) | 0 | 1 | 2 |
|---|---|---|---|
| \(\Pr(H = h)\) | 0.25 | 0.5 | 0.25 |

FIGURE 3.2: The probability function for \(H\), the number of heads on two tosses of a coin.
3.3.2 Continuous random variables: probability density functions
In the discrete case, probability can be distributed over distinct points (possibly a countably infinite number) where each point has non-zero mass. However, in the continuous case, mass cannot be thought of as an attribute of a point but rather of a region surrounding a point. The ideas from Sect. 2.7 are relevant here.
Definition 3.8 (Probability density function) The probability density function (PDF) of the continuous random variable \(X\) is a function \(f_X(\cdot)\) such that \[ \Pr(a < X \le b) = \int_a^b f_X(x)\,dx \] for any interval \((a, b]\) (where \(a < b\)) on the real line.
We are usually only concerned with intervals \((a, b)\subseteq \mathcal{R}_X\), but it makes sense to think of the PDF as defined for all \(x\), insisting that \(f_X(x) = 0\) for \(x\notin \mathcal{R}_X\). This definition states that areas under the graph of the PDF represent probabilities, and leads to the following properties of the probability density function.
- \(f_X(x) \ge 0\) for all \(-\infty < x < \infty\): the density function is never negative.
- \(\displaystyle \int_{-\infty}^\infty f_X(x)\,dx = 1\): The total probability is one.
- For an event \(A\) defined on a sample space \(S\), the probability of event \(A\) is \[ \Pr(A) = \int_{A} f_X(x)\, dx. \]
- Since any single exact value occurs with probability zero: \[ \Pr(a < X \le b) = \Pr(a < X < b) = \Pr(a \le X < b) = \Pr(a \le X \le b) = \int_a^b f_X(x)\,dx. \]
Properties 1 and 2 are sufficient to prove that a function is a PDF. That is, to show that some function \(g(x)\) is a PDF, showing that \(g(x) \ge 0\) for all \(-\infty < x < \infty\) and that \(\int_{-\infty}^\infty g(x)\,dx = 1\) is sufficient.
Property 4 results from noting that if \(X\) is a continuous random variable, \(\Pr(X = a) = 0\) for any and every value \(a\) for the same reason that a point has mass zero.
The value of a PDF at some point \(x\) does not represent a probability, but rather a probability density. Hence, the PDF can take any non-negative value at a specific value of \(x\), including values greater than one.
This last statement is easy to show. See Fig. 3.3, which shows some probability density function \(f(x)\). The probability that \(X\) lies within a small interval of width \(\Delta x\) centred on \(x^*\) is \[ \Pr\left(x^* - \frac{\Delta x}{2} < X < x^* + \frac{\Delta x}{2} \right). \] This is shown by the shaded area, which can be approximated by the dotted rectangle shown. That is, \[ \Pr\left(x^* - \frac{\Delta x}{2} < X < x^* + \frac{\Delta x}{2} \right) \approx \Delta x \times f(x^*). \] Rearranging and letting \(\Delta x \to 0\), \[ f(x^*) = \lim_{\Delta x \to 0} \frac{1}{\Delta x}\Pr\left(x^* - \frac{\Delta x}{2} < X < x^* + \frac{\Delta x}{2} \right). \] That is, the density function \(f(x^*)\) is a probability per unit length near \(x^*\), not the probability \(\Pr(X = x^*)\) (which is zero).

FIGURE 3.3: Finding probabilities for a continuous random variable \(X\).
Example 3.13 (Probability density function) Consider the continuous random variable \(W\) with the PDF \[ f_W(w) = \begin{cases} 2w & \text{for $0 < w < 1$};\\ 0 & \text{elsewhere}. \end{cases} \] The probability \(\Pr(0 < W < 0.5)\) can be computed in two ways. One is to use calculus: \[ \Pr(0 < W < 0.5) = \int_0^{0.5} 2w\,dw = w^2\Big|_0^{0.5} = 0.25. \] Alternately, the probability can be computed geometrically from the graph of the PDF (Fig. 3.4). The region corresponding to \(\Pr(0 < W < 0.5)\) is triangular; integration simply finds the area of this triangle. Thus, the area can be found using the area of a triangle directly: the length of the base of the triangle, times the height of the triangle, divided by two: \[ 0.5 \times 1 /2 = 0.25, \] and the answer is the same as before.
Note that \(f_W(w)\) is greater than one for some values of \(w\). Since \(f_W(w)\) does not represent probabilities at each point (as \(W\) is continuous), this is not a problem. However, \(\int_{\mathbb{R}} f_W(w) \, dw = 1\) as required of a probability density.

FIGURE 3.4: The probability function for \(W\). The shaded area represents \(0 < W < 0.5\).
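The integration in Example 3.13 can be checked numerically with R's integrate(); a sketch:

fW <- function(w) ifelse(w > 0 & w < 1, 2 * w, 0)   ### The PDF of W, for all real w
integrate(fW, lower = 0, upper = 1)$value     ### Total probability: 1
integrate(fW, lower = 0, upper = 0.5)$value   ### Pr(0 < W < 0.5): 0.25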
3.3.3 Mixed random variables
Some random variables are not entirely continuous nor entirely discrete, but have components of both. These random variables are called mixed random variables.
Example 3.14 (Mixed random variable) In a factory producing diodes, a proportion \(p\) of the diodes fails immediately. The distribution of the lifetime (in hundreds of days), say \(Y\), of the diodes is given by a discrete component at \(y = 0\) for which \(\Pr(Y = 0) = p\), and a continuous part for \(y > 0\) described by \[ f_Y(y) = (1 - p) \exp(-y) \quad \text{for $y > 0$.} \] Here, \(f_Y(y)\) is not itself a PDF, as it does not integrate to one; however, the total probability is \[ p + \int_0^\infty (1 - p)\exp(-y) \, dy = p + (1 - p) = 1 \] as required.
Consider a diode for which \(p = 0.4\). The probability distribution is displayed in Fig. 3.5 where a solid dot is included to show the discrete part. Representing the probability distribution in this mixed case is difficult, because of the need to combine a probability distribution and a PDF. These difficulties are overcome by using the distribution function (Sect. 3.4).

FIGURE 3.5: The probability function for the diodes example.
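The total probability for the diodes example can likewise be verified numerically (a sketch, assuming \(p = 0.4\)):

### Total probability: the discrete mass at zero, plus the continuous part
p <- 0.4
contPart <- integrate( function(y) (1 - p) * exp(-y),
                       lower = 0, upper = Inf)$value
p + contPart   ### Should be 1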
3.4 Distribution functions
Another way of describing random variables is using a distribution function (DF), also called a cumulative distribution function (CDF). The DF gives the probability that a random variable \(X\) is less than or equal to a given value of \(x\).
Definition 3.9 (Distribution function) For any random variable \(X\) the distribution function, \(F_X(x)\), is given by \[ F_X(x) = \Pr(X \leq x) \quad \text{for $x\in\mathbb{R}$}. \]
The distribution function applies to discrete, continuous or mixed random variables. Importantly, the definition includes a less than or equal to sign; and the distribution function is defined for all real numbers.
If \(X\) is a discrete random variable with range space \(\mathcal{R}_X\), then the DF is \[\begin{align*} F_X(x) &= \Pr(X \leq x)\\ &= \sum_{x_i \leq x} \Pr(X = x_i)\text{ for }x_i\in \mathcal{R}_X,\text{ and }-\infty < x < \infty. \end{align*}\] If \(X\) is a continuous random variable, the DF is \[\begin{align*} F_X(x) &= \Pr(X \leq x)\\ &= \int^x_{-\infty} f_X(t)\,dt \text{ for } -\infty < x < \infty. \end{align*}\]
Example 3.15 (Tossing heads) Consider the simple example in Example 3.12, where a coin is tossed twice. The probability function for \(H\) is given in that example in numerous forms. To determine the DF, first note that when \(h < 0\), the accumulated probability is zero; hence, \(F_H(h) = 0\) when \(h < 0\). At \(h = 0\), the probability of \(0.25\) is accumulated all at once, and no more probability is accumulated until \(h = 1\). Thus, \(F_H(h) = 0.25\) for \(0 \le h < 1\). Continuing, the DF is \[ F_H(h) = \begin{cases} 0 & \text{for $h < 0$};\\ 0.25 & \text{for $0\le h < 1$};\\ 0.75 & \text{for $1\le h < 2$};\\ 1 & \text{for $h\ge 2$}. \end{cases} \] The DF can be displayed graphically, being careful to clarify what happens at \(H = 0\), \(H = 1\) and \(H = 2\) using open or filled circles (Fig. 3.6).

FIGURE 3.6: A graphical representation of the distribution function for the tossing-heads example. The filled circles contain the given point, while the empty circles omit the given point.
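In R, a step function like this DF can be represented with stepfun(); a sketch (the default right = FALSE gives the right-continuity required of a DF):

FH <- stepfun( x = c(0, 1, 2),
               y = c(0, 0.25, 0.75, 1) )    ### The DF of H as a step function
FH( c(-1, 0, 0.5, 1, 1.9, 2, 3) )           ### Returns 0, 0.25, 0.25, 0.75, 0.75, 1, 1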
Example 3.16 (Distribution function) Consider a continuous random variable \(V\) with PDF \[ f_V(v) = \begin{cases} v/2 & \text{for $0 < v < 2$};\\ 0 & \text{otherwise.} \end{cases} \] The DF is zero whenever \(v\le 0\). For \(0 < v < 2\), \[ F_V(v) = \int_0^v t/2\,dt = v^2/4. \] Whenever \(v\ge 2\), the DF is one. So the DF is \[ F_V(v) = \begin{cases} 0 & \text{if $v\le 0$};\\ v^2/4 & \text{if $0 < v < 2$};\\ 1 & \text{if $v\ge 2$.} \end{cases} \] A picture of the distribution function is shown in Fig. 3.7.

FIGURE 3.7: The distribution function for \(V\).
For the integral, do not write \[ \int_0^v v/2\,dv. \] It makes no sense to have the variable of integration as a limit on the integral and also in the function to be integrated. Either write the integral as given in the example, or write \(\int_0^t v/2\,dv = t^2/4\) and then change the variable from \(t\) to \(v\).
Example 3.17 (Mixed random variable) Example 3.14 discussed the mixed random variable \(Y\), the lifetimes of diodes (in hundreds of days). A proportion \(p = 0.4\) of the diodes fails immediately. For other diodes, the distribution function of \(Y\) is described by \[\begin{align*} F_Y(y) &= \Pr(Y\le y)\\ &= p + \int_0^y f_Y(t)\, dt \\ &= p + (1 - p) \int_0^y\exp(-t)\, dt \\ &= p + (1 - p) [1 - \exp(-y)]\\ &= 0.4 + 0.6 [1 - \exp(-y)] \end{align*}\] for \(y > 0\). The distribution function is displayed in Fig. 3.8, where a solid dot is included to show the discrete part.
The complete distribution function is: \[ F_Y(y) = \begin{cases} 0 & \text{if $y < 0$}\\ 0.4 & \text{if $y = 0$}\\ 0.4 + 0.6(1 - \exp(-y)) & \text{if $y > 0$}. \end{cases} \] Notice that the total probability is \[ 0.4 + 0.6\int_0^\infty \exp(-y) \, dy = 1 \] as required.

FIGURE 3.8: The distribution function for the diodes example.
Properties of the DF are stated below.
- \(0\leq F_X(x)\leq 1\) because \(F_X(x)\) is a probability.
- \(F_X(x)\) is a non-decreasing function of \(x\). That is, if \(x_1 < x_2\) then \(F_X(x_1) \leq F_X(x_2)\).
- \(\displaystyle{\lim_{x\to \infty} F_X(x)} = 1\) and \(\displaystyle{\lim_{x\to -\infty} F_X(x)} = 0\).
- \(\Pr(a < X \leq b) = F_X(b) - F_X(a)\).
- If \(X\) is discrete, then \(F_X(x)\) is a step-function. If \(X\) is continuous, then \(F_X\) will be a continuous function for all \(x\).
We have seen how to find \(F_X(x)\) given \(\Pr(X = x)\), or to find \(F_X(x)\) given \(f_X(x)\). But can we proceed in the other direction too? That is, given \(F_X(x)\), how do we find \(\Pr(X = x)\) for \(X\) discrete, or \(f_X(x)\) for \(X\) continuous?
As seen from the graph of the distribution in Example 3.15, the values of \(x\) where a ‘jump’ in \(F_X(x)\) occurs are the points in the range space, and the probability associated with a particular point in \(\mathcal{R}_X\) is the ‘height’ of the jump there. That is, \[\begin{equation} p_X(x_j) = \Pr(X = x_j) = F_X(x_j) - F_X(x_{j - 1}). \tag{3.1} \end{equation}\]
Example 3.18 (Mass function from distribution function) Consider the DF for a discrete random variable \(X\): \[ F_X(x) = \begin{cases} 0 & \text{for $x < 10$;}\\ 0.1 & \text{for $10 \le x < 11$;}\\ 0.4 & \text{for $11 \le x < 15$;}\\ 0.9 & \text{for $15 \le x < 17$;}\\ 1 & \text{for $x \ge 17$.} \end{cases} \] To find the PMF, use Eq. (3.1): \[ p_X(x) = \begin{cases} 0.1 & \text{for $x = 10$}\\ 0.3 & \text{for $x = 11$}\\ 0.5 & \text{for $x = 15$}\\ 0.1 & \text{for $x = 17$}\\ 0 & \text{elsewhere}. \end{cases} \] See Fig. 3.9.

FIGURE 3.9: Left: the distribution function of \(X\). Right: the probability mass function of \(X\).
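Equation (3.1) amounts to differencing the DF at its jump points; an R sketch for Example 3.18 using diff():

xJumps <- c(10, 11, 15, 17)        ### Points where F_X jumps
Fvals  <- c(0, 0.1, 0.4, 0.9, 1)   ### F_X before the first jump, then after each jump
diff(Fvals)                        ### The PMF at 10, 11, 15, 17: 0.1, 0.3, 0.5, 0.1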
When \(X\) is continuous, the Fundamental Theorem of Calculus gives \[\begin{equation} f_X(x) = \frac{dF_X(x)}{dx} \quad \text{where the derivative exists.} \tag{3.2} \end{equation}\]
Example 3.19 (Density function from distribution function) Consider the DF for a continuous random variable \(X\): \[ F_X(x) = \begin{cases} 0 & \text{for $x < 0 $;}\\ x(2 - x) & \text{for $0 \le x \le 1$;}\\ 1 & \text{for $x > 1$.} \end{cases} \] To find the PDF, use Eq. (3.2): \[\begin{align*} f_X(x) &= \begin{cases} \frac{d}{dx} 0 & \text{for $x < 0 $;}\\ \frac{d}{dx} x(2 - x) & \text{for $0 \le x \le 1$;}\\ \frac{d}{dx} 1 & \text{for $x > 1$} \end{cases} \\ &= \begin{cases} 0 & \text{for $x < 0 $;}\\ 2(1 - x) & \text{for $0 \le x \le 1$;}\\ 0 & \text{for $x > 1$} \end{cases} \end{align*}\] which is usually written as \(f_X(x) = 2(1 - x)\) for \(0 < x < 1\), and \(0\) elsewhere.
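R can differentiate the middle piece symbolically with D(), which may help check such calculations (a sketch):

### Differentiate the middle piece of the DF with respect to x
D( expression( x * (2 - x) ), "x")   ### Returns (2 - x) - x, which simplifies to 2(1 - x)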
3.5 Quantile functions
For a random variable \(X\), the distribution function \(F_X(x)\) computes the probability that \(X \le x\) for some given value of \(x\). The value of \(F_X(x)\) is a probability, so is a value between \(0\) and \(1\).
The quantile function \(Q_X(p)\) is the inverse of the distribution function; it takes a value \(p\) between \(0\) and \(1\), and returns the smallest value \(x\) such that \(F_X(x) \ge p\). More formally, \[\begin{equation} Q_X(p) = \inf\{x \in \mathbb{R} \mid F_X(x) \ge p\}\quad\text{for $0 < p < 1$} \tag{3.3} \end{equation}\] where ‘inf’ is the ‘infimum’, the greatest lower bound of the set. In practice, this is the leftmost value \(x\) at which the distribution function \(F_X(x)\) reaches or exceeds the target probability \(p\).
At the endpoints (i.e., at \(p = 0\) and \(p = 1\)), some ambiguity exists.
Some authors set \(Q_X(0) = \min\{x \mid F_X(x) > 0\}\), the smallest support point with positive mass. Other authors define \(Q_X(0) = \lim_{p\downarrow 0} Q_X(p)\), which may give \(Q_X(0) = -\infty\).
Similarly, some authors leave \(Q_X(1)\) undefined, while others set \(Q_X(1) = \sup\{x \mid F_X(x) < 1\}\), the largest support point (possibly \(+\infty\)). Other authors define \(Q_X(1) = \lim_{p\uparrow 1} Q_X(p)\), which again may be \(+\infty\).
When \(X\) is a continuous random variable and \(F_X(x)\) is a strictly increasing function, the quantile function is the inverse of the distribution function, and we can write \[ Q_X(p) = F_X^{-1}(p). \]
For built-in distributions (see Chaps. 7 and 8), R allows the values p = 0 and p = 1.
If the distribution is unbounded, R returns \(-\infty\) or \(+\infty\) as appropriate.
If the distribution is bounded, R returns the actual minimum and maximum of the support.
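For example (a sketch using two built-in distributions; see Chaps. 7 and 8):

qnorm( c(0, 1) )                     ### Unbounded support: returns -Inf and Inf
qunif( c(0, 1), min = 0, max = 1)    ### Bounded support: returns 0 and 1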
Example 3.20 (Quantile function: continuous rv) Consider the continuous random variable \(X\) with the probability density function \[ f_X(x) = \begin{cases} x/2 & \text{for $0 \le x \le 2$}\\ 0 & \text{elsewhere}. \end{cases} \] The distribution function is \[ F_X(x) = \begin{cases} 0 & \text{for $x < 0$}\\ x^2/4 & \text{for $0 \le x \le 2$}\\ 1 & \text{for $x > 2$}. \end{cases} \] For \(0 < p < 1\), the quantile function \(Q_X(p)\) is given by solving \(F_X(x) = p\) (i.e., solving \(p = x^2 /4\)), which gives \(Q_X(p) = 2\sqrt{p}\). This means the quantile function is \[ Q_X(p) = 2\sqrt{p} \quad\text{for $0 < p < 1$.} \] See Fig. 3.10.

FIGURE 3.10: Left: the distribution function of \(X\). Right: the quantile function for \(X\).
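A quick R check of this quantile function (a sketch; QX and FX are names introduced here for illustration):

QX <- function(p) 2 * sqrt(p)   ### The quantile function of X
FX <- function(x) x^2 / 4       ### The DF of X, for 0 <= x <= 2
p <- c(0.25, 0.5, 0.75)
QX(p)                           ### The quantiles: 1, 1.414..., 1.732...
FX( QX(p) )                     ### Recovers p, since F_X is strictly increasing here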
Example 3.21 (Quantile function: continuous rv) Consider the probability density function for a continuous random variable \(X\) such that \[ f_X(x) = \begin{cases} 2\exp(-2x) & \text{for $x > 0$}\\ 0 & \text{elsewhere} \end{cases} \] so that the distribution function is: \[ F_X(x) = \begin{cases} 0 & \text{for $x < 0$}\\ 1 - \exp(-2x) & \text{for $x \ge 0 $}. \end{cases} \] For \(0 < p < 1\), the quantile function \(Q_X(p)\) is found by solving \(F_X(x) = p\); i.e., the solution to \(p = 1 - \exp(-2x)\). This yields \[ Q_X(p) = F_X^{-1}(p) = -\frac{1}{2}\log(1 - p) \] for \(0 < p < 1\). For example, three quarters of the probability occurs before \[ Q_X(3/4) = -\frac{1}{2}\log\big(1 - (3/4)\big) = \frac{\log 4}{2} = 0.6931\dots \] In other words, \(75\)% of the probability occurs for \(x \le 0.6931\dots\) See Fig. 3.11.

FIGURE 3.11: Left: the distribution function of \(X\). Right: the quantile function for \(X\).
When \(X\) is a discrete random variable, and so the distribution function is discontinuous, great care is needed when applying Eq. (3.3).
Example 3.22 (Quantile function: discrete rv) Consider the discrete random variable in Example 3.18, for which \[ F_X(x) = \begin{cases} 0 & \text{for $x < 10$;}\\ 0.1 & \text{for $10 \le x < 11$;}\\ 0.4 & \text{for $11 \le x < 15$;}\\ 0.9 & \text{for $15 \le x < 17$;}\\ 1 & \text{for $x \ge 17$,} \end{cases} \] as shown in Fig. 3.12 (left panel).
The quantile function is found by locating, for each \(0 < p \le 1\), the smallest \(x\) with \(F_X(x) \ge p\). This gives: \[ Q_X(p) = \begin{cases} 10 & \text{for $0 < p \le 0.1$;}\\ 11 & \text{for $0.1 < p \le 0.4$;}\\ 15 & \text{for $0.4 < p \le 0.9$;}\\ 17 & \text{for $0.9 < p \le 1$,} \end{cases} \] as shown in Fig. 3.12 (right panel). For example:
- \(Q_X(0.02) = 10\).
- \(Q_X(0.3) = 11\).
- \(Q_X(0.4) = 11\), since \(F_X(11) = 0.4\) and no smaller value of \(x\) has a value of \(F_X(x)\) greater than or equal to \(0.4\).
- \(Q_X(1) = 17\).

FIGURE 3.12: Left: the distribution function of \(X\). Right: the quantile function of \(X\).
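The same logic can be coded directly; a sketch of a quantile function for this discrete DF (QX is a name introduced here for illustration):

### Quantile function: the smallest x with F_X(x) >= p
QX <- function(p) {
  xVals <- c(10, 11, 15, 17)
  Fvals <- c(0.1, 0.4, 0.9, 1)
  sapply( p, function(pp) xVals[ min( which(Fvals >= pp) ) ] )
}
QX( c(0.02, 0.3, 0.4, 1) )   ### Returns 10, 11, 11, 17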
3.6 Generating random numbers
Many computers have facilities for generating pseudo-random numbers between \(0\) and \(1\), though random numbers from other distributions are often more useful in computer modelling and simulation (for example, Sects. 7.11 and 8.10). For instance, generating heights of people at random would require the heights to have an average value of (say) \(180\,\text{cm}\), with most heights between about \(175\,\text{cm}\) and \(185\,\text{cm}\), but a few smaller than \(175\,\text{cm}\) and a few larger than \(185\,\text{cm}\).
Random numbers from other distributions can be generated from these uniform random numbers using the quantile function:
- Generate a random number in \((0, 1)\), say \(p^*\).
- Use the quantile function to evaluate \(Q(p^*)\); this becomes a random number from the distribution specified by the quantile function.
In R, random numbers between \(0\) and \(1\) are found using the function runif():
runif(10) # '10' here means to generate 10 random numbers between 0 and 1
#> [1] 0.2516103 0.5757754 0.0141046 0.8898474
#> [5] 0.5825408 0.4986948 0.8753130 0.3948804
#> [9] 0.3206488 0.2908654

Example 3.23 (Random numbers from a continuous distribution) Suppose we want to generate random numbers from the distribution \[ f_X(x) = 2\exp(-2x) \quad\text{for $x > 0$}, \] as in Example 3.21. The quantile function for this distribution is given in Example 3.21, so random numbers from the distribution can be generated using R:
# Create an R function for the quantile function
quantileFn <- function(p){ -log(1 - p) / 2}
# Create 2000 random numbers between 0 and 1
rnos <- runif(2000)
# Random numbers from the specified distribution
rSpecified <- quantileFn(rnos)
# A histogram of these random numbers should have a shape like the density fn
hist(rSpecified)

The histogram (Fig. 3.13) does indeed have a similar shape to the density function.

FIGURE 3.13: Left: a density function. Right: a histogram of random numbers from the distribution.
3.7 Statistical computing
Many statistical distributions are generated from existing distributions, and many of these relationships are explored later. These relationships can also be shown using computer simulation.
As an example, the chi-squared distribution is related to the normal distribution, and is discussed later (Sect. 12.5.2). If a random variable \(Z\) has a standard normal distribution \(Z\sim N(0, 1)\), then \(Z^2\) has a chi-squared distribution (with one degree of freedom). This can be shown using simulation (Fig. 3.14).
### A standard normal variate: 1000 random values
Z <- rnorm(n = 1000, mean = 0, sd = 1)
### Set up for two plots, side-by-side
par( mfrow = c(1, 2) )
### Plot two histograms: Z and Z^2.
### truehist() is part of the MASS package, which must be loaded first
library(MASS)
truehist(Z,
         las = 1,
         xlab = expression( italic(Z) ),
         ylab = "Density")
truehist(Z^2,
         las = 1,
         xlab = expression( italic(Z)^2 ),
         ylab = "Density")
### Now plot the theoretical chi-squared distribution (with one df)
# Use these values of X:
x_Plot <- seq(0, 10, length = 100)
# Add lines to the histogram of the squared normal values;
# dchisq() is the density function for a chi-squared distribution
lines( dchisq(x_Plot, df = 1) ~ x_Plot,
       type = "l",
       lwd = 3)
FIGURE 3.14: Left: histogram of \(1000\) simulated normal variates. Right: histogram of the squared normal variates. The solid line on the right panel shows the theoretical density of a chi-squared distribution with \(1\) degree of freedom.
Furthermore, the sum of the squares of \(k\) independent standard normal variables has a chi-squared distribution with \(k\) degrees of freedom (Fig. 3.15):
set.seed(7701)
### Standard normal variates: 5000 values, in 1000 rows of 5 columns each
Z <- matrix( data = rnorm(n = 5000, mean = 0, sd = 1),
             nrow = 1000,
             ncol = 5)
### Add the squared values across rows (so each value is the
### sum of *five* squared standard normal variates)
Z_SumsSq <- rowSums(Z^2)
### Set up for two plots, side-by-side
par( mfrow = c(1, 2) )
### Plot two histograms: Z and the row sums of Z^2.
### truehist() is part of the MASS package, which must be loaded first
library(MASS)
truehist(Z,
         las = 1,
         xlab = expression( italic(Z) ),
         ylab = "Density")
truehist(Z_SumsSq,
         las = 1,
         ylim = c(0, 0.16),
         xlab = expression( sum(italic(Z^2)) ~ over ~ five ~ values ),
         ylab = "Density")
### Now plot the theoretical chi-squared distribution (with *five* df)
# Use these values of X:
x_Plot <- seq(0, 20, length = 100)
lines( dchisq(x_Plot, df = 5) ~ x_Plot,
       type = "l",
       lwd = 3)
FIGURE 3.15: Left: histogram of \(5000\) simulated normal variates. Right: histogram of the sums of five squared normal variates. The solid line on the right panel shows the theoretical density of a chi-squared distribution with \(5\) degrees of freedom.
3.8 Exercises
Selected answers appear in Sect. E.3.
Exercise 3.1 For the following random processes, define the random variable of interest and determine the range space \(\mathcal{R}_X\). Determine whether the random variable is discrete, continuous or mixed. Justify your answer.
- The number of heads in two throws of a fair coin.
- The number of throws of a fair coin until a head is observed.
- The time taken to download a webpage.
- The time it takes to walk to work.
Exercise 3.2 For the following random processes, define the random variable of interest and determine the range space \(\mathcal{R}_X\). Determine whether the random variable is discrete, continuous or mixed. Justify your answer.
- The number of cars that pass through an intersection during a day.
- The number of X-rays taken at a hospital per day.
- The barometric pressure in a given city at \(5\)pm each afternoon.
- The length of a phone call connection.
Exercise 3.3 The random variable \(X\) has the probability function \[ p_X(x) = \begin{cases} 0.3 & \text{for $x = 10$};\\ 0.2 & \text{for $x = 15$};\\ 0.5 & \text{for $x = 20$};\\ 0 & \text{elsewhere}. \end{cases} \]
- Show that \(p_X(x)\) is a valid probability distribution.
- Plot the probability function of \(X\).
- Find and plot the distribution function for \(X\).
- Compute \(\Pr(X > 13)\).
- Compute \(\Pr(X \le 10 \mid X\le 15)\).
Exercise 3.4 The random variable \(X\) has the probability mass function \[ p_X(x) = \begin{cases} 2^{-x} & \text{for $x = 1, 2, 3, \dots$};\\ 0 & \text{elsewhere}. \end{cases} \]
- Show that \(p_X(x)\) is a valid probability distribution.
- Plot the probability function of \(X\).
- Find and plot the distribution function for \(X\).
- Compute \(\Pr(X > 13)\).
- Compute \(\Pr(X \le 10 \mid X\le 15)\).
Exercise 3.5 Consider the continuous random variable \(Z\) with probability function \[ f_Z(z) = \begin{cases} \alpha (3 - z) & \text{for $-1 < z < 2$};\\ 0 & \text{elsewhere}. \end{cases} \]
- Find the value of \(\alpha\).
- Plot the probability function of \(Z\).
- Find and plot the distribution function of \(Z\).
- Find \(\Pr(Z < 0)\).
Exercise 3.6 Consider the continuous random variable \(X\) with probability density function \[ f_X(x) = \begin{cases} 3(4 - x^2)/16 & \text{for $0 < x < 2$};\\ 0 & \text{elsewhere}. \end{cases} \]
- Plot the probability function of \(X\).
- Find and plot the distribution function of \(X\).
- Find \(\Pr(X < 1)\).
Exercise 3.7 Consider the mixed random variable \(Y\) with probability function \[ f_Y(y) = \begin{cases} p & \text{for $y = 0$};\\ 1 - y & \text{for $0 < y < 1$}.\\ 0 & \text{elsewhere}. \end{cases} \]
- Find the value of \(p\).
- Carefully plot the probability function of \(Y\).
- Find and carefully plot the distribution function of \(Y\).
- Find \(\Pr(Y < 0.5)\).
Exercise 3.8 Consider the mixed random variable \(X\) with probability function \[ f_X(x) = \begin{cases} c & \text{for $x = 0$};\\ x/2 & \text{for $0 < x < 1$};\\ (1 - x)/4 & \text{for $1 < x < 3$};\\ 0 & \text{elsewhere}. \end{cases} \]
- Find the value of \(c\).
- Carefully plot the probability function of \(X\).
- Find and plot the distribution function of \(X\).
- Find \(\Pr(X > 1)\).
Exercise 3.9 Consider the random variable \(Y\) with the probability mass function \[ f_Y(y) = \begin{cases} y^\alpha - 2 & \text{for $y = 1, 2$}\\ 0 & \text{elsewhere.} \end{cases} \] Find the value(s) of \(\alpha\) so that \(f_Y(y)\) is a valid probability function.
Exercise 3.10 Consider the random variable \(X\) with the probability mass function \[ f_X(x) = \begin{cases} x + 1 & \text{for $x = b, b + 1, b + 2$}\\ 0 & \text{elsewhere.} \end{cases} \] Find the value(s) of \(b\) so that \(f_X(x)\) is a valid probability function.
Exercise 3.11 Consider the random variable \(T\) with the probability density function shown in Fig. 3.16 (left panel). Find the value(s) of \(a\) so that this represents a valid probability function.
Exercise 3.12 Consider the random variable \(W\) with the probability density function shown in Fig. 3.16 (right panel). Find the value(s) of \(a\) so that this represents a valid probability function.

FIGURE 3.16: Two probability density functions
Exercise 3.13 Consider the discrete random variable \(W\) with DF \[ F_W(w) = \begin{cases} 0 & \text{for $w < 10$};\\ 0.3 & \text{for $10 \le w < 20$};\\ 0.7 & \text{for $20 \le w < 30$};\\ 0.9 & \text{for $30 \le w < 40$};\\ 1 & \text{for $w \ge 40$}. \end{cases} \]
- Find and plot the PMF of \(W\).
- Compute \(\Pr(W < 25)\) using the PMF, and using the DF.
Exercise 3.14 Consider the continuous random variable \(Y\) with DF \[ F_Y(y) = \begin{cases} 0 & \text{for $y \le 0$};\\ y(4 - y^2)/3 & \text{for $0 < y < 1$};\\ 1 & \text{for $y \ge 1$}. \end{cases} \]
- Find and plot the PDF of \(Y\).
- Compute \(\Pr(Y < 0.5)\) using the PDF and using the DF.
Exercise 3.15 A study of the service life of concrete in various conditions (Liu and Shi 2012) used the following distribution for the chlorine threshold of concrete: \[ f(x; a, b, c) = \begin{cases} \displaystyle \frac{2(x - a)}{(b - a)(c - a)} & \text{for $a\le x\le c$};\\[6pt] \displaystyle \frac{2(b - x)}{(b - a)(b - c)} & \text{for $c\le x\le b$};\\[3pt] 0 & \text{otherwise}, \end{cases} \] where \(c\) is the mode, and \(a\) and \(b\) are the lower and upper limits.
- Show that the distribution is a valid PDF. (This may be easier geometrically than using integration.)
- Previous studies, cited in the article, suggest the values \(a = 0.6\) and \(c = 5\), and that the distribution is symmetric. Write down the PDF for this case.
- Determine the DF using the values above.
- Plot the PDF and DF.
- Determine \(\Pr(X > 3)\).
- Determine \(\Pr(X > 3 \mid X > 1)\).
- Explain the difference in meaning for the last two answers.
Exercise 3.16 In a study modelling waiting times at a hospital (Khadem et al. 2008), patients are classified into one of three categories:
- Red: Critically ill or injured patients.
- Yellow: Moderately ill or injured patients.
- Green: Minimally injured or uninjured patients.
For ‘Yellow’ patients, the service time of doctors are modelled using a triangular distribution, with a minimum at \(3.5\,\text{mins}\), a maximum at \(30.5\,\text{mins}\) and a mode at \(5\,\text{mins}\).
- Plot the PDF and DF.
- If \(S\) is the service time, compute \(\Pr(S > 20 \mid S > 10)\).
Exercise 3.17 In meteorological studies, rainfall is often graphed using rainfall exceedance charts (Dunn 2001): the plotted function is \(1 - F_X(x)\) (also called the survivor function in other contexts), where \(X\) represents the rainfall.
- Explain what is measured by \(1 - F_X(x)\) in this context, and explain why it may be more useful than just \(F_X(x)\) when studying rainfall.
- The data in Table 3.3 shows the total monthly rainfall at Charleville, Queensland, from 1942 until 2022 for the months of June and December. (Data supplied by the Bureau of Meteorology.) Plot \(1 - F_X(x)\) for this rainfall data for each month on the same graph.
- Suppose a producer requires at least \(50\,\text{mm}\) of rain in June to plant a crop. Find the probability of this occurring from the plot above.
- Determine the median monthly rainfall in Charleville in June and December.
- Decide if the mean or the median is an appropriate measure of central tendency, giving reasons for your answer.
- Compare the probabilities of obtaining \(30\,\text{mm}\), \(50\,\text{mm}\), \(100\,\text{mm}\) and \(150\,\text{mm}\) of rain in the months of June and December.
- Write a one- or two-paragraph explanation of how to use diagrams like that presented here. The explanation should be aimed at producers (that is, experts in farming, but not in statistics) and should be able to demonstrate the usefulness of such graphs in decision making. Your explanation should be clear and without jargon. Use diagrams if necessary.
| Rainfall (mm) | June | December |
|---|---|---|
| Zero rainfall | 3 | 0 |
| Over 0 to under 20 | 41 | 17 |
| 20 to under 40 | 19 | 17 |
| 40 to under 60 | 12 | 21 |
| 60 to under 80 | 2 | 9 |
| 80 to under 100 | 2 | 6 |
| 100 to under 120 | 0 | 3 |
| 120 to under 140 | 2 | 1 |
| 140 to under 160 | 0 | 4 |
| 160 to under 180 | 0 | 0 |
| 180 to under 200 | 0 | 1 |
| 200 to under 220 | 0 | 1 |
Exercise 3.18 Five people, including you and a friend, line up at random. The random variable \(X\) denotes the number of people between yourself and your friend. Determine the probability function of \(X\).
Exercise 3.19 Let \(Y\) be a continuous random variable with PDF \[ f_Y(y) = a(1 - y)^2,\quad \text{for $0\le y\le 1$}. \]
- Show that \(a = 3\).
- Find \(\Pr\left(| Y - \frac{1}{2}| > \frac{1}{4} \right)\).
- Use R to draw a graph of \(f_Y(y)\) and show the area described above.
Exercise 3.20 Suppose the random variable \(Y\) has the PDF \[ f_Y(y) = \begin{cases} \frac{2}{3}(y - 1) & \text{for $1 < y < 2$};\\ \frac{2}{3} & \text{for $2 < y < 3$}. \end{cases} \]
- Plot the PDF.
- Determine the DF.
- Confirm that the DF is a valid DF.
Exercise 3.21 Quartic polynomials are sometimes used to model mortality (e.g., Fitzpatrick and Moore (2018)). Suppose the model \[ f_Y(y) = k y^2 (100 - y)^2\quad \text{for $0 \le y \le 100$} \] (for some value of \(k\)) is used for describing mortality where \(Y\) denotes age at death in years. Using this model, is a person more likely to die at age between \(60\) and \(70\), or live past \(70\)?
Exercise 3.22 To detect disease in a population through a blood test, usually every individual is tested. If the disease is uncommon, however, an alternative method is often more efficient.
In the alternative method (called a pooled test), blood from \(n\) individuals is combined, and one test is conducted. If the test returns a negative result, then none of the \(n\) people have the disease; if the test returns a positive result, all \(n\) individuals are then tested individually to identify which individual(s) have the disease.
Suppose a disease occurs in an unknown proportion \(p\) of people. Let \(X\) be the number of tests to be performed for a group of \(n\) individuals using the pooled test approach.
- Determine the sample space for \(X\).
- Deduce the probability distribution of the random variable \(X\).
Exercise 3.23 In a deck of cards, consider an Ace as high, and all picture cards (i.e., Jacks, Queens, Kings, Aces) as worth ten points. All other cards are worth their face value (i.e., an 8 is worth eight points.)
Deduce the probability distribution of the absolute difference between the value of two cards drawn at random from a well-shuffled deck of \(52\) cards.
Exercise 3.24 The leading digits of natural data that span many orders of magnitude (e.g., lengths of rivers; populations of countries) often follow Benford’s law. Numbers are said to satisfy Benford’s law if the leading digit, say \(d\) (for \(d\in \{1, \dots, 9\}\)) has the probability mass function \[ p_D(d) = \log_{10}\left( \frac{d + 1}{d} \right). \]
- Plot the probability mass function of \(D\).
- Compute and plot the distribution function of \(D\).
Exercise 3.25 Consider the random variable \(X\) with PDF \[ f(x) = \begin{cases} \displaystyle \frac{k}{\sqrt{x(1 - x)}} & \text{for $0 < x < 1$};\\[6pt] 0 & \text{elsewhere}, \end{cases} \] for some constant \(k\).
- Plot the density function.
- Determine, and then plot, the distribution function.
- Compute \(\Pr(X > 0.25)\).
Exercise 3.26 Consider the distribution function \[ F_X(x) = \begin{cases} 0 & \text{for $x < 0$};\\ \exp(-1/x) & \text{for $x \ge 0$}. \end{cases} \]
- Show that this is a valid DF.
- Compute the PDF.
- Plot the DF and PDF.
Exercise 3.27 Suppose a dealer deals four cards from a standard \(52\)-card pack. Define \(Y\) as the number of suits in the four cards. Deduce the distribution of \(Y\).
Exercise 3.28 Consider the random variable \(T\) with the probability function \[ f_T(t) = \begin{cases} a & \text{for $t = 0$}; \\ \frac{2}{3} - (t - 1)^2 & \text{for $0 < t < 2$}. \end{cases} \]
- Determine the value of \(a\).
- Plot the probability function.
- Determine the distribution function.
- Plot the distribution function.
Exercise 3.29 Consider rolling a fair, six-sided die.
- Find the probability mass function for the number of rolls required to roll a total of at least four.
- Find the probability mass function for the number of rolls required to roll a total of exactly four.
Exercise 3.30 Consider a random variable \(Z\) with a standard normal distribution \(N(0, 1)\), and a random variable \(V\) with a chi-squared distribution on \(\nu\) degrees of freedom.
Simulate the distribution of
\[
T = \frac{Z}{\sqrt{V/\nu}},
\]
then show this is a \(t\)-distribution with \(\nu\) degrees of freedom.
Hint: use the R functions for the \(t\)-distribution (e.g., dt()) and the chi-squared distribution (e.g., dchisq()).
Exercise 3.31 Consider two random variables \(X\) and \(Y\) that have uniform distributions, with probability density functions \[\begin{align*} f_X(x) &= 1\quad\text{for $0< x < 1$; and}\\ f_Y(y) &= 1\quad\text{for $0< y < 1$.} \end{align*}\]
- Using simulation, show that the distribution of \(X + Y\) has a triangular distribution.
- What are the mean and variance of the resulting distribution, based on the simulation?
- Explain how your answers change if the distribution of \(Y\) changes to \[ f_Y(y) = \frac{1}{2}\quad\text{for $-1 < y < 1$}. \]
Exercise 3.32 Consider a random variable \(Z\) with a standard normal distribution \(N(0, 1)\), and a random variable \(V_\nu\) with a chi-squared distribution on \(\nu\) degrees of freedom.
Simulate the distribution of
\[
T = \frac{Z}{\sqrt{V_9/9}},
\]
then show this is a \(t\)-distribution with \(\nu = 9\) degrees of freedom.
Hint: use the R functions for the \(t\)-distribution (e.g., dt()) and the chi-squared distribution (e.g., dchisq()).