Random Experiments


We perform many activities in our daily lives, and some of them give the same result every time they are repeated. In mathematics, for example, we can say directly that the sum of the interior angles of any quadrilateral is 360 degrees, even if we do not know the type of quadrilateral or the measure of each interior angle. We also perform activities whose result may or may not be the same when they are repeated under identical conditions. For example, when we toss a coin, it may turn up heads or tails, but we cannot be sure in advance which result will be obtained. Experiments of this kind are called random experiments.

Random Experiment in Probability

An activity that produces a result or an outcome is called an experiment. There is an element of uncertainty as to which outcome will occur when we perform the activity. In general, an experiment may have several possible outcomes. When an experiment satisfies the following two conditions, it is called a random experiment.

(i) It has more than one possible outcome.

(ii) It is not possible to predict the outcome in advance.

Let us look at the terms involved in random experiments that are used frequently in probability theory. These terms also help describe whether a given experiment is random or not.

Outcome: A possible result of a random experiment is called its outcome.

Example: In an experiment of throwing a die, the outcomes are 1, 2, 3, 4, 5, and 6.

Sample space: The set of all possible outcomes of a random experiment is called the sample space connected with that experiment and is denoted by the symbol S.

Example: In an experiment of throwing a die, the sample space is S = {1, 2, 3, 4, 5, 6}.

Sample point: Each element of the sample space, that is, each outcome of the random experiment, is called a sample point.


What is a Random Experiment?

Based on the definition of a random experiment, we can identify whether a given experiment is random or not. Go through the following examples to understand what is, and what is not, a random experiment.

Is picking a card from a well-shuffled deck of cards a random experiment?

We know that a deck contains 52 cards, and each of these cards has an equal chance to be selected.

(i) The experiment can be repeated, since we can shuffle the deck before each draw, and it has 52 possible outcomes, which is more than one.

(ii) Any of the 52 cards may be picked, so the outcome cannot be predicted in advance.

Thus, the given activity satisfies the two conditions of being a random experiment.

Hence, this is a random experiment.

Consider the experiment of dividing 36 by 4 using a calculator. Check whether it is a random experiment or not.

(i) This activity can be repeated under identical conditions though it has only one possible result.

(ii) The outcome is always 9, which means we can predict the outcome each time we repeat the operation.

Hence, the given activity is not a random experiment.

Examples of Random Experiments

Below are some examples of random experiments together with the number of possible outcomes in each case.

  • Tossing a coin three times: number of possible outcomes = 8
  • Rolling two dice simultaneously: number of possible outcomes = 36
  • Selecting a number at random from 1 to 100: number of possible outcomes = 100

Similarly, we can write several examples which can be treated as random experiments.
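To make these counts concrete, here is a minimal sketch in plain Python (assuming the standard coin-tossing, dice-rolling, and number-picking experiments listed above; the code is illustrative and not part of the original text) that enumerates each sample space and confirms its size.

```python
from itertools import product

# Tossing a coin three times: 2 * 2 * 2 = 8 outcomes
three_coins = list(product("HT", repeat=3))
print(len(three_coins))          # 8

# Rolling two dice: 6 * 6 = 36 outcomes
two_dice = list(product(range(1, 7), repeat=2))
print(len(two_dice))             # 36

# Picking a number from 1 to 100: 100 outcomes
numbers = list(range(1, 101))
print(len(numbers))              # 100
```

The same pattern works for any experiment built from a fixed number of stages with finitely many results each.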

Playing Cards

Probability theory is the systematic study of the outcomes of random experiments. As defined above, such experiments include rolling a die, tossing coins, and so on. Playing cards is another standard experiment: here the deck of cards is considered the sample space. For example, picking a black card from a well-shuffled deck is an event of this experiment, where drawing a card from the shuffled deck is treated as the random experiment.

A deck contains 52 cards: 26 are black and 26 are red.

These playing cards are classified into 4 suits, namely Spades, Hearts, Diamonds, and Clubs. Each of these four suits contains 13 cards.

We can also classify the playing cards into 3 categories as:

Aces: A deck contains 4 aces, one of each suit.

Face cards:  Kings, Queens, and Jacks in all four suits, also known as court cards.

Number cards:  All cards from 2 to 10 in any suit are called the number cards. 

  • Spades and Clubs are black cards, whereas Hearts and Diamonds are red.
  • 13 cards of each suit = 1 Ace + 3 face cards + 9 number cards
  • The probability of drawing any card will always lie between 0 and 1.
  • The number of spades, hearts, diamonds, and clubs is the same in every pack of 52 playing cards.

An example problem on picking a card from a deck is given above.
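As a rough sketch of how such card probabilities can be computed by counting (assuming only the standard 52-card deck structure described above; the variable and function names are my own illustrative choices):

```python
from fractions import Fraction
from itertools import product

suits = ["Spades", "Hearts", "Diamonds", "Clubs"]
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
deck = [(rank, suit) for suit, rank in product(suits, ranks)]   # 52 equally likely cards

def prob(event):
    """Favourable outcomes divided by total outcomes (all cards equally likely)."""
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

print(prob(lambda c: c[1] in ("Spades", "Clubs")))   # black card -> 1/2
print(prob(lambda c: c[0] in ("J", "Q", "K")))       # face card  -> 3/13
print(prob(lambda c: c[0] == "A"))                   # an ace     -> 1/13
```

Every probability returned this way lies between 0 and 1, consistent with the points above.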


Random Experiment (Engineering Probability)

A random experiment is a process or action that leads to one or more outcomes, where the outcome is uncertain and can vary each time the experiment is conducted. This concept is crucial in probability models, as it serves as the foundation for understanding how probabilities are assigned to different events based on the possible outcomes generated by the experiment. Each random experiment can be repeated multiple times, and analyzing the outcomes helps in making informed predictions about future occurrences.


5 Must Know Facts For Your Next Test

  • A random experiment must have at least two possible outcomes that are clearly defined and can be measured.
  • The outcomes of a random experiment are typically unpredictable and cannot be determined in advance.
  • Common examples of random experiments include flipping a coin, rolling a die, or drawing a card from a deck.
  • The concept of randomness in these experiments forms the basis for calculating probabilities and understanding statistical behavior.
  • Each repetition of a random experiment may yield different results, which can lead to trends or patterns over time when analyzed.

Review Questions

  • Understanding that a random experiment produces uncertain outcomes allows us to define the sample space as all the possible results that can occur from that experiment. For instance, if we consider rolling a six-sided die, the sample space consists of six outcomes: {1, 2, 3, 4, 5, 6}. By analyzing the random experiment, we can better grasp how probabilities relate to each outcome within the sample space.
  • Recognizing that random experiments yield uncertain outcomes is essential for interpreting events accurately in probability theory. Since an event is defined as a subset of outcomes from a sample space, understanding the randomness involved helps us assign probabilities correctly. For example, in flipping a coin, knowing that it's a random experiment allows us to state that the event 'getting heads' has a probability of 0.5. This clarity guides our decision-making based on likely outcomes.
  • Different types of random experiments can significantly influence probability distributions and their applications in real-world scenarios. For example, consider an experiment involving rolling dice versus one involving weather predictions. The first has a finite number of outcomes with clear probabilities, allowing for straightforward probability distributions like uniform distribution. In contrast, weather prediction involves complex factors leading to more nuanced distributions such as normal or binomial distributions. Understanding these differences aids in selecting appropriate models for predicting outcomes in various fields, including finance, engineering, and social sciences.

Related terms

Sample Space: The set of all possible outcomes of a random experiment.

Event: A subset of outcomes from a sample space, often of interest in probability calculations.

Probability Distribution: A function that describes the likelihood of each possible outcome of a random experiment.

1.3.1 Random Experiments

  • Random experiment: toss a coin; sample space: $S=\{heads, tails\}$ or as we usually write it, $\{H,T\}$.
  • Random experiment: roll a die; sample space: $S=\{1, 2, 3, 4, 5, 6\}$.
  • Random experiment: observe the number of iPhones sold by an Apple store in Boston in $2015$; sample space: $S=\{0, 1, 2, 3, \cdots \}$.
  • Random experiment: observe the number of goals in a soccer match; sample space: $S=\{0, 1, 2, 3, \cdots \}$.

When we repeat a random experiment several times, we call each one of them a trial . Thus, a trial is a particular performance of a random experiment. In the example of tossing a coin, each trial will result in either heads or tails. Note that the sample space is defined based on how you define your random experiment. For example,

Example We toss a coin three times and observe the sequence of heads/tails. The sample space here may be defined as $$S = \{(H,H,H), (H,H,T), (H,T,H), (T,H,H), (H,T,T),(T,H,T),(T,T,H),(T,T,T)\}.$$

Our goal is to assign probability to certain events . For example, suppose that we would like to know the probability that the outcome of rolling a fair die is an even number. In this case, our event is the set $E=\{2, 4, 6\}$. If the result of our random experiment belongs to the set $E$, we say that the event $E$ has occurred. Thus an event is a collection of possible outcomes. In other words, an event is a subset of the sample space to which we assign a probability. Although we have not yet discussed how to find the probability of an event, you might be able to guess that the probability of $\{2, 4, 6 \}$ is $50$ percent which is the same as $\frac{1}{2}$ in the probability theory convention.
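The 1/2 guess for the event $E=\{2,4,6\}$ can be checked with a short, illustrative Python sketch (nothing here comes from the text): once by counting and once by simulating many rolls of a fair die.

```python
import random
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}      # sample space of one die roll
E = {2, 4, 6}               # event: the outcome is even

# By counting (all outcomes equally likely)
print(Fraction(len(E), len(S)))                 # 1/2

# By simulation: relative frequency over many trials
n = 100_000
hits = sum(random.randint(1, 6) in E for _ in range(n))
print(hits / n)                                 # close to 0.5
```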

  • Outcome: A result of a random experiment.
  • Sample Space: The set of all possible outcomes.
  • Event: A subset of the sample space.

Union and Intersection: If $A$ and $B$ are events, then $A \cup B$ and $A \cap B$ are also events. By remembering the definition of union and intersection, we observe that $A \cup B$ occurs if $A$ or $B$ occur. Similarly, $A \cap B$ occurs if both $A$ and $B$ occur. Similarly, if $A_1, A_2,\cdots, A_n$ are events, then the event $A_1 \cup A_2 \cup A_3 \cdots \cup A_n$ occurs if at least one of $A_1, A_2,\cdots, A_n$ occurs. The event $A_1 \cap A_2 \cap A_3 \cdots \cap A_n$ occurs if all of $A_1, A_2,\cdots, A_n$ occur. It can be helpful to remember that the key words "or" and "at least" correspond to unions and the key words "and" and "all of" correspond to intersections.
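Because events are just subsets of the sample space, unions, intersections, and complements can be illustrated directly with Python sets. The events $A$ and $B$ below are my own examples on the single-die sample space, not taken from the text.

```python
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}          # "an even number is rolled"
B = {4, 5, 6}          # "a number greater than three is rolled"

print(A | B)           # A union B: occurs if A or B occurs    -> {2, 4, 5, 6}
print(A & B)           # A intersect B: occurs if both occur   -> {4, 6}
print(S - A)           # complement of A within S              -> {1, 3, 5}
```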



A First Course on Statistical Inference

1.1 Probability review

The following probability review starts with the very conceptualization of “randomness” through the random experiment , introduces the set theory needed for probability functions, and introduces the three increasingly general definitions of probability.

1.1.1 Random experiment

Definition 1.1 (Random experiment) A random experiment \(\xi\) is an experiment with the following properties:

  • its outcome is impossible to predict;
  • if the experiment is repeated under the same conditions, the outcome may be different;
  • the set of possible outcomes is known in advance.

The following concepts are associated with a random experiment:

  • The set of possible outcomes of \(\xi\) is termed as the sample space and is denoted as \(\Omega.\)
  • The individual outcomes of \(\xi\) are called sample outcomes , realizations , or elements , and are denoted by \(\omega\in\Omega.\)
  • An event \(A\) is a subset of \(\Omega.\) Once the experiment has been performed, it is said that \(A\) “happened” if the individual outcome \(\omega\) of \(\xi\) belongs to \(A.\)

Example 1.1 The following are random experiments:

  • \(\xi=\) “Tossing a coin”. The sample space is \(\Omega=\{\mathrm{H},\mathrm{T}\}\) ( H eads, T ails). Some events are: \(\emptyset,\) \(\{\mathrm{H}\},\) \(\{\mathrm{T}\},\) \(\Omega.\)
  • \(\xi=\) “Measuring the number of car accidents within an hour in Spain”. The sample space is \(\Omega=\mathbb{N}\cup\{0\}.\)
  • \(\xi=\) “Measuring the weight (in kgs) of a pedestrian between \(20\) and \(40\) years old”. The sample space is \(\Omega=[m,\infty),\) where \(m\) is a certain minimum weight.

1.1.2 Borelians and measurable spaces

A probability function will be defined as a mapping of subsets (events) of the sample space \(\Omega\) to elements in \([0,1].\) Therefore, it is necessary to have a “good” structure for these subsets in order to obtain “good” properties for the probability function. A \(\sigma\) -algebra gives such a structure.

Definition 1.2 ( \(\sigma\) -algebra) A \(\sigma\) -algebra \(\mathcal{A}\) over a set \(\Omega\) is a collection of subsets of \(\Omega\) with the following properties:

  • \(\emptyset\in \mathcal{A};\)
  • If \(A\in\mathcal{A},\) then \(\overline{A}\in \mathcal{A},\) where \(\overline{A}\) is the complement of \(A;\)
  • If \(\{A_i\}_{i=1}^\infty\subset\mathcal{A},\) then \(\cup_{i=1}^{\infty} A_i\in \mathcal{A}.\)

A \(\sigma\) -algebra \(\mathcal{A}\) over \(\Omega\) defines a collection of sets that is closed under complements and countable unions and intersections, i.e., it is impossible to take sets in \(\mathcal{A},\) operate on them through unions, intersections, and complements, and end up with a set that does not belong to \(\mathcal{A}.\)

The following are two commonly employed \(\sigma\) -algebras.

Definition 1.3 (Discrete \(\sigma\) -algebra) The discrete \(\sigma\) -algebra of the set \(\Omega\) is the power set \(\mathcal{P}(\Omega):=\{A:A\subset \Omega\},\) that is, the collection of all subsets of \(\Omega.\)
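For a small finite \(\Omega,\) the discrete \(\sigma\) -algebra \(\mathcal{P}(\Omega)\) can be generated explicitly. The sketch below is illustrative (the helper name `powerset` is mine) and simply lists every subset.

```python
from itertools import combinations

def powerset(omega):
    """All subsets of omega, i.e. the discrete sigma-algebra P(omega)."""
    omega = list(omega)
    return [frozenset(c) for r in range(len(omega) + 1)
            for c in combinations(omega, r)]

print(powerset({"H", "T"}))            # the empty set, {H}, {T}, and {H, T}
print(len(powerset({1, 2, 3, 4})))     # 2**4 = 16 subsets
```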

Definition 1.4 (Borel \(\sigma\) -algebra) Let \(\Omega=\mathbb{R}\) and consider the collection of intervals

\[\begin{align*} \mathcal{I}:=\{(-\infty,a]: a\in \mathbb{R}\}. \end{align*}\]

The Borel \(\sigma\) -algebra , denoted by \(\mathcal{B},\) is defined as the smallest \(\sigma\) -algebra that contains \(\mathcal{I}.\)

Remark . The smallest \(\sigma\) -algebra coincides with the intersection of all \(\sigma\) -algebras containing \(\mathcal{I}.\)

Remark . The Borel \(\sigma\) -algebra \(\mathcal{B}\) contains all the complements, countable intersections, and countable unions of elements of \(\mathcal{I}.\) Particularly, \(\mathcal{B}\) contains all kinds of intervals, isolated points of \(\mathbb{R},\) and unions thereof. For example:

  • \((a,\infty)\in\mathcal{B},\) since \((a,\infty)=\overline{(-\infty,a]},\) and \((-\infty,a]\in\mathcal{B}.\)
  • \((a,b]\in\mathcal{B},\) \(\forall a<b,\) since \((a,b]=(-\infty,b]\cap (a,\infty),\) where \((-\infty,b]\in\mathcal{B}\) and \((a,\infty)\in\mathcal{B}.\)
  • \(\{a\}\in\mathcal{B},\) \(\forall a\in\mathbb{R},\) since \(\{a\}=\bigcap_{n=1}^{\infty}\big(a-\tfrac{1}{n},a\big],\) which belongs to \(\mathcal{B}.\)

However, \(\mathcal{B}\) is not \(\mathcal{P}(\mathbb{R})\) (indeed, \(\mathcal{B}\varsubsetneq\mathcal{P}(\mathbb{R})\) ).

Intuitively, the Borel \(\sigma\) -algebra represents the vast collection of sensible subsets of \(\mathbb{R},\) understanding sensible subsets as those constructed with set operations on intervals, which are a very well-behaved type of set. The emphasis on sensible is important: \(\mathcal{P}(\mathbb{R}),\) in which \(\mathcal{B}\) is contained, is a space populated also by monster sets , such as the Vitali set . We want to stay far away from them!

When the sample space \(\Omega\) is continuous and is not \(\mathbb{R},\) but a subset of \(\mathbb{R},\) we need to define a \(\sigma\) -algebra over the subsets of \(\Omega.\)

Definition 1.5 (Restricted Borel \(\sigma\) -algebra) Let \(A\subset \mathbb{R}.\) The Borel \(\sigma\) -algebra restricted to \(A\) is defined as

\[\begin{align*} \mathcal{B}_{A}:=\{B\cap A: B\in\mathcal{B}\}. \end{align*}\]

The \(\sigma\) -algebra \(\mathcal{A}\) over \(\Omega\) gives the required set structure to be able to measure the “size” of the sets with a probability function.

Definition 1.6 (Measurable space) The pair \((\Omega,\mathcal{A}),\) where \(\Omega\) is a sample space and \(\mathcal{A}\) is a \(\sigma\) -algebra over \(\Omega,\) is referred to as a measurable space .

Example 1.2 The measurable space for the experiment \(\xi=\) “Tossing a coin” described in Example 1.1 is

\[\begin{align*} \Omega=\{\mathrm{H}, \mathrm{T}\}, \quad \mathcal{A}=\{\emptyset,\{\mathrm{H}\},\{\mathrm{T}\},\Omega\}. \end{align*}\]

The sample space for experiment \(\xi=\) “Measuring the number of car accidents within an hour in Spain” is \(\Omega=\mathbb{N}_0,\) where \(\mathbb{N}_0=\mathbb{N}\cup \{0\}.\) Taking the \(\sigma\) -algebra \(\mathcal{P}(\Omega),\) then \((\Omega, \mathcal{P}(\Omega))\) is a measurable space.

For experiment \(\xi=\) “Measuring the weight (in kgs) of a pedestrian between \(20\) and \(40\) years old”, in which the sample space is \(\Omega=[m,\infty)\subset\mathbb{R},\) an adequate \(\sigma\) -algebra is the Borel \(\sigma\) -algebra restricted to \(\Omega,\) \(\mathcal{B}_{[m,\infty)}.\)

1.1.3 Probability definitions

A probability function maps an element of the \(\sigma\) -algebra to a real number in the interval \([0,1].\) Thus, probability functions are defined on measurable spaces and will assign a “measure” (called probability) to each set. We will see this formally in Definition 1.9 , after seeing some examples and more intuitive definitions next.

Example 1.3 The following tables show the relative frequencies of the outcomes of the random experiments of Example 1.1 when those experiments are repeated \(n\) times.

Tossing a coin \(n\) times. Table 1.1 and Figure 1.2 show that the relative frequencies of both “heads” and “tails” converge to \(0.5.\)

Table 1.1: Relative frequencies of “heads” and “tails” for \(n\) random experiments.
| \(n\) | Heads | Tails |
|------:|------:|------:|
| 10 | 0.300 | 0.700 |
| 20 | 0.500 | 0.500 |
| 30 | 0.433 | 0.567 |
| 100 | 0.380 | 0.620 |
| 1000 | 0.495 | 0.505 |


Figure 1.2: Convergence of the relative frequencies of “heads” and “tails” to \(0.5\) as the number of random experiments \(n\) grows.

Measuring the number of car accidents for \(n\) independent hours in Spain (simulated data). Table 1.2 and Figure 1.3 show the convergence of the relative frequencies of the experiment.

Table 1.2: Relative frequencies of car accidents in Spain for \(n\) hours.
| \(n\) | \(0\) | \(1\) | \(2\) | \(3\) | \(4\) | \(5\) | \(\geq 6\) |
|------:|------:|------:|------:|------:|------:|------:|------:|
| 10 | 0.000 | 0.000 | 0.300 | 0.300 | 0.100 | 0.100 | 0.200 |
| 20 | 0.000 | 0.000 | 0.200 | 0.200 | 0.100 | 0.100 | 0.400 |
| 30 | 0.000 | 0.033 | 0.267 | 0.133 | 0.100 | 0.100 | 0.367 |
| 100 | 0.030 | 0.040 | 0.260 | 0.140 | 0.160 | 0.110 | 0.260 |
| 1000 | 0.021 | 0.078 | 0.145 | 0.192 | 0.200 | 0.150 | 0.214 |
| 10000 | 0.018 | 0.074 | 0.149 | 0.193 | 0.194 | 0.159 | 0.213 |


Figure 1.3: Convergence of the relative frequencies of car accidents as the number of measured hours \(n\) grows.

Measuring the weight (in kgs) of \(n\) pedestrians between \(20\) and \(40\) years old. Again, Table 1.3 and Figure 1.4 show the convergence of the relative frequencies of the weight intervals.

Table 1.3: Relative frequencies of weight intervals for \(n\) measured pedestrians.
| \(n\) | \([0, 35)\) | \([35, 45)\) | \([45, 55)\) | \([55, 65)\) | \([65, \infty)\) |
|------:|------:|------:|------:|------:|------:|
| 10 | 0.000 | 0.000 | 0.700 | 0.300 | 0.000 |
| 20 | 0.000 | 0.100 | 0.700 | 0.200 | 0.000 |
| 30 | 0.000 | 0.067 | 0.767 | 0.167 | 0.000 |
| 100 | 0.000 | 0.220 | 0.670 | 0.110 | 0.000 |
| 1000 | 0.003 | 0.200 | 0.690 | 0.107 | 0.000 |
| 5000 | 0.003 | 0.207 | 0.676 | 0.113 | 0.001 |


Figure 1.4: Convergence of the relative frequencies of the weight intervals as the number of measured pedestrians \(n\) grows.

As hinted from the previous examples, the frequentist definition of the probability of an event is the limit of the relative frequency of that event when the number of repetitions of the experiment tends to infinity.

Definition 1.7 (Frequentist definition of probability) The frequentist definition of the probability of an event \(A\) is

\[\begin{align*} \mathbb{P}(A):=\lim_{n\to\infty} \frac{n_A}{n}, \end{align*}\]

where \(n\) stands for the number of repetitions of the experiment and \(n_A\) is the number of repetitions in which \(A\) happens.
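The convergence pattern shown in Table 1.1 can be reproduced with a quick, illustrative simulation (the seed and sample sizes are my own choices; the output will be similar in spirit to, but not identical with, the book's figures):

```python
import random

random.seed(1)
for n in (10, 20, 30, 100, 1000, 100_000):
    tosses = [random.choice("HT") for _ in range(n)]
    heads = tosses.count("H") / n                  # relative frequency n_A / n
    print(f"n = {n:>6}:  heads {heads:.3f}   tails {1 - heads:.3f}")
```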

The Laplace definition of probability can be employed for experiments that have a finite number of possible outcomes, and whose results are equally likely.

Definition 1.8 (Laplace definition of probability) The Laplace definition of probability of an event \(A\) is the proportion of favorable outcomes to \(A,\) that is,

\[\begin{align*} \mathbb{P}(A):=\frac{\# A}{\# \Omega}, \end{align*}\]

where \(\#\Omega\) is the number of possible outcomes of the experiment and \(\# A\) is the number of outcomes in \(A.\)

Finally, the Kolmogorov axiomatic definition of probability does not establish the probability as a unique function, as the previous probability definitions do, but presents three axioms that must be satisfied by any so-called “probability function”. 1

Definition 1.9 (Kolmogorov definition of probability) Let \((\Omega,\mathcal{A})\) be a measurable space. A probability function is an application \(\mathbb{P}:\mathcal{A}\rightarrow \mathbb{R}\) that satisfies the following axioms:

  • ( Non-negativity ) \(\forall A\in\mathcal{A},\) \(\mathbb{P}(A)\geq 0;\)
  • ( Unitarity ) \(\mathbb{P}(\Omega)=1;\)
  • ( \(\sigma\) -additivity ) For any sequence \(A_1,A_2,\ldots\) of disjoint events ( \(A_i\cap A_j=\emptyset,\) \(i\neq j\) ) of \(\mathcal{A},\) it holds

\[\begin{align*} \mathbb{P}\left(\bigcup_{n=1}^{\infty} A_n\right)=\sum_{n=1}^{\infty} \mathbb{P}(A_n). \end{align*}\]

Observe that the \(\sigma\) -additivity property is well-defined: since \(\mathcal{A}\) is a \(\sigma\) -algebra, then the countable union belongs to \(\mathcal{A}\) also, and therefore the probability function takes as argument a proper element from \(\mathcal{A}.\) For this reason the closedness property of \(\mathcal{A}\) under unions, intersections, and complements is especially important.
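On a finite sample space the axioms can even be checked mechanically. The sketch below is illustrative (names and values are mine), and it uses the fact that on a finite \(\Omega\) the \(\sigma\) -additivity axiom reduces to additivity over pairs of disjoint events.

```python
from itertools import combinations

Omega = frozenset({"H", "T"})
P = {frozenset(): 0.0, frozenset({"H"}): 0.5, frozenset({"T"}): 0.5, Omega: 1.0}

def all_events(omega):
    """Every subset of omega (the discrete sigma-algebra)."""
    omega = list(omega)
    return [frozenset(c) for r in range(len(omega) + 1)
            for c in combinations(omega, r)]

# Axiom 1 (non-negativity) and Axiom 2 (unitarity)
assert all(p >= 0 for p in P.values()) and P[Omega] == 1.0

# Axiom 3, finite form: additivity over disjoint events
for A in all_events(Omega):
    for B in all_events(Omega):
        if not (A & B):                            # A and B are disjoint
            assert abs(P[A | B] - (P[A] + P[B])) < 1e-12

print("P satisfies the (finite) Kolmogorov axioms on this measurable space")
```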

Definition 1.10 (Probability space) A probability space is a trio \((\Omega,\mathcal{A}, \mathbb{P}),\) where \(\mathbb{P}\) is a probability function defined on the measurable space \((\Omega,\mathcal{A}).\)

Example 1.4 Consider the first experiment described in Example 1.1 with the measurable space \((\Omega,\mathcal{A}),\) where

\[\begin{align*} \Omega=\{\mathrm{H},\mathrm{T}\}, \quad \mathcal{A}=\{\emptyset,\{\mathrm{H}\},\{\mathrm{T}\},\Omega\}. \end{align*}\]

A probability function is \(\mathbb{P}_1:\mathcal{A}\rightarrow[0,1],\) defined as

\[\begin{align*} \mathbb{P}_1(\emptyset):=0, \ \mathbb{P}_1(\{\mathrm{H}\}):=\mathbb{P}_1(\{\mathrm{T}\}):=1/2, \ \mathbb{P}_1(\Omega):=1. \end{align*}\]

It is straightforward to check that \(\mathbb{P}_1\) satisfies the three definitions of probability. Consider now \(\mathbb{P}_2:\mathcal{A}\rightarrow[0,1]\) defined as

\[\begin{align*} \mathbb{P}_2(\emptyset):=0, \ \mathbb{P}_2(\{\mathrm{H}\}):=p<1/2, \ \mathbb{P}_2(\{\mathrm{T}\}):=1-p, \ \mathbb{P}_2(\Omega):=1. \end{align*}\]

If the coin is fair, then \(\mathbb{P}_2\) satisfies neither the frequentist definition nor the Laplace definition, since it assigns unequal probabilities to equally likely outcomes. However, it does verify the Kolmogorov axiomatic definition. Several probability functions, as well as several probability spaces, are mathematically possible! But, of course, some are more sensible than others according to the random experiment they are modeling.

Example 1.5 We can define a probability function for the second experiment of Example 1.1 , with the measurable space \((\Omega,\mathcal{P}(\Omega)),\) in the following way:

  • For the individual outcomes, the probability is defined as

\[\begin{align*} \begin{array}{lllll} &\mathbb{P}(\{0\}):=0.018, &\mathbb{P}(\{1\}):=0.074, &\mathbb{P}(\{2\}):=0.149, \\ &\mathbb{P}(\{3\}):=0.193, &\mathbb{P}(\{4\}):=0.194, &\mathbb{P}(\{5\}):=0.159, \\ &\mathbb{P}(\{6\}):=0.106, &\mathbb{P}(\{7\}):=0.057, &\mathbb{P}(\{8\}):=0.028, \\ &\mathbb{P}(\{9\}):=0.022, &\mathbb{P}(\emptyset):=0, &\mathbb{P}(\{i\}):=0,\ \forall i>9. \end{array} \end{align*}\]

  • For subsets of \(\Omega\) with more than one element, the probability is defined as the sum of the probabilities of the individual outcomes belonging to the subset. That is, if \(A=\{a_1,\ldots,a_n\},\) with \(a_i\in \Omega,\) then the probability of \(A\) is

\[\begin{align*} \mathbb{P}(A):=\sum_{i=1}^n \mathbb{P}(\{a_i\}). \end{align*}\]

This probability function indeed satisfies the Kolmogorov axiomatic definition.
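A short sketch of this construction (the atom probabilities copy the values listed above; the function name `P` and the chosen events are illustrative):

```python
atom = {0: 0.018, 1: 0.074, 2: 0.149, 3: 0.193, 4: 0.194,
        5: 0.159, 6: 0.106, 7: 0.057, 8: 0.028, 9: 0.022}
# P({i}) = 0 for every i > 9, and P(empty set) = 0

def P(A):
    """Probability of a subset A of the sample space, as a sum of atom probabilities."""
    return sum(atom.get(a, 0.0) for a in A)

print(P({0, 1, 2}))        # P(at most two accidents), about 0.241
print(P(set()))            # 0.0
print(P(range(10)))        # approximately 1, as the atoms sum to one
```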

Example 1.6 Consider a modification of the first experiment described in Example 1.1 , where now \(\xi=\) “Toss a coin two times”. Then,

\[\begin{align*} \Omega=\{\mathrm{HH},\mathrm{HT},\mathrm{TH},\mathrm{TT}\}. \end{align*}\]

\[\begin{align*} \mathcal{A}_1=\{\emptyset,\{\mathrm{HH}\},\ldots,\{\mathrm{HH},\mathrm{HT}\},\ldots,\{\mathrm{HH},\mathrm{HT},\mathrm{TH}\},\ldots,\Omega\}=\mathcal{P}(\Omega). \end{align*}\]

Recall that the cardinality of \(\mathcal{P}(\Omega)\) is \(\#\mathcal{P}(\Omega)=2^{\#\Omega}.\) This can be easily checked for this example by adding how many events comprised by \(0\leq k\leq4\) outcomes are possible: \(\binom{4}{0}+\binom{4}{1}+\binom{4}{2}+\binom{4}{3}+\binom{4}{4}=(1+1)^4\) (Newton’s binomial). For the measurable space \((\Omega,\mathcal{A}_1),\) a probability function \(\mathbb{P}:\mathcal{A}_1\rightarrow[0,1]\) can be defined as

\[\begin{align*} \mathbb{P}(\{\omega\}):=1/4,\quad \forall \omega\in\Omega. \end{align*}\]

Then, \(\mathbb{P}(A)=\sum_{\omega\in A}\mathbb{P}(\{\omega\}),\) \(\forall A\in\mathcal{A}_1.\) This is a valid probability that satisfies the three Kolmogorov’s axioms (and also the frequentist and Laplace definitions) and therefore \((\Omega,\mathcal{A}_1,\mathbb{P})\) is a probability space.

Another possible \(\sigma\) -algebra for \(\xi\) is \(\mathcal{A}_2=\{\emptyset,\{\mathrm{HH}\},\{\mathrm{HT,TH,TT}\},\Omega\},\) on which \(\mathbb{P}\) is still well-defined. Then, another perfectly valid probability space is \((\Omega,\mathcal{A}_2,\mathbb{P}).\) This probability space would not make much sense for modelling \(\xi,\) however, since events such as \(\{\mathrm{HT}\}\) do not belong to \(\mathcal{A}_2\) and therefore cannot be assigned a probability.

Proposition 1.1 (Basic probability results) Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space and \(A,B\in\mathcal{A}.\)

  • Probability of the union: \(\mathbb{P}(A\cup B)=\mathbb{P}(A)+\mathbb{P}(B)-\mathbb{P}(A\cap B).\)
  • De Morgan’s rules: \(\mathbb{P}(\overline{A\cup B})=\mathbb{P}(\overline{A}\cap \overline{B}),\) \(\mathbb{P}(\overline{A\cap B})=\mathbb{P}(\overline{A}\cup \overline{B}).\)

1.1.4 Conditional probability

Conditioning one event on another allows us to quantify the dependence between them via the conditional probability function.

Definition 1.11 (Conditional probability) Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space and \(A,B\in\mathcal{A}\) with \(\mathbb{P}(B)>0.\) The conditional probability of \(A\) given \(B\) is defined as

\[\begin{align} \mathbb{P}(A|B):=\frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)}.\tag{1.1} \end{align}\]

Definition 1.12 (Independent events) Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space and \(A,B\in\mathcal{A}.\) Two events are said to be independent if \(\mathbb{P}(A\cap B)=\mathbb{P}(A)\mathbb{P}(B).\)

Equivalently, \(A,B\in\mathcal{A}\) such that \(\mathbb{P}(A),\mathbb{P}(B)>0\) are independent if \(\mathbb{P}(A|B)=\mathbb{P}(A)\) or \(\mathbb{P}(B|A)=\mathbb{P}(B)\) (i.e., knowing one event does not affect the probability of the other). Computing probabilities of intersections, if the events are independent, is trivial. The following results are useful for working with conditional probabilities.
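A small counting sketch of independence on the two-dice sample space (the events \(A\) and \(B\) are my own illustrative choices, not from the text):

```python
from fractions import Fraction
from itertools import product

Omega = list(product(range(1, 7), repeat=2))        # two fair dice: 36 outcomes

def P(event):
    return Fraction(sum(1 for w in Omega if event(w)), len(Omega))

A = lambda w: w[0] % 2 == 0                 # "the first die shows an even number"
B = lambda w: (w[0] + w[1]) % 2 == 0        # "the sum of the dice is even"

print(P(A), P(B))                                    # 1/2 1/2
print(P(lambda w: A(w) and B(w)))                    # 1/4
print(P(lambda w: A(w) and B(w)) == P(A) * P(B))     # True: A and B are independent
```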

Proposition 1.2 (Basic conditional probability results) Let \((\Omega,\mathcal{A},\mathbb{P})\) be a probability space.

  • Law of total probability: If \(A_1,\ldots,A_k\) is a partition of \(\Omega\) (i.e., \(\Omega=\cup_{i=1}^kA_i\) and \(A_i\cap A_j=\emptyset\) for \(i\neq j\) ) that belongs to \(\mathcal{A},\) \(\mathbb{P}(A_i)>0\) for \(i=1,\ldots,k,\) and \(B\in\mathcal{A},\) then \[\begin{align*} \mathbb{P}(B)=\sum_{i=1}^k\mathbb{P}(B|A_i)\mathbb{P}(A_i). \end{align*}\]
  • Bayes’ theorem: 2 If \(A_1,\ldots,A_k\) is a partition of \(\Omega\) that belongs to \(\mathcal{A},\) \(\mathbb{P}(A_i)>0\) for \(i=1,\ldots,k,\) and \(B\in\mathcal{A}\) is such that \(\mathbb{P}(B)>0,\) then \[\begin{align*} \mathbb{P}(A_i|B)=\frac{\mathbb{P}(B|A_i)\mathbb{P}(A_i)}{\sum_{j=1}^k\mathbb{P}(B|A_j)\mathbb{P}(A_j)}. \end{align*}\]

Proving the previous results is not difficult. Also, learning how to do it is a good way of always remembering them.
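A worked sketch of the law of total probability and Bayes' theorem with made-up numbers (a hypothetical test with 1% prevalence, 95% sensitivity, and a 10% false-positive rate; none of these figures come from the text):

```python
# Partition of Omega: A1 = "condition present", A2 = "condition absent"
P_A1, P_A2 = 0.01, 0.99

# Conditional probabilities of a positive test result B, given each part (hypothetical)
P_B_given_A1 = 0.95
P_B_given_A2 = 0.10

# Law of total probability: P(B) = sum_i P(B | A_i) P(A_i)
P_B = P_B_given_A1 * P_A1 + P_B_given_A2 * P_A2

# Bayes' theorem: P(A1 | B) = P(B | A1) P(A1) / P(B)
P_A1_given_B = P_B_given_A1 * P_A1 / P_B

print(round(P_B, 4))            # 0.1085
print(round(P_A1_given_B, 4))   # about 0.0876: a positive result is far from conclusive
```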

Note this definition frees the mathematical meaning of probability from the “tyranny” of the random experiment by abstracting the concept of probability. ↩︎

“Theorem” might be an overstatement for this result, which is obtained from two lines of mathematics. That’s why it is many times known as the Bayes formula . ↩︎

MA121: Introduction to Statistics


Basic Concepts of Probability

Read this section about basic concepts of probability, including sample spaces and events. This section discusses set operations using Venn diagrams, including complements, intersections, and unions. Finally, it introduces conditional probability and discusses independent events.

LEARNING OBJECTIVES

  • To learn the concept of the sample space associated with a random experiment.
  • To learn the concept of an event associated with a random experiment.
  • To learn the concept of the probability of an event.

Sample Spaces and Events

Rolling an ordinary six-sided die is a familiar example of a random experiment, an action for which all possible outcomes can be listed, but for which the actual outcome on any given trial of the experiment cannot be predicted with certainty. In such a situation we wish to assign to each outcome, such as rolling a two, a number, called the probability of the outcome, that indicates how likely it is that the outcome will occur. Similarly, we would like to assign a probability to any event, or collection of outcomes, such as rolling an even number, which indicates how likely it is that the event will occur if the experiment is performed. This section provides a framework for discussing probability problems, using the terms just mentioned.

A random experiment is a mechanism that produces a definite outcome that cannot be predicted with certainty. The sample space associated with a random experiment is the set of all possible outcomes. An event is a subset of the sample space .

Construct a sample space for the experiment that consists of tossing a single coin.

Construct a sample space for the experiment that consists of rolling a single die. Find the events that correspond to the phrases "an even number is rolled" and "a number greater than two is rolled".

Figure 3.1 Venn Diagrams for Two Sample Spaces


A random experiment consists of tossing two coins.

a. Construct a sample space for the situation that the coins are indistinguishable, such as two brand new pennies.

b. Construct a sample space for the situation that the coins are distinguishable, such as one a penny and the other a nickel.

A device that can be helpful in identifying all possible outcomes of a random experiment, particularly one that can be viewed as proceeding in stages, is what is called a tree diagram. It is described in the following example.

Construct a sample space that describes all three-child families according to the genders of the children with respect to birth order.

Tree Diagram For Three-Child Families


The line segments are called branches of the tree. The right ending point of each branch is called a node. The nodes on the extreme right are the final nodes ; to each one there corresponds an outcome, as shown in the figure.

From the tree it is easy to read off the eight outcomes of the experiment, so the sample space is, reading from the top to the bottom of the final nodes in the tree, S = {bbb, bbg, bgb, bgg, gbb, gbg, ggb, ggg}.

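The same eight-outcome sample space can be generated without drawing the tree; the following is an illustrative sketch (the labels 'b' and 'g' stand for boy and girl, matching the notation above):

```python
from itertools import product

# Each child is 'b' or 'g'; birth order matters, so take the Cartesian product
S = ["".join(kids) for kids in product("bg", repeat=3)]
print(S)         # ['bbb', 'bbg', 'bgb', 'bgg', 'gbb', 'gbg', 'ggb', 'ggg']
print(len(S))    # 8, one outcome per final node of the tree
```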