Combining Random Variables


You have probably seen the tags on items you have purchased that say "inspected by". Sometimes, like in car production, an item can be inspected by multiple people over the course of putting it together. If you know the average time it takes for each inspector to check the car, and the standard deviation for each inspector, how do you figure out the total inspection time for a random car? That is an application of **combining random variables**!

As you have already seen, many people inspect things like cars before they are sold. Each individual inspector has a mean inspection time and a variance associated with that time. The random variables in this case are the inspectors' individual inspection times, and what you are looking for is the sum of their expected values.

If multiple random events occur which are associated with an outcome, you may want to add them to form a new distribution. The new distribution in the car example would be the total inspection time for the car.

**Combining random variables** means transforming two or more random variables into one.

On the other hand, transforming random variables involves scaling and shifting them. This would happen if you were playing a game multiple times and trying to figure out how much your total wins and losses might be. See the article Transforming Random Variables for more details and examples on that.

One very important thing to check before you combine random variables is that they are independent, or at least that it is reasonable for you to assume that they are independent.

Suppose you have \(3\) people who inspect a cell phone before it gets shipped off from the factory. If no two people ever inspect the phone at the same time, could you combine the random variables of their inspection times to get a new random variable for the total inspection time?

Answer:

Because no inspector ever interacts with a cell phone at the same time as another inspector, it is reasonable to assume that their inspections do not affect each other. That would mean that their inspection times are independent, and you can combine the random variables.

What about in the next example?

Suppose that your first random variable is how many hours a randomly chosen person slept yesterday, and your second random variable is how many hours that same person was awake. Can you combine those random variables?

Answer:

No. How many hours a person is awake is dependent on how many hours they were asleep, so these are not independent random variables, and they cannot be combined.

The notation \(T = X + Y\) can be confusing. Are you really just adding things together? Let's take a look at an example.

Let's think about two people who each do a separate inspection of a cell phone. The company keeps track of how long each person takes to do an inspection. Then you can set up:

- \(X\) is the set of times for the first person to inspect a phone; and
- \(Y\) is the set of times for the second person to inspect a phone.

Rather than looking at each person inspecting a phone individually, the company wants to get an idea of the total time it takes to inspect a phone. So in this example, combining the random variables \(X\) and \(Y\) means making a random variable \(T\) with \(T = X + Y\) where you are actually adding the times in \(X\) to the times in \(Y\) to get a total time.

It can help to look at the range of times of \(T\). If the range of times in \(X\) is \(6\) minutes to \(8\) minutes, and the range of times in \(Y\) is \(4\) minutes to \(5\) minutes, then the range of \(T = X + Y\) is \(6+4 =10\) minutes to \(8+5=13\) minutes.

Suppose the company took \(20\) measurements of each inspector, and graphed them in the histograms below.

The mean for Inspector #1 is \(7.1\) minutes, and the mean for Inspector #2 is \(4.6\) minutes. Then their times are combined into a new random distribution, \(T\), and the histogram for that data is above.

Notice that the range in times of the histogram goes between \(10\) and \(13\) minutes. The mean for the combined histogram is \(11.7\) minutes, which is about what you would expect given the means for the individual inspections.

How does combining random variables affect the mean?

While you can combine more than two random variables as long as they are independent, for simplicity's sake the rest of this article concentrates on combining just two of them.

Suppose \(X\) and \(Y\) are two random variables that are independent. For the mean of \(X\) write \(\mu_X\), and for the mean of \(Y\) write \(\mu_Y\). How do you combine their means?

The mean of the sum of two random variables is the sum of their means. In other words, if \(T = X + Y\) then\[ \mu_T = \mu_X + \mu_Y.\]

If you take the difference of two random variables, then the mean of the difference is the difference of their means. So if \(T = X - Y\), then\[ \mu_T = \mu_X - \mu_Y.\]

Just like in regular subtraction, the order makes a difference. Let's look at a couple of examples.
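These two rules can be checked with a quick simulation; the normal distributions and parameters below are made up purely for illustration, and any pair of independent distributions would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent random variables (hypothetical inspection times, in minutes).
x = rng.normal(7.0, 0.5, size=100_000)
y = rng.normal(4.5, 0.3, size=100_000)

# The mean of the sum is the sum of the means...
print((x + y).mean())  # close to 7.0 + 4.5 = 11.5

# ...and the mean of the difference is the difference of the means.
print((x - y).mean())  # close to 7.0 - 4.5 = 2.5
```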

Jake and Anna work in the same store, but in different departments. Jake expects to sell an average of \(5\) shirts per day and Anna expects to sell an average of \(3\). What is the total expected average number of shirts sold in the store per day?

Answer:

Let \(X\) be the random variable representing how many shirts Jake sells, and \(Y\) be the random variable representing Anna's sales. You would hope these are independent random variables! Call \(T\) the random variable of the total sales in the store, so \(T = X + Y\).

From the problem statement,

\[ \mu_X = 5 \text{ and } \mu_Y = 3.\]

Therefore, they can expect to sell \[ \begin{align} \mu_T &= \mu_X + \mu_Y \\ &= 5 + 3 \\ &= 8, \end{align}\]or in other words a total of \(8\) shirts.

What if you are asked about how many more shirts Jake would expect to sell?

Jake and Anna work in the same store, but in different departments. Jake expects to sell an average of \(5\) shirts per day and Anna expects to sell an average of \(3\). How many more shirts can Jake expect to sell per day?

**Solution:**

Just like before, let \(X\) be the random variable representing how many shirts Jake sells, and \(Y\) be the random variable representing Anna's sales, where you are reasonably assuming they are independent. Call \(T\) the random variable of the difference between Jake and Anna's sales in the store. Then since \(T = X - Y\),

\[ \begin{align} \mu_T &= \mu_X - \mu_Y \\ &= 5 - 3 \\ &= 2. \end{align}\]

So Jake can expect to sell \(2\) more shirts than Anna.

What if you had looked at the difference between Anna's and Jake's sales instead? Then you would have found a mean of \(-2\)! That can happen, and you need to look at the actual combined distribution to figure out what it implies in real life. A negative mean for the difference in sales just means that, in general, Anna sells fewer shirts than Jake does.

Just like with the mean, combining the variance of two independent random variables is a matter of addition. Suppose \(X\) and \(Y\) are two random variables that are independent. For the standard deviation of \(X\) write \(\sigma_X\), and for the standard deviation of \(Y\) write \(\sigma_Y\). Then:

The variance of the sum of two random variables is the sum of their variances. In other words, if \(T = X + Y\) then\[ \sigma^2_T = \sigma^2_X + \sigma^2_Y.\]

If you take the difference of two random variables, then the variance of the difference is the sum of their variances. So if \(T = X - Y\), then\[ \sigma^2_T = \sigma^2_X + \sigma^2_Y.\]

Wait a minute, that second part doesn't look right! Why is it that when you subtract two distributions you aren't subtracting their variances? It is because the variance is a measure of how spread apart the distribution is. So if you combine two distributions, the new one is going to have a larger spread than either of the two original ones.

Does this imply that you can combine the standard deviation of two independent random variables with addition as well? Absolutely not! Remember that the standard deviation is the square root of the variance, and that

\[ \sqrt{a + b} \ne \sqrt{a} + \sqrt{b}.\]

So the standard deviations cannot be added in the same way that the variance can be.
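A simulation makes the variance rule (and the standard deviation warning) concrete; the distributions and parameters below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent variables with standard deviations 1 and 4 (variances 1 and 16).
sigma_x, sigma_y = 1.0, 4.0
x = rng.normal(5.0, sigma_x, size=200_000)
y = rng.normal(3.0, sigma_y, size=200_000)

# Variances add for the sum AND for the difference:
print((x + y).var())  # close to 1 + 16 = 17
print((x - y).var())  # also close to 17

# Standard deviations do not add:
print((x + y).std())      # close to sqrt(17), about 4.12
print(sigma_x + sigma_y)  # 5.0, which is NOT the combined standard deviation
```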

Let's look at an example to show how it works.

Jake and Anna work in the same store, but in different departments. Jake expects to sell an average of \(5\) shirts per day and Anna expects to sell an average of \(3\). However, Jake has a standard deviation in his sales of \(1\) shirt, while Anna has a standard deviation of \(4\) shirts. Is the standard deviation of their combined shirt totals the same as the sum of the standard deviation of their individual totals?

**Solution:**

Setting up some variables:

- \(X\) is the random variable of the number of shirts Jake sells;
- \(Y\) is the random variable of the number of shirts Anna sells; and
- \(T\) is the random variable of the number of shirts they sell combined.

As you have already seen, \(\mu_T = 8\). What about the variance and standard deviation? From the statement of the problem, their individual standard deviations are

\[ \sigma_X = 1 \mbox{ and } \sigma_Y = 4.\]

Then for the variance,

\[ \begin{align} \sigma^2_T &= \sigma^2_X + \sigma^2_Y \\ &= 1^2 + 4^2 \\ &= 17, \end{align} \]

but

\[ \sigma_T = \sqrt{17} \approx 4.1\]

which is not the same as

\[ \sigma_X + \sigma_Y = 1 + 4 = 5.\]

In fact,

\[ \sigma_T < \sigma_X + \sigma_Y.\]

So while the average number of shirts they sell per day stays the same if they work together, the standard deviation of the number of shirts they sell together is smaller than if they stay separate.

In the examples you have looked at so far, it didn't make a difference whether the random variables followed a normal distribution. The only thing that mattered was that they were independent random variables.

When you have two independent continuous random variables that both follow a normal distribution, their sum and difference are normally distributed as well.
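Here is a small numerical illustration of this fact, using made-up parameters: the sum of samples from two independent normals behaves like a single normal whose mean and variance come from the rules above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two independent normal variables (the parameters here are made up).
x = rng.normal(10, 3, size=200_000)
y = rng.normal(20, 4, size=200_000)
t = x + y

# T should again be normal, with mean 10 + 20 = 30 and
# standard deviation sqrt(3^2 + 4^2) = 5.
print(t.mean(), t.std())

# An empirical tail probability should match the normal-model prediction:
empirical = (t > 40).mean()
model = stats.norm.sf(40, loc=30, scale=5)  # about 0.0228
print(empirical, model)
```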

Let's look at an example to illustrate this.

Suppose you have a business where you are making and delivering pizzas, where both making and delivering the pizzas are normal distributions, with

- making the pizza has an average time of \(18\) minutes with a standard deviation of \(1.5\) minutes; and
- delivering the pizzas has an average time of \(25\) minutes with a standard deviation of \(8\) minutes.

(a) What is the probability that making and delivering a pizza takes more than an hour?

(b) What percentage of the pizzas take longer to make than to deliver?

**Solution:**

(a) In this part of the question you are looking for the total time, in other words, the sum of two normally distributed independent random variables. First, let's define the random variables:

- \(X\) is the random variable for the time it takes to make a pizza;
- \(Y\) is the random variable for the time it takes to deliver a pizza; and
- \(T\) is the random variable for the total time to make and deliver a pizza.

You are told that both of the random variables are normal, and you would expect that making the pizza and delivering the pizza are independent of each other. So \(T\) is also normally distributed, with \(T = X + Y\).

The average time to make and deliver a pizza would be

\[ \begin{align} \mu_T &= \mu_X + \mu_Y \\ &= 18 + 25 \\ &= 43 \, min. \end{align}\]

Since the times are independent,

\[ \begin{align} \sigma^2_T &= \sigma^2_X + \sigma^2_Y \\ &= 1.5^2 + 8^2 \\ &= 66.25,\end{align} \]

so

\[ \sigma_T = \sqrt{66.25} \approx 8.1 \, min.\]

In other words, \(T\) is a normal distribution with mean \(43\) and standard deviation \(8.1\).

You want to know the probability that making and delivering a pizza takes more than an hour. The graph below shows the normal distribution for the total time, and the shaded region represents the time over \(60\) minutes.

Then the \(z\)-score associated with \(60\) minutes is

\[ z = \frac{60-43}{8.1} = 2.099\]

which, using a standard normal table, gives the probability of taking more than \(60\) minutes as

\[ P(T>60) = P(z>2.099) = 0.0179.\]

In other words, there is only a \(1.79\%\) chance that a pizza will take longer than an hour to make and deliver!

(b) Next you want to know what percentage of the pizzas take longer to make than to deliver. This time you want to know about the difference between \(X\) and \(Y\), so you need a new random variable, call it \(D\), to represent this. In other words \(D = X - Y\). It is still true that both \(X\) and \(Y\) are independent random variables that follow a normal distribution.

The average time difference between making and delivering a pizza would be

\[ \begin{align} \mu_D &= \mu_X - \mu_Y \\ &= 18 - 25 \\ &= -7 \, min. \end{align}\]

Since the times are independent,

\[ \begin{align} \sigma^2_D &= \sigma^2_X + \sigma^2_Y \\ &= 1.5^2 + 8^2 \\ &= 66.25,\end{align} \]

so

\[ \sigma_D = \sqrt{66.25} \approx 8.1 \, min.\]

In other words, \(D\) is a normal distribution with mean \(-7\) and standard deviation \(8.1\). If a pizza takes longer to make than to deliver, what you want to find is \(P(D>0)\). In the graph below, the shaded region represents when the pizza takes longer to make than to deliver.

Then the \(z\)-score associated with \(0\) minutes is

\[ z = \frac{0-(-7)}{8.1} = 0.864\]

which, using a standard normal table, gives the probability that a pizza takes longer to make than to deliver as

\[ P(D>0) = P(z>0.864) = 0.1938.\]

In other words, about \(19\%\) of the time the pizza will take longer to make than to deliver.
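As a sketch, both tail probabilities can be reproduced with `scipy.stats.norm`, keeping the exact standard deviation \(\sqrt{66.25}\) instead of the rounded \(8.1\), so the results differ slightly from the table-based values.

```python
from math import sqrt
from scipy.stats import norm

mu_x, sigma_x = 18, 1.5   # time to make a pizza (minutes)
mu_y, sigma_y = 25, 8.0   # time to deliver a pizza (minutes)
sigma_comb = sqrt(sigma_x**2 + sigma_y**2)  # sqrt(66.25), about 8.14, for both T and D

# (a) T = X + Y is Normal(43, 8.14); probability a pizza takes over an hour:
p_over_hour = norm.sf(60, loc=mu_x + mu_y, scale=sigma_comb)
print(p_over_hour)  # about 0.018

# (b) D = X - Y is Normal(-7, 8.14); probability making takes longer than delivering:
p_make_longer = norm.sf(0, loc=mu_x - mu_y, scale=sigma_comb)
print(p_make_longer)  # about 0.195
```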

More examples are always good, so let's take a look at a few more.

Suppose you have two inspectors working for you. If either of them inspects an item, it takes an average of \(5.8\) minutes to do the inspection, with a standard deviation of \(8\) minutes. However, if both of them work together to inspect the same item, it takes an average of \(11.6\) minutes with a standard deviation of \(17\) minutes. Is it better for you to have the inspectors working separately or together?

**Solution:**

First, let's give the variables some names:

- \(X\) is the variable for inspector A;
- \(Y\) is the variable for inspector B; and
- \(T\) is the variable for their combined times.

Then \(T = X + Y\), so

\[ \begin{align} \mu_T &= \mu_X + \mu_Y \\ &= 5.8 + 5.8 \\ &= 11.6 \, min. \end{align}\]

That means it doesn't matter if they work together or separately, in either case, their average time is going to be \(11.6\) minutes.

In order for you to look at their combined variances, you need to know that they are independent variables. So for the rest of this example, you will need to assume that two people can inspect an item at the same time without interfering with each other, making them independent variables. Then the variance is

\[\begin{align} \sigma^2_T &= \sigma^2_X + \sigma^2_Y \\ &= 8^2 + 8^2 \\ &= 128, \end{align} \]

and the standard deviation is

\[ \begin{align} \sigma_T &= \sqrt{ \sigma^2_T} \\ & = \sqrt{128} \\ &\approx 11.3 \, min. \end{align} \]

So when the two inspectors work separately, they have a much smaller variation in their inspection time.

What does that mean in terms of having them work together or separately? Given that their mean inspection time is the same either way, it pays to choose the option with the least variation in inspection times. So you want the two inspectors working separately, since their combined standard deviation is \(11.3\) minutes when they work separately, versus \(17\) minutes when they work together.
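The comparison comes down to a single square root; a couple of lines confirm it.

```python
from math import sqrt

# Working separately: two independent inspectors, each with sd 8 minutes,
# so the variances (64 each) add.
sigma_separate = sqrt(8.0**2 + 8.0**2)  # sqrt(128), about 11.31 min

# Working together: the problem states a standard deviation of 17 minutes.
sigma_together = 17.0

print(sigma_separate < sigma_together)  # True: separate inspections vary less
```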

Let's look at one involving toys.

A local shop sells toy cars. The probability of selling between \(0\) and \(5\) toy cars is given in the table below.

| Number of Cars | Probability |
| --- | --- |
| \(0\) | \(0.03\) |
| \(1\) | \(0.16\) |
| \(2\) | \(0.30\) |
| \(3\) | \(0.23\) |
| \(4\) | \(0.17\) |
| \(5\) | \(0.11\) |

Table 1. Probability of selling a given number of toy cars in a day.

Assume that the daily sales of toy cars are independent.

(a) Find the mean and standard deviation for the number of toy cars the shop sells in a day.

(b) If the shop is open \(5\) days a week, how many toy cars can the shop expect to sell, and what is the standard deviation?

**Solution:**

(a) The mean is

\[ \begin{align} \mu &= 0(0.03) + 1(0.16) + 2(0.30) + 3(0.23) + 4(0.17) + 5(0.11) \\ &= 2.68, \end{align}\]

and the variance is

\[ \begin{align} \sigma^2 &= 0^2(0.03) + 1^2(0.16) + \cdots + 5^2(0.11) - 2.68^2 \\ &= 8.90 - 7.18 \\ &\approx 1.72, \end{align}\]

so the standard deviation is \(\sqrt{1.72} \approx 1.31\) toy cars.

(b) Over \(5\) independent days, the means add, so the shop can expect to sell \(5(2.68) = 13.4\) toy cars per week. Remember that you can't just add to get the standard deviation! Instead, you must find the variance for the week, then take the square root. The variance for the weekly toy car sales is additive, so

\[ \begin{align} \text{variance for weekly car sales} &= 5(1.72) \\ &\approx 8.59 \end{align}\]

which gives you

\[ \begin{align} \text{standard deviation for weekly car sales} &\approx \sqrt{8.59} \\ & \approx 2.93 . \end{align}\]
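The whole calculation can be checked directly from the table in a few lines of plain Python:

```python
# Probability distribution for daily toy car sales, taken from Table 1.
pmf = {0: 0.03, 1: 0.16, 2: 0.30, 3: 0.23, 4: 0.17, 5: 0.11}

mean = sum(k * p for k, p in pmf.items())              # E[X]
var = sum(k**2 * p for k, p in pmf.items()) - mean**2  # E[X^2] - E[X]^2

print(round(mean, 2), round(var, 2))  # 2.68 cars per day, variance 1.72

# Five independent days in a week: means and variances both add.
weekly_mean = 5 * mean
weekly_sd = (5 * var) ** 0.5
print(round(weekly_mean, 1), round(weekly_sd, 2))  # 13.4 cars, sd 2.93
```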

- Combining random variables means transforming two or more random variables into one.
- Only combine random variables that are independent!
- The mean of the sum of two random variables is the sum of their means. In other words, if \(T = X + Y\) then\[ \mu_T = \mu_X + \mu_Y.\]
- If you take the difference of two random variables, then the mean of the difference is the difference of their means. So if \(T = X - Y\), then\[ \mu_T = \mu_X - \mu_Y.\]
- The variance of the sum of two random variables is the sum of their variances. In other words, if \(T = X + Y\) then\[ \sigma^2_T = \sigma^2_X + \sigma^2_Y.\]
- If you take the difference of two random variables, then the variance of the difference is the sum of their variances. So if \(T = X - Y\), then\[ \sigma^2_T = \sigma^2_X + \sigma^2_Y.\]

- The sum and difference formulas do not work for the standard deviation!

To combine normal random variables, follow these steps:

**Step 1:** Give the random variables meaningful names, such as \(X\) and \(Y\).

**Step 2:** Identify their means, \(\mu_X\) and \(\mu_Y\), and their standard deviations, \(\sigma_X\) and \(\sigma_Y\).

**Step 3:** Find the mean of the sum or difference by adding or subtracting the means: \(\mu_X + \mu_Y\) or \(\mu_X - \mu_Y\).

**Step 4:** Square the standard deviations to get the variances, add the variances, then take the square root of the sum to obtain the combined standard deviation: \(\sqrt{\sigma_X^2 + \sigma_Y^2}\).
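The four steps can be wrapped up in a small helper function; this is just a sketch, and the function name is made up for illustration.

```python
from math import sqrt

def combine_normals(mu_x, sigma_x, mu_y, sigma_y, op="+"):
    """Mean and standard deviation of X + Y or X - Y for independent normals."""
    # Step 3: add or subtract the means.
    mean = mu_x + mu_y if op == "+" else mu_x - mu_y
    # Step 4: square the standard deviations, add the variances,
    # and take the square root of the sum.
    sd = sqrt(sigma_x**2 + sigma_y**2)
    return mean, sd

# The pizza example: making is N(18, 1.5), delivering is N(25, 8).
print(combine_normals(18, 1.5, 25, 8.0))       # (43, about 8.14)
print(combine_normals(18, 1.5, 25, 8.0, "-"))  # (-7, about 8.14)
```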
