Geometric Distribution

When I was little, my mom used to take me with her to the grocery store every Sunday. There I saw a beautiful stuffed bear which I wanted with all my heart. The problem was that it sat inside a claw machine, and my mom would give me just one chance to get it every time we went to the store.

At first, I was confident because the game looked easy, and every time I was able to grab the bear with no problem at all. The thing is, every single time the claw just went loose and dropped my bear! After some weeks, tears burst from my eyes when I was finally able to claim my prize, which I still treasure in my bedroom.

You might be wondering how this relates to probability distributions. It turns out that claw machines are built so the prize is rarely obtained, no matter how precise your inputs are. In my stuffed bear predicament, I was doing a **trial** every Sunday **until I got a success**. In this context, the number of trials that I made until I got my success is represented by a **random variable** with a **geometric distribution**.

When talking about probability distributions, you need a clear grasp of which random variable you are dealing with. Just like in the stuffed bear example, where I was counting how many times I had to play the claw machine, in a geometric distribution you **count how many trials you perform until you obtain a success**. It is assumed that each trial is a Bernoulli trial.

Remember that a Bernoulli trial only has two outcomes: success or failure.

It is time to properly define the geometric distribution.

The **geometric distribution**, also known as the **geometric probability model**, is a discrete probability distribution where the random variable \(X\) counts the number of trials performed until a success is obtained.

Since the least number of trials required to obtain a success is \(1\), the random variable \(X\) can take the values

\[ X=1,2,3, \dots\]

The geometric distribution has only one parameter, which is the probability \(p\) of success. A geometric distribution with probability \(p\) is usually denoted

\[\text{Geom}(p),\]

or sometimes it is written as

\[ G(p).\]

In my stuffed bear example, the random variable \(X\) counted how many times I played the claw machine until I got my hands on the bear. The probability of success, \(p\), was unknown to me, but in most cases you will be given this value.

A probability distribution needs to satisfy the following requirements in order to fit a geometric model:

**There are only two possible outcomes for each trial, success or failure.** For example, the first trial could either be a success or a failure, just like all subsequent trials. It is worth noting that the experiment **stops** once you get a success.

**The trials are independent of each other.** For example, if the second trial is a failure, this will not affect the next trial, or any subsequent trials, in any way.

**The success probability remains unchanged trial after trial.** This means the probability of success for the first trial is the same for all subsequent trials. For example, if \(p = 0.4\), then the probability of success of the first trial is \(0.4\), the probability of success of the second trial is \(0.4\) as well, and so on.

It is worth noting that if \(p<1\), it is in theory possible that you never obtain a success, even after a large number of trials. This is easier to picture if \(p\) is a very small number.

Suppose you buy a lottery ticket every month. The chances of actually winning the lottery are astronomically small, so it is most likely that you will never win the prize. How sad!

Usually, when you are given a geometric distribution, you will also be given some formulas to find certain values of interest.

Since in a geometric distribution you are counting how many trials you take until getting a success, a natural question that arises is: What is the probability of getting the success in exactly \( x\) trials? This can be found by noting that, if you underwent \(x\) trials until you got the success, then you had \(x-1\) failures, so

\[ P(X=x) = (1-p)^{x-1}p,\]

where \(p\) is the probability of success, and \(1-p\) is the probability of failure. You might also find this formula written as

\[ P(X=x) = q^{x-1}p,\]

where \(q=1-p\).
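As a quick numeric sketch of the probability mass function above (plain Python; the helper name `geom_pmf` is just for this illustration):

```python
def geom_pmf(x, p):
    """P(X = x): the probability that the first success occurs on trial x,
    where p is the per-trial probability of success."""
    return (1 - p) ** (x - 1) * p

# With p = 0.5 (think of tossing a fair coin until the first head),
# each extra trial requires one more failure first, halving the probability:
print(geom_pmf(1, 0.5))  # 0.5
print(geom_pmf(2, 0.5))  # 0.25
print(geom_pmf(3, 0.5))  # 0.125
```

Notice how the probabilities shrink geometrically as \(x\) grows, which is where the distribution gets its name.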

You can find a more realistic approach to an experiment by looking at the cumulative distribution function of the geometric distribution, which tells you the probability of getting a success within \(k\) trials. For the geometric distribution, this is given by

\[P(X\leq k) = 1-(1-p)^k.\]

Think of the stuffed bear example. Suppose you go to the claw machine with five spare quarters. The cumulative distribution function will tell you the probability of having at least one success with those five quarters, that is

\[ P(X \leq 5) = 1-(1-p)^5.\]
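As a sanity check on the formula, here is a small Python sketch (the function names and the value \(p = 0.3\) are hypothetical) showing that the closed form \(1-(1-p)^k\) matches what you get by summing the probability mass function from \(1\) to \(k\):

```python
def geom_pmf(x, p):
    # P(X = x) = (1 - p)^(x - 1) * p
    return (1 - p) ** (x - 1) * p

def geom_cdf(k, p):
    # P(X <= k): probability of at least one success within k trials.
    return 1 - (1 - p) ** k

# Five quarters in the claw machine with an assumed p = 0.3:
p, k = 0.3, 5
closed_form = geom_cdf(k, p)
summed = sum(geom_pmf(x, p) for x in range(1, k + 1))
print(round(closed_form, 5))              # 0.83193
print(abs(closed_form - summed) < 1e-12)  # True: both routes agree
```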

The expected value (also known as mean) of the geometric distribution gives you a rough estimate of how many trials you will need to do until you get a success, and it is given by

\[ \mu = \frac{1}{p}.\]

The standard deviation, in general, gives you insight into how far a variable tends to stray from the expected value. A geometric distribution with a small standard deviation expects the number of trials to be close to the mean. It is given by

\[\sigma = \sqrt{\frac{1-p}{p^2}}.\]

Sometimes you will be asked to find the variance of an experiment modeled by a geometric distribution. Since the standard deviation is the square root of the variance, you can obtain the variance by squaring the standard deviation. That is, if the standard deviation is given by

\[ \sigma = \sqrt{\frac{1-p}{p^2}}\]

then, the variance is given by

\[ \sigma^2 = \frac{1-p}{p^2}.\]
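The three quantities above translate into one-liners; a minimal Python sketch (helper names are for illustration only):

```python
import math

def geom_mean(p):
    # Expected number of trials until the first success: mu = 1 / p.
    return 1 / p

def geom_var(p):
    # Variance: sigma^2 = (1 - p) / p^2.
    return (1 - p) / p ** 2

def geom_std(p):
    # Standard deviation: the square root of the variance.
    return math.sqrt(geom_var(p))

# With p = 0.5 you expect 2 trials on average; the variance is
# (1 - 0.5) / 0.25 = 2, and the standard deviation is sqrt(2).
print(geom_mean(0.5), geom_var(0.5), geom_std(0.5))
```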

Because the graph of a geometric distribution looks like a decreasing exponential function, you might associate a geometric distribution with an exponential distribution.

The exponential distribution is quite similar to the geometric distribution in the sense that it models the time elapsed in an experiment until a success is obtained. However, because time is considered a continuous quantity, the exponential distribution is a **continuous** probability distribution, while the geometric distribution is **discrete.**

Here are some problems that can be modeled using the geometric distribution.

A patient suffers kidney failure and requires a transplant from a suitable donor. The probability that a random donor will match this patient’s requirements is \(0.2\).

- Suppose that no donor matches the patient's requirements until a fifth donor comes in. What is the probability of this scenario?
- Find the probability of the patient requiring \(10\) or fewer donors until a match is found.
- What is the expected number of donors required to get a match?
- Find the standard deviation of this scenario.

**Solution:**

- Whenever you need to find the probability that the experiment requires an exact number of trials to succeed, you should start by writing its probability mass function. In this case, since \(p=0.2\) then\[ \begin{align} P(X=x) &= (1-p)^{x-1}p \\ &= (1-0.2)^{x-1}(0.2) \\ &= (0.8)^{x-1}(0.2). \end{align}\]Now, you can evaluate the above function when \(x=5\), giving you\[ \begin{align} P(X=5) &= (0.8)^{5-1}(0.2) \\ &= (0.8)^4(0.2) \\ &= 0.08192, \end{align}\]which means that the probability that this scenario happens is \( 8.192 \%\).
- This time you will need the cumulative distribution function, which in this case is given by\[ P(X\leq k) = 1-(1-p)^k.\]Since you are looking for the case where ten or fewer donors are required, you need to plug \(k=10\) into the above formula (and \(p=0.2\) as well), which will give you\[ \begin{align} P(X\leq 10) &= 1-(1-0.2)^{10} \\ &= 1-(0.8)^{10} \\ &= 0.892626, \end{align}\]so the probability of finding a suitable kidney among ten random donors is about \(89.26 \%\).
- This is a rather straightforward task. For the expected number of donors you should use the formula for the expected value, so\[ \mu = \frac{1}{p}.\]By substituting \(p=0.2\) you will obtain\[ \begin{align} \mu &= \frac{1}{0.2} \\ &=5. \end{align}\]
- Finally, you can find the standard deviation by using the formula\[ \sigma = \sqrt{\frac{1-p}{p^2}}.\]Substituting \(p=0.2\) will give you\[ \begin{align} \sigma &= \sqrt{ \frac{1-0.2}{0.2^2} } \\ &= \sqrt{20} \\ &\approx 4.472136. \end{align}\]
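The four answers can be double-checked numerically; a short Python sketch of the same formulas:

```python
import math

p = 0.2  # probability that a random donor matches the patient

# (1) Probability that the fifth donor is the first match: P(X = 5)
print(round((1 - p) ** 4 * p, 5))      # 0.08192

# (2) Probability of a match within 10 donors: P(X <= 10)
print(round(1 - (1 - p) ** 10, 6))     # 0.892626

# (3) Expected number of donors: 1 / p
print(1 / p)                           # 5.0

# (4) Standard deviation: sqrt((1 - p) / p^2) = sqrt(20)
print(round(math.sqrt((1 - p) / p ** 2), 6))  # 4.472136
```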

You are likely to find the geometric distribution when playing board games!

Suppose you roll a fair die until you get a three as a result.

- What is the probability that you don't roll a three until your fourth roll?
- Find the probability of getting the three you need in less than \(10\) rolls.
- What is the expected number of rolls required to get your desired outcome?
- Find the variance of this experiment.

**Solution:**

- In this case, you first need to find the probability of success. Since you are rolling a fair die, each of the six numbers is equally likely, so \[ p = \frac{1}{6}\]for obtaining any specific number, including a three. Now that you know \(p\), you can write the probability mass function for this geometric experiment, that is\[ \begin{align} P(X=x) &= (1-p)^{x-1}p \\ &= \left( 1- \frac{1}{6} \right)^{x-1} \left( \frac{1}{6} \right) \\ &= \left( \frac{5}{6} \right) ^{x-1} \left( \frac{1}{6} \right). \end{align} \] Finally, evaluate the above expression at \(x=4\), obtaining\[ \begin{align} P(X=4) &= \left( \frac{5}{6} \right) ^{4-1} \left(\frac{1}{6} \right) \\&= 0.0964506. \end{align}\]This means the probability that you don't get a three until your fourth roll is \( 9.645 \% \).
- For this case you will need the cumulative distribution function, which here is\[ P(X\leq k)=1-(1-p)^k.\]You are asked to find the probability of getting the success in **less than** \(10\) rolls, which means \(9\) rolls or fewer, so \(k=9\). Knowing this, you can substitute \(k\) and \(p\) to find the requested probability:\[ \begin{align} P(X\leq 9) &= 1-\left(1-\frac{1}{6} \right)^9 \\ &= 1-\left(\frac{5}{6}\right)^9 \\ &= 0.806193. \end{align} \]So the probability of getting your desired result in less than \(10\) rolls is \( 80.6193 \% \).
- You can use the formula\[ \mu = \frac{1}{p}\]to find the expected value, so\[ \mu = \frac{1}{\frac{1}{6}}, \] which you can simplify with the properties of fractions, giving you\[ \mu = 6.\]
- This time you can use the variance formula,\[ \sigma^2 = \frac{1-p}{p^2},\]so\[ \begin{align} \sigma^2 &= \frac{1-\frac{1}{6}}{\left(\frac{1}{6}\right)^2} \\ &=\frac{\frac{5}{6}}{\frac{1}{36}} \\ &= 30. \end{align} \]
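Again, the dice answers can be verified with a few lines of Python:

```python
p = 1 / 6  # probability of rolling a three with a fair die

# P(X = 4): the first three appears on the fourth roll
print(round((1 - p) ** 3 * p, 7))   # 0.0964506

# P(X <= 9): a three within 9 rolls, i.e. "in less than 10 rolls"
print(round(1 - (1 - p) ** 9, 6))   # 0.806193

# Expected number of rolls, 1/p, and variance, (1 - p)/p^2
print(round(1 / p))                 # 6
print(round((1 - p) / p ** 2))      # 30
```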

Let's assign a number to the probability of succeeding in the claw machine game.

Suppose that the probability of winning an item from a claw machine is \( 0.05\).

- What is the probability of winning an item on your first try?
- What is the probability of winning an item in less than \(20\) tries?
- Suppose you need to use a quarter for each try. What is the expected amount of money spent for getting a prize?

**Solution:**

- This is a tricky question! You could build the probability mass function and evaluate it at \(x=1\), but note that \(P(X=1) = (1-p)^{0}p = p\), so the answer is simply the given probability of winning an item from the claw machine: \(0.05\), or \(5\%\).
- As usual, build the cumulative distribution function, so\[ P(X\leq k) = 1-(1-p)^k.\]You need to find the probability of winning an item in **less than \(20\)** tries, which means **\(19\) or fewer** tries, so \(k=19\). Knowing this, evaluate the cumulative distribution function, that is\[ \begin{align} P(X\leq 19) &= 1-(1-0.05)^{19} \\ &= 1-(0.95)^{19} \\&=0.622646.\end{align}\]So, the probability of winning a prize in less than \(20\) tries is \(62.2646\%\).
- Whenever you are asked about expectations, you should begin by finding the expected value. In this case this means that\[ \begin{align} \mu &= \frac{1}{p} \\ &= \frac{1}{0.05} \\ &= 20. \end{align}\]This means that you can expect to play the claw machine about \(20\) times. Since each play costs you a quarter, you need \(20\) quarters, so\[20(0.25) = 5\]means that you can expect to spend \(\$5\) on the claw machine.
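Finally, the claw machine numbers check out in Python as well:

```python
p = 0.05  # given probability of winning on any single try

# (1) Winning on the first try is just p itself: P(X = 1) = (1 - p)^0 * p = p
print(p)                            # 0.05

# (2) P(X <= 19): a win in less than 20 tries
print(round(1 - (1 - p) ** 19, 6))  # 0.622646

# (3) Expected tries and expected cost at $0.25 per try
tries = 1 / p
print(tries, tries * 0.25)          # 20 tries, $5
```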

- The **geometric distribution**, also known as the **geometric probability model**, is a discrete probability distribution where the random variable \(X\) counts the number of trials performed until a success is obtained. Since the least number of trials required to obtain a success is \(1\), the random variable \(X\) can take the values \( X=1,2,3, \dots\).

In order to model a situation using a geometric distribution, you need to make some assumptions:

1. There are only two possible outcomes of a trial, a success or a failure.
2. The trials are independent of each other.
3. The success probability remains unchanged trial after trial.

The formulas used in geometric distributions are the following:

The probability mass function is given by\[ P(X=x) = (1-p)^{x-1}p.\]

The cumulative distribution function is\[ P(X \leq k) = 1-(1-p)^k.\]

The expected value can be found as\[ \mu = \frac{1}{p}.\]

The standard deviation is\[ \sigma = \sqrt{\frac{1-p}{p^2}}.\]

The exponential distribution is similar to the geometric distribution in the sense that both describe situations in which you are looking for the first success of a trial. However, the exponential distribution is a **continuous** distribution, while the geometric distribution is a **discrete** distribution.

In order to model a situation using a geometric distribution, you need it to meet the following conditions:

- There are only two possible outcomes for each trial: success or failure.
- The trials are independent of each other.
- The success probability remains unchanged trial after trial.
