How to calculate the probability of an event: formulas and examples

10.03.2019

To calculate the probability P(A) of an event A, it is necessary to build a mathematical model of the object under study that contains the event A. The basis of the model is the probability space (Ω, F, P), where Ω is the space of elementary events, F is a class of events with composition operations defined on them, and P is a probability measure.

The probability is defined for any event A that has meaning in Ω and belongs to the class of events F. If, for example,

A = {ω1, ω2, ..., ωm},

then from Axiom 3 of probability it follows that

P(A) = p1 + p2 + ... + pm. (1)

Thus, the calculation of the probability of an event A is reduced to calculating the probabilities of the elementary events that compose it, and since these are "basic", the methods for calculating them need not depend on the axiomatics of probability theory.

Three approaches to calculating the probabilities of elementary events are considered here:

classical;

geometric;

statistical or frequency.

The classical method for calculating probabilities

It follows from the axiomatic definition of probability that the probability exists for any event A, but nothing is said about how to calculate it, although it is known that to each elementary event ωi there corresponds a probability pi such that the sum of the probabilities of all elementary events of the space Ω is equal to one, that is,

p1 + p2 + ... + pn = 1.

The classical method of calculating the probabilities of random events is based on the use of this fact, which, due to its specificity, provides a way to find the probabilities of these events directly from the axioms.

Let a fixed probability space (Ω, F, P) be given, in which:

  • a) Ω consists of a finite number n of elementary events,
  • b) each elementary event ωi is assigned the probability pi = 1/n.

Consider an event A that consists of m elementary events:

A = {ω1, ω2, ..., ωm};

then from Axiom 3 of probability, due to the incompatibility of elementary events, it follows that

P(A) = p1 + p2 + ... + pm = m · (1/n).

Thus we have the formula

P(A) = m/n, (2)

which can be interpreted as follows: the probability that event A occurs is equal to the ratio of the number of elementary events that favor the occurrence of A to the number of all elementary events in Ω.

This is the essence of the classical method of calculating the probabilities of events.

Comment. By assigning the same probability to each elementary event of the space Ω we obtain, on the one hand, relying on the probability space and the axioms of probability theory, a rule for calculating the probability of any random event of this space by formula (2); on the other hand, this gives us grounds to regard all elementary events as equally probable, so that the calculation of the probabilities of any random events can be reduced to an "urn" scheme, independently of the axioms.

It follows from formula (2) that the probability of an event A depends only on the number of elementary events of which it consists and not on their specific content. Thus, in order to use formula (2), one must find the number of points in the space Ω and the number of points that make up the event A; this, however, is already a problem of combinatorial analysis.

Let's look at a few examples.

Example 8. An urn contains n balls, of which k are red and (n − k) are black. We draw r balls at random without replacement. What is the probability that the sample of r balls contains exactly s red balls?

Solution. Let A = {the sample of r balls contains exactly s red balls}. The desired probability is found by the classical scheme, formula (2):

P(A) = m/n0,

where n0 is the number of possible samples of size r that differ by at least one ball number, and m is the number of samples of size r in which exactly s balls are red. Obviously the number of samples is n0 = C(n, r), and m, as follows from Example 7, is C(k, s) · C(n − k, r − s).

Thus, the desired probability is equal to

P(A) = C(k, s) · C(n − k, r − s) / C(n, r). (3)
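Formula (3) is easy to check numerically. Below is a minimal sketch in Python (the function name and the sample numbers are my own illustration, not from the text), using `math.comb` for the binomial coefficients C(n, r):

```python
from math import comb

def hypergeom_pmf(n, k, r, s):
    """Probability that r balls drawn without replacement from an urn
    with k red and (n - k) black balls contain exactly s red balls."""
    return comb(k, s) * comb(n - k, r - s) / comb(n, r)

# Urn with 10 balls, 5 of them red; draw 3; probability of exactly 2 red:
print(hypergeom_pmf(n=10, k=5, r=3, s=2))  # 50/120, about 0.417
```

Summing the probabilities over all possible s gives 1, as it must for a probability distribution.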

Let there be given a set of pairwise incompatible events A1, A2, ..., As forming a complete group. Then

P(A1) + P(A2) + ... + P(As) = 1.

In this case we say that we have a probability distribution over the events A1, ..., As.

The probability distribution is one of the fundamental concepts of modern probability theory and rests on Kolmogorov's axioms.

Definition. The probability distribution defined by formula (3) is called the hypergeometric distribution.

A. A. Borovkov, in his book, uses formula (3) to explain the nature of the problems of probability theory and mathematical statistics as follows: knowing the composition of the general population, we can use the hypergeometric distribution to find out what the composition of a sample may be; this is a typical problem of probability theory (the direct problem). The natural sciences solve the inverse problem: from the composition of the samples, determine the nature of the general population; figuratively speaking, this is the content of mathematical statistics.

A generalization of the binomial coefficients (combinations) are the multinomial coefficients, which owe their name to the expansion of a polynomial of the form

(x1 + x2 + ... + xk)^n = Σ [n!/(r1! r2! ... rk!)] · x1^r1 x2^r2 ... xk^rk, r1 + r2 + ... + rk = n, (4)

in powers of the terms.

Multinomial coefficients (4) are often used in solving combinatorial problems.

Theorem. Let there be k different boxes into which n numbered balls are laid out. Then the number of placements of the balls in the boxes such that the box with number i contains ri balls,

r1 + r2 + ... + rk = n,

is given by the multinomial coefficient (4).

Proof. Since the order of the boxes matters but the order of the balls within a box does not, combinations can be used to count the placements of the balls in any box.

For the first box, r1 balls out of n can be chosen in C(n, r1) ways; for the second box, r2 balls out of the remaining (n − r1) can be chosen in C(n − r1, r2) ways, and so on; for box (k − 1) we choose rk−1 balls in

C(n − r1 − ... − rk−2, rk−1)

ways; the remaining

rk = n − r1 − ... − rk−1

balls fall into box k automatically, in one way.

Thus, the total number of placements is

C(n, r1) · C(n − r1, r2) · ... · C(n − r1 − ... − rk−2, rk−1) = n!/(r1! r2! ... rk!).
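The sequential-choice argument of the proof can be sketched in Python. The helper name `multinomial` is my own; the code simply multiplies the successive binomial coefficients, which should agree with n!/(r1! r2! ... rk!):

```python
from math import comb, factorial

def multinomial(rs):
    """Number of ways to place sum(rs) numbered balls into boxes so that
    box i receives rs[i] balls: the product of successive combinations."""
    total, remaining = 1, sum(rs)
    for r in rs:                    # choose balls for each box in turn
        total *= comb(remaining, r)
        remaining -= r
    return total

# Agrees with n! / (r1! r2! ... rk!):
assert multinomial((3, 2, 1)) == factorial(6) // (factorial(3) * factorial(2) * factorial(1))
print(multinomial((3, 2, 1)))  # 60
```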

Example. n balls are randomly distributed among n boxes. Assuming that the boxes and the balls are distinguishable, find the probabilities of the following events:

  • a) no box is empty = A0;
  • b) one box is empty = A1;
  • c) two boxes are empty = A2;
  • d) three boxes are empty = A3;
  • e) (n − 1) boxes are empty = A4.

Solve the problem for the case n = 5.

Solution. It follows from the condition that the distribution of the balls over the boxes is a simple random selection; hence all n^n placements are equally possible.

It is convenient to describe a placement by its occupancy sequence. For example, a sequence such as 3 3 3 2 2 1 ... 1 means that there are three balls in each of the first, second and third boxes, two balls in each of the fourth and fifth boxes, and one ball in each of the remaining boxes. The number of such arrangements of occupancy numbers over the boxes is given by a multinomial coefficient of the form (4).

Since the balls are in fact distinguishable, each such occupancy sequence in turn admits a multinomial number of ball placements. Thus, the total number of options favoring an event is the product of these two counts.

Let's move on to the decision on the points of the example:

a) Since each box contains exactly one ball, the occupancy sequence is 11...1, for which the number of arrangements over the boxes is n!/n! = 1. Since the balls are distinguishable, this sequence admits n!/(1! 1! ... 1!) = n! ball placements; hence the total number of options is m = 1 · n! = n!, and

P(A0) = n!/n^n.

b) If one box is empty, then some box contains two balls, and the occupancy sequence is 21...10, for which the number of arrangements is n!/(n − 2)!. Since the balls are distinguishable, each arrangement admits n!/2! ball placements. The total number of options is

m = [n!/(n − 2)!] · (n!/2!), and P(A1) = m/n^n.

c) If two boxes are empty, there are two occupancy sequences: 31...100 and 221...100. For the first, the number of arrangements is

n!/(2!(n − 3)!).

Each such arrangement admits n!/3! ball placements, so for the first sequence the number of options is

k1 = [n!/(2!(n − 3)!)] · (n!/3!).

For the second sequence, the number of arrangements is n!/(2! 2! (n − 4)!) and each admits n!/(2! 2!) ball placements, so the total is

k2 = [n!/(2! 2! (n − 4)!)] · [n!/(2! 2!)].

Finally we have

m = k1 + k2, P(A2) = (k1 + k2)/n^n.

d) For three empty boxes there are three occupancy sequences: 41...1000, 321...1000, and 2221...1000.

For the first sequence we have

k1 = [n!/(3!(n − 4)!)] · (n!/4!).

For the second sequence

k2 = [n!/(3!(n − 5)!)] · [n!/(3! 2!)].

For the third sequence we get

k3 = [n!/(3! 3! (n − 6)!)] · [n!/(2! 2! 2!)].

The total number of options is

m = k1 + k2 + k3,

and the desired probability is equal to

P(A3) = m/n^n.

e) If (n − 1) boxes are empty, then all the balls must be in one box. Obviously the number of such outcomes is m = n (the box can be chosen in n ways), so the probability corresponding to this event is

P(A4) = n/n^n = 1/n^(n−1).

For n = 5 we have n^n = 3125 and

P(A0) = 120/3125, P(A1) = 1200/3125, P(A2) = 1500/3125, P(A3) = 300/3125, P(A4) = 5/3125

(for n = 5 the third sequence in case d) does not occur, since it requires at least six boxes). Note that for n = 5 the events Ai must form a complete group, which is indeed the case:

120 + 1200 + 1500 + 300 + 5 = 3125, so P(A0) + P(A1) + P(A2) + P(A3) + P(A4) = 1.
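For n = 5 the space is small enough (5^5 = 3125 outcomes) to enumerate directly. The sketch below (an illustration of mine, not part of the original solution) counts placements by their number of empty boxes:

```python
from itertools import product

n = 5
counts = {}  # number of empty boxes -> number of placements
for placement in product(range(n), repeat=n):  # a box index for each ball
    empty = n - len(set(placement))
    counts[empty] = counts.get(empty, 0) + 1

total = n ** n  # 3125 equally possible placements
for empty in sorted(counts):
    print(empty, counts[empty], counts[empty] / total)
```

The counts 120, 1200, 1500, 300, 5 sum to 3125, confirming that the events form a complete group.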

      1. Formulas for calculating the probability of events

    1.3.1. Sequence of independent trials (Bernoulli scheme)

    Suppose that some experiment can be carried out repeatedly under the same conditions. Let this experiment be performed n times, i.e., let there be a sequence of n trials.

    Definition. A sequence of n trials is called mutually independent if any event associated with a given trial is independent of any events associated with the other trials.

    Suppose that in a single trial some event A occurs with probability p or does not occur with probability q = 1 − p.

    Definition. A sequence of n trials forms a Bernoulli scheme if the following conditions are met:

    1) the sequence of n trials is mutually independent;

    2) the probability of the event A does not change from trial to trial and does not depend on the results of the other trials.

    The event A is called a "success" of the trial, and the opposite event a "failure". Consider the event

    B(n, m) = {exactly m "successes" occur in n trials}.

    The probability of this event is given by the Bernoulli formula

    P(B(n, m)) = C(n, m) p^m q^(n−m), m = 0, 1, 2, ..., n, (1.6)

    where C(n, m) is the number of combinations of n elements taken m at a time:

    C(n, m) = n!/(m!(n − m)!).

    Example 1.16. A die is thrown three times. Find:

    a) the probability that 6 points come up exactly twice;

    b) the probability that a six appears at most twice.

    Solution. A "success" of a trial is the appearance of the face of the die showing 6 points.

    a) The total number of trials is n = 3 and the number of "successes" is m = 2. The probability of "success" is p = 1/6, and the probability of "failure" is q = 1 − 1/6 = 5/6. Then, by the Bernoulli formula, the probability that the face with six points comes up twice in three throws of the die is

    P(B(3, 2)) = C(3, 2) (1/6)^2 (5/6) = 3 · (1/36) · (5/6) = 15/216 ≈ 0.07.

    b) Denote by A the event that a face with 6 points appears at most twice. Then the event can be represented as a sum of three incompatible events, A = B(3, 0) + B(3, 1) + B(3, 2),

    where B(3, 0) is the event that the face of interest never appears,

    B(3, 1) is the event that the face of interest appears once,

    B(3, 2) is the event that the face of interest appears twice.

    By the Bernoulli formula (1.6) we find

    P(A) = P(B(3, 0)) + P(B(3, 1)) + P(B(3, 2)) = (5/6)^3 + 3 · (1/6) · (5/6)^2 + 3 · (1/6)^2 · (5/6) =

    = (125 + 75 + 15)/216 = 215/216 ≈ 0.995.
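Both parts of the example can be checked with a short sketch of the Bernoulli formula (1.6); the function name `bernoulli` is my own choice:

```python
from math import comb

def bernoulli(n, m, p):
    """Bernoulli formula (1.6): probability of exactly m successes in n trials."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

p = 1 / 6  # probability of throwing a six in one trial
# a) exactly two sixes in three throws:
print(bernoulli(3, 2, p))                         # 15/216, about 0.069
# b) at most two sixes in three throws:
print(sum(bernoulli(3, m, p) for m in range(3)))  # 215/216, about 0.995
```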

    1.3.2. Conditional probability of an event

    The conditional probability reflects the influence of one event on the probability of another. A change in the conditions under which an experiment is carried out also affects the probability of occurrence of the event of interest.

    Definition. Let A and B be some events, with probability p(B) > 0.

    The conditional probability of the event A, given that "the event B has already occurred", is the ratio of the probability of the product of these events to the probability of the event that occurred first. The conditional probability is denoted p(A|B). Then, by definition,

    p(A|B) = p(AB)/p(B). (1.7)

    Example 1.17. Throw two dice. The space of elementary events consists of ordered pairs of numbers

    (1,1) (1,2) (1,3) (1,4) (1,5) (1,6)

    (2,1) (2,2) (2,3) (2,4) (2,5) (2,6)

    (3,1) (3,2) (3,3) (3,4) (3,5) (3,6)

    (4,1) (4,2) (4,3) (4,4) (4,5) (4,6)

    (5,1) (5,2) (5,3) (5,4) (5,5) (5,6)

    (6,1) (6,2) (6,3) (6,4) (6,5) (6,6).

    In Example 1.16 it was found that the event A = {the number of points on the first die is > 4} and the event C = {the sum of the points is 8} are dependent. Let us form the ratio

    p(C|A) = p(AC)/p(A) = (2/36)/(12/36) = 2/12 = 1/6.

    This ratio can be interpreted as follows. Suppose it is known that the first roll resulted in more than 4 points on the first die. It follows that the throw of the second die can lead to one of the 12 outcomes that make up the event A:

    (5,1) (5,2) (5,3) (5,4) (5,5) (5,6)

    (6,1) (6,2) (6,3) (6,4) (6,5) (6,6).

    Of these, only two, (5,3) and (6,2), are consistent with the event C. In this case the probability of the event C is equal to 2/12 = 1/6. Thus, information about the occurrence of the event A influenced the probability of the event C.
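The conditional probability in this example can be verified by enumerating the 36 ordered pairs; the variable names below are illustrative:

```python
from itertools import product

omega = list(product(range(1, 7), repeat=2))  # 36 ordered pairs of dice
A = [w for w in omega if w[0] > 4]            # first die shows more than 4
C = [w for w in omega if sum(w) == 8]         # sum of points equals 8
AC = [w for w in A if w in C]                 # both events occur

p_C = len(C) / len(omega)          # unconditional probability, 5/36
p_C_given_A = len(AC) / len(A)     # conditional probability, 2/12 = 1/6
print(p_C, p_C_given_A)
```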

          Probability of a product of events

    Multiplication theorem

    The probability of a product of events A1 A2 ... An is determined by the formula

    p(A1 A2 ... An) = p(A1) p(A2|A1) ... p(An|A1 A2 ... An−1). (1.8)

    For the product of two events, it follows that

    p(AB) = p(A|B) p(B) = p(B|A) p(A). (1.9)

    Example 1.18. In a batch of 25 items, 5 are defective. Three items are chosen at random. Determine the probability that all the selected items are defective.

    Solution. Let us denote the events:

    A1 = {the first item is defective},

    A2 = {the second item is defective},

    A3 = {the third item is defective},

    A = {all items are defective}.

    The event A is the product of the three events: A = A1 A2 A3.

    From the multiplication theorem (1.8) we get

    p(A) = p(A1 A2 A3) = p(A1) p(A2|A1) p(A3|A1 A2).

    The classical definition of probability allows us to find p(A1) as the ratio of the number of defective items to the total number of items:

    p(A1) = 5/25;

    p(A2|A1) is the ratio of the number of defective items remaining after one has been withdrawn to the total number of remaining items:

    p(A2|A1) = 4/24;

    p(A3|A1 A2) is the ratio of the number of defective items remaining after two defective ones have been withdrawn to the total number of remaining items:

    p(A3|A1 A2) = 3/23.

    Then the probability of the event A is equal to

    p(A) = (5/25) · (4/24) · (3/23) = 1/230 ≈ 0.0043.
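The chain-rule computation can be checked in a couple of lines; the hypergeometric cross-check via `math.comb` is an addition of mine, not part of the original solution:

```python
from math import comb

# Chain rule (multiplication theorem): draw three defective items in a row.
p = (5 / 25) * (4 / 24) * (3 / 23)
print(p)  # 1/230, about 0.0043

# Cross-check against the classical count of unordered samples:
assert abs(p - comb(5, 3) / comb(25, 3)) < 1e-12
```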

    I understand that everyone wants to know in advance how a sporting event will end, who will win and who will lose. With this information, you can bet on sports events. But is it possible at all, and if so, how to calculate the probability of an event?

    Probability is a relative quantity, so it cannot tell us anything with certainty about a particular event. What it does allow is to analyze and evaluate whether a bet on a particular competition is worth placing. Determining probabilities is a whole science that requires careful study and understanding.

    Probability coefficient in probability theory

    In sports betting, there are several options for the outcome of the competition:

    • victory of the first team;
    • victory of the second team;
    • draw;
    • total (over/under).

    Each outcome of the competition has its own probability, that is, the frequency with which this event would occur if the initial conditions were preserved. As mentioned earlier, the probability of an event cannot be calculated exactly; the prediction may or may not come true. Thus, your bet can either win or lose.

    There can be no exact 100% prediction of the result of a competition, since many factors influence the outcome of a match. Naturally, bookmakers do not know the outcome of a match in advance; they only estimate the result, relying on their own systems of analysis, and offer certain odds for bets.

    How to calculate the probability of an event?

    Let's say that the bookmaker's odds are 2. Then 1/2 = 0.5, i.e. 50%: odds of 2 correspond to a probability of 50%. By the same principle you can obtain the break-even probability from any odds: probability = 1/odds.

    Many players think that after several repeated losses, a win will definitely happen - this is an erroneous opinion. The probability of winning a bet does not depend on the number of losses. Even if you throw several heads in a row in a coin game, the probability of throwing tails remains the same - 50%.
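The odds-to-probability rule stated above fits in one line; this is just a sketch of the 1/odds relation for decimal odds (the function name is my own):

```python
def implied_probability(odds):
    """Break-even probability implied by decimal betting odds."""
    return 1 / odds

print(implied_probability(2.0))  # 0.5, i.e. 50%
```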

    There is a whole class of experiments for which the probabilities of their possible outcomes can be easily estimated directly from the conditions of the experiment itself. For this, it is necessary that the various outcomes of experience have symmetry and, therefore, be objectively equally possible.

    Consider, for example, the experiment of throwing a die, i.e., a symmetric cube on whose faces different numbers of points are marked, from 1 to 6.

    Due to the symmetry of the cube, there are grounds to consider all six possible outcomes of the experiment equally possible. This is what gives us the right to assume that with repeated throwing of the die, all six faces will come up approximately equally often. This assumption is indeed justified by experience for a properly made die: with repeated throwing, each face appears in about one sixth of all throws, and the deviation of this share from 1/6 becomes smaller as the number of experiments grows. Bearing in mind that the probability of a certain event is taken to be one, it is natural to assign a probability of 1/6 to the appearance of each individual face. This number characterizes some of the objective properties of this random phenomenon, namely the symmetry of the six possible outcomes of the experiment.

    For any experiment in which the possible outcomes are symmetrical and equally possible, a similar technique can be applied, which is called the direct calculation of probabilities.

    The symmetry of the possible outcomes of an experiment is usually observed only in artificially organized experiments, such as gambling. Since the theory of probability was initially developed precisely on gambling schemes, the method of directly calculating probabilities, which historically arose along with the emergence of the mathematical theory of random phenomena, for a long time was considered the main one and was the basis of the so-called "classical" probability theory. At the same time, experiments that did not have the symmetry of possible outcomes were artificially reduced to the “classical” scheme.

    Despite the limited scope of its practical applications, this scheme is nevertheless of definite interest, since it is on experiments with symmetry of possible outcomes, and on events associated with such experiments, that it is easiest to become acquainted with the basic properties of probabilities. We will deal first with such events, which allow a direct calculation of probabilities.

    Let us first introduce some auxiliary notions.

    1. Complete group of events.

    It is said that several events in a given experiment form a complete group of events if at least one of them must necessarily appear as a result of the experiment.

    Examples of events that form a complete group:

    1) the appearance of 1, 2, 3, 4, 5, 6 points when throwing a die;

    2) the appearance of a white ball and the appearance of a black ball when one ball is drawn from an urn containing 2 white and 3 black balls;

    3) no misprints, one, two, three, or more than three misprints when checking a page of printed text;

    4) at least one hit and at least one miss with two shots.

    2. Incompatible events.

    Several events are said to be incompatible in a given experiment if no two of them can occur together.

    Examples of incompatible events:

    1) the appearance of heads and the appearance of tails when tossing a coin;

    2) a hit and a miss when firing a shot;

    3) the appearance of 1, 3, 4 points with a single throw of a die;

    4) exactly one failure, exactly two failures, exactly three failures of a technical device in ten hours of operation.

    3. Equally possible events.

    Several events in a given experiment are said to be equally possible if, by the symmetry of the conditions, there is reason to believe that none of these events is objectively more possible than another.

    Examples of equally possible events:

    1) the appearance of heads and the appearance of tails when tossing a coin;

    2) the appearance of 1, 3, 4, 5 points when throwing a die;

    3) the appearance of a card of the diamond, heart, or club suit when drawing a card from a deck;

    4) the appearance of a ball with number 1, 2, or 3 when one ball is drawn from an urn containing 10 numbered balls.

    There are groups of events that possess all three properties: they form a complete group, they are incompatible, and they are equally possible; for example: the appearance of heads and tails when a coin is tossed; the appearance of 1, 2, 3, 4, 5, 6 points when a die is thrown. The events forming such a group are called cases (otherwise, "chances").

    If an experiment by its structure possesses symmetry of possible outcomes, the cases represent an exhaustive system of equally possible and mutually exclusive outcomes of the experiment. Such an experiment is said to be "reduced to a scheme of cases" (in other words, to an "urn scheme").

    The scheme of cases occurs predominantly in artificially organized experiments in which the equal possibility of the outcomes is ensured in advance and deliberately (as, for example, in gambling). For such experiments it is possible to calculate probabilities directly, based on an estimate of the share of the so-called "favorable" cases in the total number of cases.

    A case is called favorable to some event if the occurrence of this case entails the occurrence of this event.

    For example, when a die is thrown, six cases are possible: the appearance of 1, 2, 3, 4, 5, 6 points. Of these, three cases (2, 4, 6) are favorable to the event "an even number of points appears", and the remaining three are not.

    If an experiment is reduced to a scheme of cases, then the probability of an event A in this experiment can be estimated from the relative share of the favorable cases. The probability of the event is calculated as the ratio of the number of favorable cases to the total number of cases:

    P(A) = m/n, (2.2.1)

    where P(A) is the probability of the event A; n is the total number of cases; m is the number of cases favorable to the event A.

    Since the number of favorable cases is always between 0 and n (0 for an impossible event and n for a certain one), the probability of an event calculated by formula (2.2.1) is always a rational proper fraction:

    0 ≤ P(A) ≤ 1.

    Formula (2.2.1), the so-called "classical formula" for calculating probabilities, long appeared in the literature as the definition of probability. At present, when defining (explaining) probability, one usually proceeds from other principles, directly linking the concept of probability with the empirical concept of frequency; formula (2.2.1) is retained only as a formula for the direct calculation of probabilities, suitable if and only if the experiment reduces to a scheme of cases, i.e., possesses symmetry of possible outcomes.

    THEME 1. The classical formula for calculating probability.

    Basic definitions and formulas:

    An experiment whose outcome cannot be predicted in advance is called a random experiment (RE).

    An event that may or may not occur in a given RE is called a random event.

    Elementary outcomes are events that satisfy the following requirements:

    1. in any realization of the RE, one and only one elementary outcome occurs;

    2. every event is some combination, some set, of elementary outcomes.

    The set of all possible elementary outcomes describes the RE completely. Such a set is called the space of elementary outcomes (SEO). The choice of the SEO used to describe a given RE is not unique and depends on the problem being solved.

    If all elementary outcomes are equally possible, the classical formula for the probability of an event A is

    P(A) = n(A)/n,

    where n is the total number of equally possible outcomes,

    n(A) is the number of outcomes that make up the event A (as one says, favoring the event A).

    The words "at random", "by chance", "randomly" are precisely what guarantees the equal possibility of the elementary outcomes.

    Solution of typical examples

    Example 1. From an urn containing 5 red, 3 black and 2 white balls, 3 balls are drawn at random. Find the probabilities of the events:

    A: "all drawn balls are red";

    B: "all drawn balls are of the same color";

    C: "among the drawn balls there are exactly 2 black ones".

    Solution:

    The elementary outcome of this experiment is a triple (unordered!) of balls. Therefore the total number of outcomes is the number of combinations: n = C(10, 3) = 120 (10 = 5 + 3 + 2).

    The event A consists only of those triples that are drawn from the five red balls, i.e. n(A) = C(5, 3) = 10.

    The event B is favored, in addition to the 10 red triples, by the black triples, of which there are C(3, 3) = 1. Therefore: n(B) = 10 + 1 = 11.

    The event C is favored by those triples that contain 2 black balls and one non-black ball. Each way of choosing two black balls (C(3, 2) = 3 ways) can be combined with the choice of one non-black ball (out of seven). Therefore: n(C) = C(3, 2) · 7 = 3 · 7 = 21.

    So: P(A) = 10/120; P(B) = 11/120; P(C) = 21/120.
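The three counts can be reproduced directly with `math.comb`; the variable names are illustrative:

```python
from math import comb

n = comb(10, 3)                  # 120 unordered triples of balls
nA = comb(5, 3)                  # all three balls red
nB = comb(5, 3) + comb(3, 3)     # all one color: three red or three black
nC = comb(3, 2) * comb(7, 1)     # exactly two black plus one non-black
print(nA / n, nB / n, nC / n)    # 10/120, 11/120, 21/120
```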

    Example 2. Under the conditions of the previous problem, assume that the balls of each color have their own numbering starting from 1. Find the probabilities of the events:

    D: "the maximum drawn number is 4";

    E: "the maximum drawn number is 3".

    Solution:

    To calculate n(D), note that the urn contains one ball with the number 4, one ball with a larger number, and 8 balls (3 red + 3 black + 2 white) with smaller numbers. The event D is favored by those triples that necessarily contain the ball with number 4 together with 2 balls with smaller numbers. Therefore: n(D) = C(8, 2) = 28, and

    P(D) = 28/120.

    To calculate n(E), note that the urn contains two balls with the number 3, two balls with larger numbers, and six balls with smaller numbers (2 red + 2 black + 2 white). The event E consists of triples of two kinds:

    1. one ball with the number 3 and two with smaller numbers;

    2. two balls with the number 3 and one with a smaller number.

    Therefore: n(E) = C(2, 1) · C(6, 2) + C(2, 2) · C(6, 1) = 2 · 15 + 6 = 36, and

    P(E) = 36/120.
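Both counts can be verified by brute-force enumeration of all 120 triples; the ball labels below are my own encoding of the urn described in the problem:

```python
from itertools import combinations

# Balls as (color, number): red 1..5, black 1..3, white 1..2.
balls = [('red', i) for i in range(1, 6)] \
      + [('black', i) for i in range(1, 4)] \
      + [('white', i) for i in range(1, 3)]

triples = list(combinations(balls, 3))  # all 120 unordered triples
nD = sum(1 for t in triples if max(num for _, num in t) == 4)
nE = sum(1 for t in triples if max(num for _, num in t) == 3)
print(nD, nE)  # 28 36
```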

    Example 3. Each of M different particles is thrown at random into one of N cells. Find the probabilities of the events:

    A: all particles fall into the second cell;

    B: all particles fall into one cell;

    C: each cell contains at most one particle (M ≤ N);

    D: all cells are occupied (M = N + 1);

    E: the second cell contains exactly k particles.

    Solution:

    For each particle there are N ways to land in a particular cell. By the basic principle of combinatorics, for M particles we have N · N · ... · N (M times). So the total number of outcomes in this experiment is n = N^M.

    For each particle there is one way to land in the second cell, therefore n(A) = 1 · 1 · ... · 1 = 1^M = 1, and P(A) = 1/N^M.

    For all particles to land in one cell means for all to land in the first, or all in the second, and so on up to all in the N-th. Each of these N variants can be realized in one way. Therefore n(B) = 1 + 1 + ... + 1 (N times) = N, and P(B) = N/N^M.

    The event C means that each particle has one fewer placement option than the preceding particle, the first being able to fall into any of the N cells. Therefore:

    n(C) = N · (N − 1) · ... · (N − M + 1) and P(C) = N(N − 1)...(N − M + 1)/N^M.

    In the special case M = N: P(C) = N!/N^N.

    The event D means that one of the cells contains two particles and each of the (N − 1) remaining cells contains one particle. To find n(D) we argue as follows: we choose the cell that will contain two particles, which can be done in C(N, 1) = N ways; then we choose two particles for this cell, which can be done in C(M, 2) = C(N + 1, 2) ways. After that, the remaining (N − 1) particles are distributed one each into the remaining (N − 1) cells, for which there are (N − 1)! ways.

    So n(D) = N · C(N + 1, 2) · (N − 1)!, and

    P(D) = N · C(N + 1, 2) · (N − 1)!/N^(N+1).

    The number n(E) can be calculated as follows: the k particles for the second cell can be chosen in C(M, k) ways, and the remaining (M − k) particles are distributed at random over the other (N − 1) cells in (N − 1)^(M−k) ways. Therefore:

    n(E) = C(M, k) · (N − 1)^(M−k), and P(E) = C(M, k)(N − 1)^(M−k)/N^M.
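For small values the formulas of Example 3 can be verified by brute force. The sketch below takes N = M = 3 and k = 1 (my own choice of parameters) and checks the enumerated counts against the formulas:

```python
from itertools import product
from math import comb, factorial

N, M, k = 3, 3, 1
outcomes = list(product(range(N), repeat=M))  # a cell index for each particle
n = len(outcomes)                             # N**M = 27

nA = sum(1 for w in outcomes if all(c == 1 for c in w))  # all in cell 2
nB = sum(1 for w in outcomes if len(set(w)) == 1)        # all in one cell
nC = sum(1 for w in outcomes if len(set(w)) == M)        # at most one per cell
nE = sum(1 for w in outcomes if w.count(1) == k)         # cell 2 holds exactly k

assert nA == 1
assert nB == N
assert nC == factorial(N)                      # special case M = N
assert nE == comb(M, k) * (N - 1) ** (M - k)   # formula for n(E)
print(nA / n, nB / n, nC / n, nE / n)
```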
