Probability theory. Probability of an event, random events (probability theory)

11.10.2019

Let's not dwell on lofty matters; we'll start right away with a definition.

The Bernoulli scheme: n independent experiments of the same type are performed, in each of which an event of interest A may occur, and the probability of this event, P(A) = p, is known. It is required to determine the probability that event A occurs exactly k times over the n trials.

The problems solved with the Bernoulli scheme are extremely diverse, from simple ones (such as "find the probability that the shooter hits 1 time out of 10") to very difficult ones (for example, problems about percentages or playing cards). In practice, this scheme is often used to solve problems related to product quality control and the reliability of various mechanisms, all of whose characteristics must be known before work begins.

Let's go back to the definition. Since we are talking about independent trials, and in each trial the probability of the event A is the same, only two outcomes are possible:

  1. A is the occurrence of event A with probability p;
  2. "not A" - event A did not appear, which happens with probability q = 1 − p.

The most important condition without which the Bernoulli scheme loses its meaning is constancy. No matter how many experiments we conduct, we are interested in the same event A that occurs with the same probability p.

Incidentally, not every problem in probability theory reduces to constant conditions, as any competent tutor in higher mathematics will tell you. Even something as simple as drawing colored balls out of a box is not an experiment with constant conditions: take out one more ball, and the ratio of colors in the box changes, so the probabilities change too.

If the conditions are constant, one can accurately determine the probability that event A will occur exactly k times out of n possible. We formulate this fact in the form of a theorem:

Bernoulli's theorem. Let the probability of occurrence of event A in each experiment be constant and equal to p. Then the probability that in n independent trials event A will appear exactly k times is calculated by the formula:

P_n(k) = C_n^k ⋅ p^k ⋅ q^(n − k),

where C_n^k is the number of combinations and q = 1 − p.

This formula is called the Bernoulli formula. It is worth noting that the problems below can be solved without it at all, for example with the probability addition formulas, but the amount of computation would then be simply unrealistic.

Task. The probability of producing a defective product on the machine is 0.2. Determine the probability that in a batch of ten parts produced on a given machine exactly k will be without defects. Solve the problem for k = 0, 1, 10.

By the problem statement, we are interested in the event A of producing a part without defects, which happens each time with probability p = 1 − 0.2 = 0.8. We need to determine the probability that this event occurs k times. Event A is opposed to the event "not A", i.e. the production of a defective part.

Thus, we have: n = 10; p=0.8; q = 0.2.

So, we find the probability that no part in the batch is free of defects (k = 0), that exactly one part is free of defects (k = 1), and that all ten parts are free of defects (k = 10):
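The three probabilities can be checked with a few lines of Python. The function name `bernoulli` is ours; the formula inside it is exactly the one stated in the theorem above.

```python
from math import comb

def bernoulli(n: int, k: int, p: float) -> float:
    """P_n(k) = C(n, k) * p^k * q^(n - k): probability of exactly k successes."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# n = 10 parts; "success" = a part without defects, p = 0.8
for k in (0, 1, 10):
    print(f"P_10({k}) = {bernoulli(10, k, 0.8):.7f}")
```

A batch with zero defect-free parts is practically impossible (probability 0.2^10, about 10^-7), while a fully defect-free batch has probability 0.8^10 ≈ 0.107.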

Task. A coin is tossed 6 times. Heads and tails are equally likely. Find the probability that:

  1. heads comes up three times;
  2. heads comes up exactly once;
  3. heads comes up at least twice.

So, we are interested in the event A that heads comes up. The probability of this event is p = 0.5. The opposite event "not A" is tails coming up, which happens with probability q = 1 − 0.5 = 0.5. We need to determine the probability that heads comes up k times.

Thus, we have: n = 6; p = 0.5; q = 0.5.

Let us determine the probability that heads comes up three times, i.e. k = 3:

P_6(3) = C_6^3 ⋅ 0.5^3 ⋅ 0.5^3 = 20/64 = 0.3125

Now let's determine the probability that heads comes up exactly once, i.e. k = 1:

P_6(1) = C_6^1 ⋅ 0.5^1 ⋅ 0.5^5 = 6/64 = 0.09375

It remains to determine with what probability heads comes up at least twice. The main snag is the phrase "at least": any k suits us except 0 and 1, i.e. we need to find the value of the sum X = P_6(2) + P_6(3) + … + P_6(6).

Note that this sum also equals 1 − P_6(0) − P_6(1), i.e. out of all possible options it is enough to "cut out" those where heads came up once (k = 1) or not at all (k = 0). Since P_6(1) is already known, it remains to find P_6(0):

P_6(0) = C_6^0 ⋅ 0.5^0 ⋅ 0.5^6 = 1/64, so X = 1 − 1/64 − 6/64 = 57/64 ≈ 0.89.
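All the quantities in this coin problem can be verified numerically. A minimal sketch (the helper name `P6` is ours):

```python
from math import comb

def P6(k: int) -> float:
    """Bernoulli formula for n = 6 coin tosses with p = q = 0.5."""
    return comb(6, k) * 0.5**6

print(P6(3))              # 0.3125   -- heads exactly three times
print(P6(1))              # 0.09375  -- heads exactly once
print(1 - P6(0) - P6(1))  # 0.890625 -- heads at least twice
```

The "at least twice" answer uses the complement trick from the text: subtracting the k = 0 and k = 1 cases from 1 is far cheaper than summing five terms.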

Task. The probability that a TV has hidden defects is 0.2. The warehouse received 20 TVs. Which event is more likely: that there are two TVs with hidden defects in this batch or three?

The event of interest A is the presence of a hidden defect. There are n = 20 TVs in total, and the probability of a hidden defect is p = 0.2. Accordingly, the probability of getting a TV without a hidden defect is q = 1 − 0.2 = 0.8.

We get the starting conditions for the Bernoulli scheme: n = 20; p = 0.2; q = 0.8.

Let's find the probability of getting two "defective" TVs (k = 2) and three (k = 3):

P_20(2) = C_20^2 ⋅ p^2 ⋅ q^18 = (20! / (2! ⋅ 18!)) ⋅ 0.2^2 ⋅ 0.8^18 ≈ 0.137

P_20(3) = C_20^3 ⋅ p^3 ⋅ q^17 = (20! / (3! ⋅ 17!)) ⋅ 0.2^3 ⋅ 0.8^17 ≈ 0.205

Obviously, P_20(3) > P_20(2): it is more likely to get three TVs with hidden defects than exactly two. And the difference is substantial.

A small note about factorials. Many people experience a vague feeling of discomfort when they see the entry "0!" (read "zero factorial"). So, 0! = 1 by definition.

P. S. And the most likely outcome in the last task is getting four TVs with hidden defects. Do the math and see for yourself.

Few people wonder whether it is possible to calculate events that are, to one degree or another, random. Put simply, is it realistic to know which side of a die will come up next? It was this question that two great scientists asked themselves, laying the foundation for the science of probability theory, in which the probability of an event is studied quite extensively.

Origin

If you try to define the concept of probability theory, you get the following: it is one of the branches of mathematics that studies the regularities of random events. Of course, this definition does not really reveal the whole essence, so the concept needs to be considered in more detail.

I would like to start with the creators of the theory. As mentioned above, there were two of them, and they were among the first to try to calculate the outcome of an event using formulas and mathematical calculations. On the whole, the beginnings of this science appeared in the Middle Ages, when various thinkers and scientists tried to analyze gambling games such as roulette and dice, establishing the pattern and frequency with which particular numbers came up. The foundation was laid in the seventeenth century by the aforementioned scientists.

At first, their work could hardly be counted among the great achievements in this field, since everything they did amounted to empirical facts, and the experiments were carried out visually, without formulas. Over time, it proved possible to achieve significant results by observing the throwing of dice. It was this tool that helped to derive the first intelligible formulas.

Like-minded people

It is impossible not to mention Christiaan Huygens when studying the topic of probability theory, in which the probability of an event is covered. He is a very interesting figure. Like the scientists mentioned above, he tried to derive the regularity of random events in the form of mathematical formulas. Notably, he did not do this together with Pascal and Fermat; his work did not intersect with theirs in any way. Huygens introduced several fundamental concepts.

An interesting fact is that his work came out long before the results of the discoverers' work, namely twenty years earlier. Among the concepts he introduced, the most famous are:

  • the concept of probability as a magnitude of chance;
  • mathematical expectation for discrete cases;
  • theorems of multiplication and addition of probabilities.

It is also impossible not to mention Jacob Bernoulli, who likewise made a significant contribution to the study of the problem. Conducting his own investigations independently of anyone, he managed to present a proof of the law of large numbers. In turn, the scientists Poisson and Laplace, who worked at the beginning of the nineteenth century, were able to prove the first fundamental theorems. It was from this moment that probability theory began to be used to analyze errors in the course of observations. Nor could Russian scientists, namely Markov, Chebyshev, and Lyapunov, bypass this science. Building on the work done by the great geniuses, they established the subject as a branch of mathematics. These figures worked at the end of the nineteenth century, and thanks to their contribution we gained such results as:

  • law of large numbers;
  • theory of Markov chains;
  • central limit theorem.

So, the history of the birth of this science and the main people who influenced it are now more or less clear. It is time to make all the facts concrete.

Basic concepts

Before touching on laws and theorems, it is worth studying the basic concepts of probability theory. The event takes the leading role in it. This topic is quite voluminous, but without it it will not be possible to understand everything else.

An event in probability theory is any set of outcomes of an experiment. There are not many definitions of this phenomenon; the scholar Lotman, working in this area, put it this way: an event is something that "happened, although it might not have happened."

A random event (to which probability theory pays special attention) is any phenomenon that has the ability to occur or, conversely, may fail to occur when a certain set of conditions is met. Random events cover the entire range of phenomena under study. Probability theory assumes that the set of conditions can be reproduced over and over; each such reproduction is called an "experiment" or a "trial".

A certain event is one that will 100% occur in a given test. Accordingly, an impossible event is one that will not happen.

The product of a pair of events (conditionally, event A and event B) is the event in which both occur simultaneously. It is designated AB.

The sum of a pair of events A and B is the event C that occurs if at least one of them (A or B) occurs. The formula for the described phenomenon is written as follows: C = A + B.

Disjoint events in probability theory are two events that are mutually exclusive: they can never happen at the same time. Joint events in probability theory are their antipode: if A has happened, it does not prevent B in any way.

Opposite events (which probability theory treats in great detail) are easy to understand by comparison. They are almost the same as incompatible events, but differ in that one of the two must occur in any case.

Equally probable events are those whose chances of occurring are equal. The clearest illustration is tossing a coin: either of its sides is as likely to come up as the other.

A favorable event is easier to see with an example. Suppose there are events B and A: B is the roll of an odd number on a die, and A is the appearance of a five. Since five is odd, the occurrence of A favors B.

Independent events in probability theory are defined only for two or more events and mean that the occurrence of one does not depend on another. For example: A is tails coming up when a coin is tossed, and B is drawing a jack from a deck. These are independent events.

Dependent events in probability theory are likewise defined only for a set of events. They imply the dependence of one on the other: event B can occur only if A has already happened (or, on the contrary, has not happened), when that is a necessary condition for B.

An elementary event is a single, indivisible outcome of a random experiment. Probability theory treats it as a phenomenon that occurs in exactly one way.

Basic formulas

So, the concepts of "event", "probability theory" were considered above, the definition of the main terms of this science was also given. Now it's time to get acquainted directly with the important formulas. These expressions mathematically confirm all the main concepts in such a difficult subject as probability theory. The probability of an event plays a huge role here too.

It is better to start with the formulas of combinatorics, and before proceeding to them, it is worth considering what combinatorics is.

Combinatorics is, first of all, a branch of mathematics that deals with counting: it studies permutations of elements, selections from finite sets, and the number of combinations they produce. In addition to probability theory, this branch is important for statistics, computer science, and cryptography.

So, now you can move on to the presentation of the formulas themselves and their definition.

The first of these will be an expression for the number of permutations, it looks like this:

P_n = n ⋅ (n - 1) ⋅ (n - 2)…3 ⋅ 2 ⋅ 1 = n!

The formula applies when the arrangements differ only in the order of their elements.

Now the placement formula will be considered, it looks like this:

A_n^m = n ⋅ (n − 1) ⋅ (n − 2) ⋅ … ⋅ (n − m + 1) = n! / (n − m)!

This expression counts selections that differ not only in the order of the elements but also in their composition.

The third equation from combinatorics, and it is also the last one, is called the formula for the number of combinations:

C_n^m = n! / (m! ⋅ (n − m)!)

A combination is an unordered selection, and this formula counts exactly those.
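Python's standard library implements all three counts directly (`math.perm` and `math.comb` are available since Python 3.8), so the formulas can be checked without writing them out by hand. The sample values n = 10, m = 4 are ours:

```python
from math import factorial, perm, comb

n, m = 10, 4
assert perm(n) == factorial(n)                         # P_n = n!
assert perm(n, m) == factorial(n) // factorial(n - m)  # A_n^m = n!/(n-m)!
assert comb(n, m) == factorial(n) // (factorial(m) * factorial(n - m))  # C_n^m

print(perm(10), perm(10, 4), comb(10, 4))  # 3628800 5040 210
```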

It turned out to be easy to figure out the formulas of combinatorics; now we can move on to the classical definition of probability. This expression looks like this:

P(A) = m / n

In this formula, m is the number of outcomes favorable to event A, and n is the number of all equally possible elementary outcomes.

There are a great many such expressions; the article will not cover all of them, but the most important will be touched upon, such as the probability of a sum of events:

P(A + B) = P(A) + P(B) - this theorem is for adding only incompatible events;

P(A + B) = P(A) + P(B) - P(AB) - and this one is for adding only compatible ones.

Probability of producing events:

P(A ⋅ B) = P(A) ⋅ P(B) - this theorem is for independent events;

P(A ⋅ B) = P(A) ⋅ P(B∣A) = P(B) ⋅ P(A∣B) - and this one is for dependent events.

Bayes' theorem completes the list. Probability theory formulates it as follows:

P(H_m ∣ A) = P(H_m) ⋅ P(A ∣ H_m) / ( ∑_(k=1)^n P(H_k) ⋅ P(A ∣ H_k) ),  m = 1, …, n

In this formula, H_1, H_2, …, H_n is the full group of hypotheses.
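A small numerical sketch of Bayes' theorem. The scenario (two machines with different defect rates) and all its numbers are invented here purely for illustration:

```python
def posterior(priors, likelihoods):
    """Bayes: P(H_m | A) = P(H_m) * P(A|H_m) / sum_k P(H_k) * P(A|H_k)."""
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / total for p, l in zip(priors, likelihoods)]

# Hypothetical setup:
# H1: part made by machine 1 (60% of output, 1% defect rate)
# H2: part made by machine 2 (40% of output, 5% defect rate)
# A:  a randomly chosen part turns out to be defective
post = posterior([0.6, 0.4], [0.01, 0.05])
print([round(x, 3) for x in post])  # [0.231, 0.769]
```

Even though machine 2 produces less, a defective part most likely came from it, because its defect rate is five times higher.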

Examples

If you carefully study any branch of mathematics, it is not complete without exercises and sample solutions. So is the theory of probability: events, examples here are an integral component that confirms scientific calculations.

Example solution. Formula for the number of permutations

Let's say there are thirty cards in a deck, with face values starting from one. The question: how many ways are there to stack the deck so that the cards with face values one and two are not next to each other?

The task is set; now let's solve it. First we determine the number of permutations of thirty elements. Using the formula above, we get P_30 = 30!.

Based on this rule, we know how many ways there are to stack the deck, but we must subtract those in which the first and second cards are adjacent. Start with the case when the first card lies directly above the second. The first card can occupy twenty-nine positions, from the first to the twenty-ninth, with the second card immediately below it, giving twenty-nine positions for the pair. The remaining twenty-eight cards can fill the remaining places in any order, that is, in P_28 = 28! ways.

As a result, considering the case when the first card is directly above the second, there are 29 ⋅ 28! = 29! extra arrangements.

Using the same method, you need to calculate the number of redundant options for the case when the first card is under the second. It also turns out 29 ⋅ 28! = 29!

From this it follows that there are 2 ⋅ 29! extra arrangements, while the number of suitable ways to stack the deck is 30! − 2 ⋅ 29!. It remains only to count.

30! = 29! ⋅ 30; 30!- 2 ⋅ 29! = 29! ⋅ (30 - 2) = 29! ⋅ 28

Now we multiply all the numbers from one to twenty-nine together, and then multiply the result by 28. The answer is approximately 2.4757 ⋅ 10^32.
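The arithmetic of this card problem is easy to verify exactly with integer factorials:

```python
from math import factorial

total = factorial(30)    # all orderings of the deck
bad = 2 * factorial(29)  # "1 directly above 2" plus "2 directly above 1"
good = total - bad

assert good == factorial(29) * 28  # matches the 29! * 28 simplification
print(f"{good:.5e}")               # ≈ 2.47569e+32
```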

Example solution. Formula for Placement Number

In this problem, you need to find out how many ways there are to put fifteen volumes on one shelf, but on the condition that there are thirty volumes in total.

In this problem, the solution is slightly simpler than in the previous one. Using the already known formula, we calculate the number of arrangements of fifteen volumes chosen from thirty.

A_30^15 = 30 ⋅ 29 ⋅ 28⋅... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ 16 = 202 843 204 931 727 360 000

The answer, respectively, will be equal to 202,843,204,931,727,360,000.

Now let's take the task a little more difficult. You need to find out how many ways there are to arrange thirty books on two bookshelves, provided that only fifteen volumes can be on one shelf.

Before starting the solution, I would like to clarify that some problems are solved in several ways, so there are two ways in this one, but the same formula is used in both.

In this problem, you can take the answer from the previous one, because there we calculated how many times you can fill a shelf with fifteen books in different ways. It turned out A_30^15 = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ...⋅ 16.

We count the second shelf with the permutation formula, because fifteen books are placed on it and exactly fifteen remain. We use the formula P_15 = 15!.

It turns out that in total there are A_30^15 ⋅ P_15 ways. Moreover, the product of all the numbers from thirty down to sixteen multiplied by the product of the numbers from one to fifteen gives the product of all the numbers from one to thirty; that is, the answer is 30!.

But this problem can also be solved in a simpler way. Imagine one long shelf for all thirty books: all of them are placed on it, and since the condition requires two shelves, we cut the long one in half, getting two shelves of fifteen each. From this it follows that the number of arrangements is P_30 = 30!.
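Both solution paths give the same count, which is quick to confirm:

```python
from math import factorial, perm

# path 1: A_30^15 arrangements for shelf 1, then 15! permutations for shelf 2
two_shelves = perm(30, 15) * factorial(15)
# path 2: one long shelf of thirty books, P_30 = 30!
one_long_shelf = factorial(30)

assert two_shelves == one_long_shelf
print(two_shelves == one_long_shelf)  # True
```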

Example solution. Formula for combination number

Now we will consider a variant of the third combinatorics problem. You need to find out how many ways there are to choose fifteen books out of thirty when the order of the chosen books does not matter.

The formula for the number of combinations will, of course, be applied. From the condition it is clear that the order of the fifteen chosen books is not important, so we need to find the total number of combinations of fifteen out of thirty.

C_30^15 = 30! / (15! ⋅ (30 − 15)!) = 155 117 520

That's all. Using this formula, in the shortest possible time it was possible to solve such a problem, the answer, respectively, is 155 117 520.
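One call to `math.comb` confirms the count:

```python
from math import comb, factorial

c = comb(30, 15)  # number of 15-element subsets of a 30-element set
assert c == factorial(30) // (factorial(15) * factorial(15))
print(c)  # 155117520
```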

Example solution. The classical definition of probability

Using the formula above, you can find the answer to a simple problem, and it helps to visually trace the course of the solution.

The problem states that there are ten absolutely identical balls in an urn. Of these, four are yellow and six are blue. One ball is drawn from the urn. You need to find the probability of drawing a blue one.

To solve the problem, it is necessary to designate getting the blue ball as event A. This experience can have ten outcomes, which, in turn, are elementary and equally probable. At the same time, six out of ten are favorable for event A. We solve using the formula:

P(A) = 6 / 10 = 0.6

By applying this formula, we found out that the probability of getting a blue ball is 0.6.

Example solution. Probability of the sum of events

Now a problem will be presented that is solved using the formula for the probability of a sum of events. The condition: there are two boxes; the first contains one gray and five white balls, and the second contains eight gray and four white balls. One ball is taken from the first box and one from the second. It is necessary to find the probability that one of the drawn balls is gray and the other is white.

To solve this problem, it is necessary to designate events.

  • A - a gray ball is drawn from the first box: P(A) = 1/6.
  • A' - a white ball is drawn from the first box: P(A') = 5/6.
  • B - a gray ball is drawn from the second box: P(B) = 8/12 = 2/3.
  • B' - a white ball is drawn from the second box: P(B') = 4/12 = 1/3.

According to the condition of the problem, one of the combined events must occur: AB' or A'B. Using the multiplication formula, we get: P(AB') = 1/18, P(A'B) = 10/18.

The formula for multiplying probabilities has been used; now, to find the answer, we apply the formula for their addition:

P = P(AB' + A'B) = P(AB') + P(A'B) = 11/18.
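Exact fractions make this two-box calculation transparent; `Fraction` avoids any floating-point rounding:

```python
from fractions import Fraction as F

gray1, white1 = F(1, 6), F(5, 6)    # first box: 1 gray, 5 white
gray2, white2 = F(8, 12), F(4, 12)  # second box: 8 gray, 4 white

# one gray and one white, in either order: A*B' + A'*B
p = gray1 * white2 + white1 * gray2
print(p)  # 11/18
```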

So, using the formula, you can solve similar problems.

Outcome

The article provided information on the topic of probability theory, in which the probability of an event plays a crucial role. Of course, not everything could be covered, but based on the text presented one can get a theoretical acquaintance with this section of mathematics. The science in question can be useful not only in professional work but also in everyday life: with its help, you can estimate the chances of various events.

The text also touched upon significant dates in the history of the formation of probability theory as a science and the names of the people whose work went into it. This is how human curiosity led to people learning to calculate even random events. Once they were merely interested in it; today everyone already knows about it. And no one can say what awaits us in the future, what other brilliant discoveries related to the theory under consideration will be made. But one thing is certain: research does not stand still!

Do you want to know the mathematical chances of your bet succeeding? Then there are two pieces of good news for you. First: to calculate a bet's chance of winning, you do not need to carry out complex calculations and spend a lot of time; it is enough to use simple formulas that take a couple of minutes to work with. Second: after reading this article, you will easily be able to calculate the probability of any of your bets winning.

To determine a bet's chance of winning correctly, you need to take three steps:

  • Calculate the probability of the outcome implied by the bookmaker's odds;
  • Calculate the probability yourself from statistical data;
  • Find the value of the bet given both probabilities.

Let us consider in detail each of the steps, using not only formulas, but also examples.


Calculation of the probability embedded in the betting odds

The first step is to find out with what probability the bookmaker evaluates the chances of a particular outcome. After all, it is clear that bookmakers do not bet odds just like that. For this we use the following formula:

P_B = (1 / K) * 100%,

where P_B is the probability of the outcome according to the bookmaker;

K is the bookmaker's odds for the outcome.

Let's say the odds on a victory for London's Arsenal in a match against Bayern are 4. This means that the bookmaker rates the probability of an Arsenal win as (1/4) * 100% = 25%. Or Djokovic is playing against Youzhny: the multiplier on Novak's victory is 1.2, so his chances are (1/1.2) * 100% ≈ 83%.
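The conversion from decimal odds to implied probability is a single division. A minimal sketch (the function name is ours):

```python
def implied_probability(odds: float) -> float:
    """P_B = (1 / K) * 100% for decimal odds K."""
    return 100 / odds

print(implied_probability(4))           # 25.0 (the Arsenal example)
print(round(implied_probability(1.2)))  # 83   (the Djokovic example)
```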

This is how the bookmaker itself evaluates the chances of success for each player and team. Having completed the first step, we move on to the second.

Calculation of the probability of an event by the player

The second point of our plan is our own assessment of the probability of the event. Since we cannot mathematically account for parameters such as motivation and current form, we will use a simplified model and rely only on the statistics of previous meetings. To calculate the statistical probability of an outcome, we use the formula:

P_I = (UM / M) * 100%,

where P_I is the probability of the event according to the player;

UM is the number of matches in which the event in question occurred;

M is the total number of matches.

To make it clearer, let's give examples. Andy Murray and Rafael Nadal have played 14 matches; in 6 of them the total was under 21 games, and in 8 it was over. We need the probability that the next match is played to a total over: (8/14) * 100% ≈ 57%. Valencia have played 74 matches against Atlético at the Mestalla, winning 29 of them. The probability of Valencia winning: (29/74) * 100% ≈ 39%.

And we know all of this only thanks to the statistics of previous games! Naturally, such a probability cannot be calculated for a new team or player, so this betting strategy only suits matches in which the opponents are not meeting for the first time. Now we know how to determine both the bookmaker's and our own probabilities of outcomes, and we have all the knowledge needed for the last step.

Determining the value of a bet

The value of a bet and its chance of winning are directly related: the higher the value, the higher the chance of success. The value is calculated as follows:

V = P_I * K − 100%,

where V is the value;

P_I is the probability of the outcome according to the bettor;

K is the bookmaker's odds for the outcome.

Let's say we want to bet on Milan to win the match against Roma, and we calculate that the probability of the Rossoneri winning is 45%. The bookmaker offers odds of 2.5 for this outcome. Would such a bet be valuable? We carry out the calculation: V = 45% * 2.5 − 100% = 12.5%. Great, we have a value bet with a good chance of winning.

Let's take another case. Maria Sharapova plays against Petra Kvitova. We want to bet on Maria to win, which, by our calculations, has a 60% probability. Bookmakers offer a multiplier of 1.5 for this outcome. Determine the value: V = 60% * 1.5 − 100% = −10%. As you can see, this bet has no value, and it is better to refrain from it.
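Both worked examples reduce to the same one-line formula:

```python
def bet_value(probability_pct: float, odds: float) -> float:
    """V = P_I * K - 100%; a positive value suggests the bet is worth taking."""
    return probability_pct * odds - 100

print(bet_value(45, 2.5))  # 12.5  -- value bet (the Milan example)
print(bet_value(60, 1.5))  # -10.0 -- no value (the Sharapova example)
```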

PROBABILITY as an ontological category reflects the measure of the possibility of the emergence of any entity under any conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not tie itself to the necessity of a quantitative expression. The significance of probability is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

a concept characterizing the quantitative measure of the possibility of a certain event occurring under certain conditions. In scientific knowledge there are three interpretations of probability. The classical concept, which arose from the mathematical analysis of gambling and was most fully developed by B. Pascal, J. Bernoulli, and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible ones. For example, when throwing a die with 6 sides, each side can be expected to come up with a probability of 1/6, since no side has an advantage over another. Such symmetry of outcomes is specially taken into account when organizing games but is relatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical concept of probability, which rests on actual observation of the occurrence of a certain event over the course of lengthy experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of objective possibility of its occurrence, or probability. Therefore the statistical interpretation is based on the concept of relative frequency, which can be determined empirically. Probability as a theoretical concept never coincides with an empirically determined frequency; however, in many cases it differs little in practice from the relative frequency found as a result of lengthy observations. Many statisticians regard probability as a "double" of relative frequency, determined by the statistical study of observational results

or experiments. Less realistic was the definition of probability as the limit of relative frequencies of mass events, or collectives, proposed by R. von Mises. As a further development of the frequency approach, a dispositional, or propensity, interpretation of probability was put forward (K. Popper, I. Hacking, M. Bunge, T. Settle). According to this interpretation, probability characterizes the property of the generating conditions (for example, an experimental setup) to produce a sequence of massive random events. It is precisely this disposition that gives rise to physical propensities, which can be tested by means of relative frequencies.

The statistical interpretation of probability dominates scientific knowledge, since it reflects the specific nature of the regularities inherent in mass phenomena of a random character. In many physical, biological, economic, demographic, and other social processes it is necessary to take into account the action of many random factors characterized by a stable frequency. Identifying this stable frequency and assessing it quantitatively with the help of probability makes it possible to reveal the necessity that makes its way through the cumulative action of many accidents. This is where the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Soch., vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion but only make it more or less plausible. This plausibility, with precisely formulated premises, can sometimes be estimated with the help of probability. Its value is most often determined by means of comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to build various systems of probabilistic logic (R. Carnap, R. Jeffrey). In semantic concepts of logical probability, it is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision-making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the subject's degree of belief in the occurrence of a certain event, the probabilities themselves must be chosen so that the axioms of the probability calculus are satisfied. Under this interpretation, therefore, probability expresses a degree not so much of subjective as of rational belief. Consequently, decisions made on the basis of such probabilities will be rational, because they do not take into account the psychological characteristics and inclinations of the subject.

From an epistemological point of view, the difference between the statistical, logical, and personalistic interpretations of probability is that the first characterizes the objective properties and relations of mass phenomena of a random nature, while the latter two analyze the features of subjective, cognitive human activity under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution and cognition. The specificity of the probabilistic view of the world is revealed through the inclusion of the concepts of chance, independence and hierarchy (ideas of levels in the structure and determination of systems) among the basic concepts of being.

Ideas about probability originated in antiquity and related to the characteristics of our knowledge: probabilistic knowledge was recognized as distinct both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking, and on the development of knowledge generally, is directly related to the development of probability theory as a mathematical discipline. The mathematical doctrine of probability originated in the 17th century, with the development of a core of concepts that admit quantitative (numerical) characterization and express a probabilistic idea.

Intensive applications of probability to the development of knowledge fall on the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability personifies the stage in the development of science that is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its many applications. Probability theory is usually defined as the mathematical discipline that studies the regularities of mass random phenomena under certain conditions. Randomness means that, within this mass character, the existence of each elementary phenomenon does not depend on and is not determined by the existence of the other phenomena. At the same time, the mass character of the phenomena itself has a stable structure and contains certain regularities. A mass phenomenon divides quite strictly into subsystems, and the relative number of elementary phenomena in each subsystem (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, i.e., by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical regularities and statistical systems. The latter are systems formed from independent or quasi-independent entities, whose structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that, for the formation of systems with integral characteristics, sufficiently stable bonds must exist between their elements to cement the systems. The stability of statistical systems is provided instead by the presence of external conditions, the external environment: external rather than internal forces. The very definition of probability is always based on setting the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter are, as it were, built on top of the former.

The significance of probabilistic methods in cognition lies in the fact that they allow us to explore and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, "two-level" structure.

Analysis of the nature of probability is based on its frequency (statistical) interpretation. At the same time, for a very long time an understanding of probability dominated in science that was called logical, or inductive, probability. Logical probability is concerned with the validity of a separate, individual judgment under certain conditions. Can the degree of confirmation (reliability, truth) of an inductive, hypothetical conclusion be assessed in quantitative form? In the course of the formation of probability theory such questions were repeatedly discussed, and people began to speak of degrees of confirmation of hypothetical conclusions. This measure of probability is determined by the information at a given person's disposal, his experience, his views on the world, and his psychological mindset. In all such cases the magnitude of the probability does not admit strict measurement and lies practically outside the competence of probability theory as a consistent mathematical discipline.

An objective, frequency interpretation of probability was established in science with considerable difficulty. Initially, the understanding of the nature of probability was strongly influenced by the philosophical and methodological views characteristic of classical science. Historically, the formation of probabilistic methods in physics occurred under the decisive influence of the ideas of mechanics: statistical systems were treated simply as mechanical ones. Since the corresponding problems were not solved by the strict methods of mechanics, claims arose that the appeal to probabilistic methods and statistical regularities was a result of the incompleteness of our knowledge. In the history of classical statistical physics, numerous attempts were made to substantiate it on the basis of classical mechanics, but they all failed. Probability is grounded in the fact that it expresses structural features of a certain class of systems distinct from mechanical systems: the state of the elements of these systems is characterized by instability and by a special (not reducible to mechanics) nature of interactions.

The entry of probability into cognition leads to the denial of the concept of rigid determinism, to the denial of the basic model of being and cognition developed in the process of the formation of classical science. The basic models represented by statistical theories are of a different, more general nature: they include the ideas of randomness and independence. The idea of ​​probability is connected with the disclosure of the internal dynamics of objects and systems, which cannot be completely determined by external conditions and circumstances.

The concept of a probabilistic vision of the world, based on the absolutization of ideas about independence (just as the earlier paradigm was based on rigid determination), has now revealed its limitations. This is manifested most strongly in the transition of modern science to analytical methods for studying complex systems and in the physical and mathematical foundations of self-organization phenomena.


A professional bettor should be well versed in odds: able to quickly and correctly evaluate the probability of an event from a coefficient and, if necessary, convert odds from one format to another. In this manual we will talk about the types of coefficients and, using examples, analyze how to calculate the probability from a known coefficient and vice versa.

What are the types of coefficients?

Bookmakers offer three main types of odds: decimal odds, fractional (English) odds, and American odds. Decimal odds are the most common in Europe. American odds are popular in North America. Fractional odds are the most traditional type: they immediately show how much you need to bet in order to win a certain amount.

Decimal Odds

Decimal odds, also called European odds, are written in the usual number format as a decimal fraction, accurate to hundredths and sometimes even to thousandths. An example of a decimal odd is 1.91. Calculating your return with decimal odds is very easy: just multiply your bet amount by the odd. For example, in the match Manchester United - Arsenal, a Manchester United win is priced at 2.05, a draw at 3.9, and an Arsenal win at 2.95. Let's say we're confident United will win and bet $1,000 on them. Then our possible return is calculated as follows:

2.05 * $1000 = $2050;

Not that difficult, is it? The possible return on a draw or an Arsenal win is calculated in the same way.

Draw: 3.9 * $1000 = $3900;
Arsenal win: 2.95 * $1000 = $2950;
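The return calculation above can be sketched in a few lines of Python (the function name is illustrative, not from any betting library). Note that multiplying the stake by a decimal odd gives the total return with the stake included; the net profit is the return minus the stake, e.g. $2,050 − $1,000 = $1,050 on a United win.

```python
def decimal_payout(stake: float, odds: float) -> float:
    """Total return of a winning bet (stake included) at decimal odds."""
    return stake * odds

# The Manchester United - Arsenal figures from the example above
print(decimal_payout(1000, 2.05))  # MU win
print(decimal_payout(1000, 3.9))   # draw
print(decimal_payout(1000, 2.95))  # Arsenal win
```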

How to calculate the probability of an event from decimal odds?

Now imagine that we need to determine the probability of an event from the decimal odds set by the bookmaker. This is also very easy to do: just divide one by the coefficient.

Let's take the data we already have and calculate the probability of each event:

Manchester United win: 1 / 2.05 ≈ 0.488 = 48.8%;
Draw: 1 / 3.9 ≈ 0.256 = 25.6%;
Arsenal win: 1 / 2.95 ≈ 0.339 = 33.9%;
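The same reciprocal rule in Python (an illustrative sketch, not a library API):

```python
def implied_probability(decimal_odds: float) -> float:
    """Probability implied by a decimal odd: simply its reciprocal."""
    return 1 / decimal_odds

# The three outcomes of the example match
for odds in (2.05, 3.9, 2.95):
    print(f"{implied_probability(odds):.1%}")
```

Notice that the three implied probabilities sum to roughly 108%, not 100%: the excess over 100% is the margin the bookmaker builds into the odds.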

Fractional Odds (English)

As the name implies, a fractional coefficient is written as an ordinary fraction. An example of an English odd is 5/2. The numerator is the potential net winnings, and the denominator is the amount that must be wagered in order to receive those winnings. Simply put, we have to wager $2 to win $5. An odd of 3/2 means that in order to get $3 of net winnings, we will have to bet $2.

How to calculate the probability of an event from fractional odds?

Calculating the probability of an event from a fractional coefficient is also not difficult: just divide the denominator by the sum of the numerator and denominator.

For the fraction 5/2, we calculate the probability: 2 / (5+2) = 2 / 7 ≈ 0.286 = 28.6%;
For the fraction 3/2, we calculate the probability: 2 / (3+2) = 2 / 5 = 0.4 = 40%;
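The denominator-over-sum rule can be sketched in Python (function name is illustrative):

```python
def fractional_probability(numerator: int, denominator: int) -> float:
    """Implied probability of fractional (English) odds numerator/denominator."""
    return denominator / (numerator + denominator)

print(fractional_probability(5, 2))  # the 5/2 example
print(fractional_probability(3, 2))  # the 3/2 example
```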

American odds

American odds are unpopular in Europe but very popular in North America. This type of coefficient may look the most difficult, but only at first glance; in fact, there is nothing complicated about it. Let's take everything in order.

The main feature of American odds is that they can be either positive or negative. Examples of American odds are (+150) and (-120). The American odd (+150) means that in order to win $150 we need to bet $100. In other words, a positive American coefficient reflects the potential net winnings on a $100 bet. A negative American coefficient reflects the stake that must be placed in order to receive a net win of $100. For example, the odd (-120) tells us that by betting $120 we will win $100.

How to calculate the probability of an event from American odds?

The probability of an event according to the American odds is calculated according to the following formulas:

(-(M)) / ((-(M)) + 100), where M is a negative American coefficient;
100/(P+100), where P is a positive American coefficient;

For example, we have a coefficient (-120), then the probability is calculated as follows:

(-(M)) / ((-(M)) + 100); we substitute the value (-120) for "M";
(-(-120)) / ((-(-120)) + 100) = 120 / (120 + 100) = 120 / 220 ≈ 0.545 = 54.5%;

Thus, the probability of an event with an American coefficient (-120) is 54.5%.

For example, we have a coefficient (+150), then the probability is calculated as follows:

100/(P+100); we substitute the value (+150) instead of "P";
100 / (150 + 100) = 100 / 250 = 0.4 = 40%;

Thus, the probability of an event with an American coefficient (+150) is 40%.
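Both formulas can be combined into one small Python function that branches on the sign of the coefficient (an illustrative sketch):

```python
def american_probability(odds: int) -> float:
    """Implied probability of an American odd, positive or negative."""
    if odds < 0:
        return -odds / (-odds + 100)   # e.g. -120 -> 120 / 220
    return 100 / (odds + 100)          # e.g. +150 -> 100 / 250

print(american_probability(-120))
print(american_probability(150))
```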

How do you convert a probability percentage into a decimal coefficient?

In order to calculate the decimal coefficient from a known probability percentage, divide 100 by the probability of the event in percent. For example, if the probability of an event is 55%, then the decimal coefficient of this probability will be approximately 1.82.

100 / 55 ≈ 1.82
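In Python (an illustrative sketch; rounding to two decimal places matches the usual bookmaker display):

```python
def probability_to_decimal(percent: float) -> float:
    """Decimal coefficient corresponding to a probability given in percent."""
    return 100 / percent

print(round(probability_to_decimal(55), 2))  # the 55% example
print(round(probability_to_decimal(40), 2))
```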

How do you convert a probability percentage into a fractional coefficient?

In order to calculate a fractional coefficient from a known probability percentage, subtract one from the result of dividing 100 by the probability of the event in percent. For example, for a probability of 40%, the fractional coefficient of this probability will be 3/2.

(100 / 40) - 1 = 2.5 - 1 = 1.5;
The fractional coefficient is 1.5/1, i.e. 3/2.
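Since (100 / V) − 1 simplifies to (100 − V) / V, Python's `fractions.Fraction` gives the reduced fraction directly (an illustrative sketch):

```python
from fractions import Fraction

def probability_to_fractional(percent: int) -> Fraction:
    """Fractional coefficient for a probability in percent.
    (100 / V) - 1 simplifies to (100 - V) / V; Fraction reduces it exactly."""
    return Fraction(100 - percent, percent)

print(probability_to_fractional(40))  # the 40% example
```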

How do you convert a probability percentage into an American coefficient?

If the probability of an event is more than 50%, then the calculation is made according to the formula:

-(V / (100 - V)) * 100, where V is the probability in percent;

For example, we have an 80% probability of an event, then the American coefficient of this probability will be equal to (-400).

- (80 / (100 - 80)) * 100 = - (80 / 20) * 100 = - 4 * 100 = (-400);

If the probability of an event is less than 50%, then the calculation is made according to the formula:

((100 - V) / V) * 100, where V is the probability;

For example, if the probability of an event is 20%, then the American coefficient of this probability will be equal to (+400).

((100 - 20) / 20) * 100 = (80 / 20) * 100 = 4 * 100 = 400;
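The two cases can be combined into one function that branches at the 50% mark (an illustrative sketch; at exactly 50% the second formula gives +100, the conventional even-money line):

```python
def probability_to_american(percent: float) -> int:
    """American coefficient for a win probability in percent (0 < percent < 100)."""
    if percent > 50:
        return round(-percent / (100 - percent) * 100)   # favourite: negative odd
    return round((100 - percent) / percent * 100)        # underdog: positive odd

print(probability_to_american(80))  # the 80% example
print(probability_to_american(20))  # the 20% example
```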

How to convert a coefficient to another format?

There are times when it is necessary to convert coefficients from one format to another. For example, suppose we have the fractional coefficient 3/2 and need to convert it to decimal. To convert a fractional odd to a decimal odd, we first determine the probability of the event from the fractional odd, and then convert that probability into a decimal odd.

The probability of an event with a fractional coefficient of 3/2 is 40%.

2 / (3+2) = 2 / 5 = 0.4 = 40%;

Now we convert the probability of the event into a decimal coefficient by dividing 100 by the probability in percent:

100 / 40 = 2.5;

Thus, a fractional odd of 3/2 is equal to a decimal odd of 2.5. In a similar way, American odds can be converted to fractional, decimal to American, and so on. The only difficult part is the arithmetic itself.
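The two-step conversion above, with the implied probability as the pivot, can be sketched in Python (an illustrative function, not a library API):

```python
def fractional_to_decimal(num: int, den: int) -> float:
    """Convert fractional odds to decimal via the implied probability."""
    probability = den / (num + den)  # step 1: implied probability
    return 1 / probability           # step 2: decimal odd is its reciprocal

print(fractional_to_decimal(3, 2))  # the 3/2 example
```

Chaining through the probability works between any pair of formats; for this particular pair the two steps collapse to (num + den) / den, i.e. the decimal odd is simply the fractional odd plus one.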
