Dice generator - online dice. History of dice

20.06.2020

What are the three laws of randomness, and why does unpredictability allow us to make our most reliable predictions?

Our mind resists the idea of chance with all its might. Over the course of our evolution as a species, we have developed the ability to look for cause-and-effect relationships in everything. Long before the advent of science, we already knew that a crimson-red sunset foreshadows a dangerous storm, and a feverish blush on a baby's face means that its mother will have a difficult night. Our mind automatically tries to structure the data we receive in such a way that it helps us draw conclusions from our observations and use these conclusions to understand and predict events.

The idea of randomness is so difficult to accept because it contradicts the basic instinct that drives us to look for rational patterns in the world around us. Random events demonstrate to us that such patterns do not always exist. This means that randomness fundamentally limits our intuition, since it proves that there are processes whose course we cannot fully predict. This concept is not easy to accept, even though it is an essential part of the mechanism of the Universe. Without understanding what randomness is, we find ourselves at a dead end in a perfectly predictable world that simply does not exist outside of our imagination.

I would say that only when we have mastered the three aphorisms - the three laws of chance - can we free ourselves from our primitive desire for predictability and accept the Universe as it is, and not as we would like it to be.

Randomness exists

We deploy all sorts of mental mechanisms to avoid facing chance. We talk about karma, that cosmic equalizer that connects apparently unrelated things. We believe in good and bad omens, in the idea that good things come in threes, and we claim that we are influenced by the position of the stars, the phases of the moon and the movement of the planets. If we are diagnosed with cancer, we automatically try to blame it on something (or someone).

But many events cannot be fully predicted or explained. Disasters occur unpredictably, and both good and bad people suffer, including those who were born “under a lucky star” or “under a favorable sign.” Sometimes we manage to predict something, but chance can easily refute even the most reliable predictions. Don't be surprised if your obese chain-smoking biker neighbor lives longer than you.

Moreover, random events can pretend to be non-random. Even the most astute scientist may have difficulty distinguishing between a real effect and a random fluctuation. Chance can turn placebos into magic cures and harmless compounds into deadly poisons; and can even create subatomic particles out of nothing.

Some events cannot be predicted

If you walk into any casino in Las Vegas and watch the crowd of players at the gaming tables, you will probably see someone who thinks he is lucky today. He has won several times in a row, and his brain assures him that he will continue to win, so the gambler continues to bet. You will also see someone who has just lost. The brain of the loser, like the brain of the winner, also advises him to continue the game: since you have lost so many times in a row, it means that now you will probably start getting lucky. It would be stupid to leave now and miss this chance.

But no matter what our brain tells us, there is no mysterious force that can provide us with a "lucky streak", nor any universal justice that would make sure the loser finally starts winning. The universe doesn't care whether you win or lose; to it, all dice rolls are the same.

No matter how much effort you put into watching the dice roll again, and no matter how closely you look at the players who think they've gotten lucky, you will get absolutely no information about the next roll. The result of each throw is completely independent of the history of previous throws. Therefore, any expectation that one can gain an advantage by watching the game is doomed to failure. Such events - independent of anything and completely random - defy any attempt to find patterns, because these patterns simply do not exist.

Randomness poses a barrier to human ingenuity because it demonstrates that all our logic, all our science and reasoning cannot fully predict the behavior of the universe. No matter what methods you use, no matter what theory you invent, no matter what logic you apply to predict the results of a dice roll, you will lose five out of six times. Always.

A collection of random events is predictable, even if individual events are not

Randomness is frightening: it limits the reliability of even the most sophisticated theories and hides certain elements of nature from us, no matter how persistently we try to penetrate their essence. Nevertheless, it does not follow that the random is a synonym for the unknowable. That is not true at all.

Randomness obeys its own rules, and these rules make the random process understandable and predictable.

The Law of Large Numbers states that although single random events are completely unpredictable, a large enough sample of these events can be quite predictable, and the larger the sample, the more accurate the prediction. Another powerful mathematical tool, the central limit theorem, shows that the sum of a sufficiently large number of random variables will have a distribution close to normal. With these tools we can predict events quite accurately in the long term, no matter how chaotic, strange and random they may be in the short term.
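As a rough illustration of the Law of Large Numbers, here is a short sketch in C (my own addition, not part of the original text; it relies on the standard rand() generator, which is good enough for a demonstration). It shows how the average of many simulated die rolls settles toward the expected value of 3.5:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Roll a fair six-sided die using the standard library generator. */
static int roll_d6(void) {
    return rand() % 6 + 1;
}

int main(void) {
    srand((unsigned)time(NULL));
    long long rolls[] = {10, 100, 10000, 1000000};
    for (int i = 0; i < 4; i++) {
        long long sum = 0;
        for (long long n = 0; n < rolls[i]; n++) {
            sum += roll_d6();
        }
        /* The sample mean drifts toward 3.5 as the number of rolls grows. */
        printf("%lld rolls: average = %.4f\n", rolls[i], (double)sum / rolls[i]);
    }
    return 0;
}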

The rules of chance are so powerful that they form the basis of the most immutable laws of physics. Although the atoms in a container of gas move randomly, their overall behavior is described by a simple set of equations. Even the laws of thermodynamics rest on the fact that a large number of random events are predictable; these laws are unshakable precisely because chance is so absolute.

It is ironic that it is the unpredictability of random events that gives us the opportunity to make our most reliable predictions.

The most common type of die is shaped like a cube, with the numbers from one to six on its faces. The player throws it onto a flat surface and reads the result on the top face. Dice are a true mouthpiece of chance, of good or bad luck.

Chance.
Dice have existed for a very long time, but they acquired their traditional six-sided form around 2600 BC. The ancient Greeks loved to play dice, and in their legends the hero Palamedes, unjustly accused of treason by Odysseus, is named as their inventor. According to legend, he invented the game to entertain the soldiers besieging Troy, which was eventually captured thanks to a huge wooden horse. The Romans of Julius Caesar's time also entertained themselves with a variety of dice games. In Latin a die was called datum, which means "given."

Prohibitions.
In the Middle Ages, around the 12th century, dice became very popular in Europe: easy to carry everywhere, they were loved by soldiers and peasants alike. It is said that there were over six hundred different games! The production of dice became a profession in its own right. King Louis IX (1214-1270), returning from a crusade, did not approve of gambling and ordered the production of dice to be banned throughout the kingdom. More than the game itself, the authorities were bothered by the unrest that accompanied it: people played mainly in taverns, and games often ended in fights and stabbings. But no prohibition prevented dice from surviving to this day.

Loaded dice!
The result of a die roll is always determined by chance, but some cheaters try to change that. By drilling a hole in a die and pouring lead or mercury into it, you can make the throw give the same result every time. Such a die is called "loaded." Dice have been made from many materials (gold, stone, crystal, bone) and in many shapes. Small pyramid-shaped (tetrahedral) dice were found in the tombs of the Egyptian pharaohs who built the great pyramids! At various times dice have been made with 8, 10, 12, 20 and even 100 sides. Usually they are marked with numbers, but letters or images can take their place, giving room for imagination.

How to throw dice.
Not only do dice come in different shapes; there are also different ways of throwing them. The rules of some games require you to roll in a certain way, usually to rule out a calculated throw or to prevent the die from coming to rest in a tilted position. Dice sometimes come with a special cup to prevent cheating or to keep them from falling off the gaming table. In craps, the dice must hit the table or the wall, so that cheaters cannot fake a throw by simply sliding a die without rotating it.

Randomness and probability.
A die always gives a random result that cannot be predicted. With one die, a player has just as much chance of rolling a 1 as a 6; it is all determined by chance. With two dice the picture changes, because the player has more information about the possible results: a total of 7, for example, can be obtained in several ways (1 and 6, 5 and 2, 4 and 3...), while there is only one way to get a total of 2: rolling 1 twice. The probability of getting a 7 is therefore higher than that of getting a 2! This is the domain of probability theory, and many games, especially games for money, are built on this principle.
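For anyone who wants to check the arithmetic, here is a small illustrative sketch in C (my own addition, not part of the original text) that counts the combinations of two dice producing each total:

#include <stdio.h>

int main(void) {
    int ways[13] = {0};               /* ways[s] = number of combinations giving total s */
    for (int a = 1; a <= 6; a++)
        for (int b = 1; b <= 6; b++)
            ways[a + b]++;
    /* 36 equally likely combinations in total; 7 can be made 6 ways, 2 only 1 way. */
    for (int s = 2; s <= 12; s++)
        printf("total %2d: %d of 36 (%.1f%%)\n", s, ways[s], 100.0 * ways[s] / 36);
    return 0;
}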

About the use of dice.
Dice can be a game in themselves, with no other components. Games for a single die, however, are practically nonexistent: the rules usually require at least two (craps, for example). To play dice poker you need five dice, a pen and paper. The goal is to assemble combinations similar to those of the card game of the same name, recording the points for them in a special table. Dice are also a very popular component of board games, used to move pieces or to decide the outcome of in-game battles.

The die is cast.
In 49 BC, Julius Caesar, having conquered Gaul, was returning to Rome. But his power worried the senators, who decided to disband his army before his return. Arriving at the border of the republic, Caesar decided to defy the order and cross it with his army. Before crossing the Rubicon (the river that marked the border), he said to his legionaries, "Alea jacta est" ("the die is cast"). The saying became a catchphrase meaning that, as in the game, after certain decisions are made there is no turning back.

The advantage of an online dice generator over regular dice is obvious: it can never get lost! The virtual die also does its job better than a real one, since manipulation of the results is completely excluded and you can rely only on His Majesty Chance. Online dice are, among other things, great entertainment in your spare time. Generating a result takes three seconds, fueling the excitement and interest of the players. To simulate a dice roll you just need to press the "1" key on the keyboard, so you are not distracted from, say, an exciting board game.



When we hear the word "dice," we immediately think of casinos, which simply cannot do without them. To begin with, let's recall what this item actually is.

Dice are cubes, each side of which shows a number from 1 to 6 in dots. When we throw them, we always hope that the number we have imagined and wished for will come up. But sometimes a die lands on its edge and shows no number at all. In that case, the person who threw it may choose any number.

It also happens that the die rolls under the bed or the wardrobe, and when it is retrieved the number has changed. In that case the die is rolled again so that everyone can clearly see the number.

Online dice roll in 1 click

In a game with regular dice it is very easy to cheat. To get the desired number, you place that side of the die facing up and spin it so that it stays on top (only the sides rotate). This is not a complete guarantee, but the winning percentage will be around seventy-five percent.

If you use two dice, the chances drop to about thirty percent, but that is still considerable. Because of such cheating, many gaming groups prefer not to use physical dice.

Our service exists precisely to avoid such situations. It is impossible to cheat with us, since an online dice roll cannot be faked: a number from 1 to 6 appears on the page in a completely random and uncontrollable way.

Convenient dice generator

A big advantage is that the online dice generator cannot get lost (especially since it can be bookmarked), whereas an ordinary small die can easily disappear somewhere. Another huge advantage is that manipulation of the results is completely excluded. The generator lets you select from one to three dice to roll at the same time.

The online dice generator is an entertaining tool and one way to exercise your intuition. Use our service and get instant, reliable results.


Einstein's claim that God does not play dice with the universe has been misinterpreted

Few of Einstein's catchphrases have been as widely quoted as his remark that God does not play dice with the universe. People naturally take this witty comment as evidence that he was dogmatically opposed to quantum mechanics, which treats randomness as a characteristic feature of the physical world. When the nucleus of a radioactive element decays, it happens spontaneously; no rule will tell you exactly when or why. When a particle of light hits a semi-transparent mirror, it either reflects off it or passes through; until the moment it happens, either outcome is possible. And you don't need to go to a laboratory to see such processes: many Internet sites show streams of random numbers generated by Geiger counters or quantum optics devices. Being unpredictable even in principle, such numbers are ideal for cryptography, statistics and online poker tournaments.

Einstein, as the standard legend goes, refused to accept that some events are indeterministic by nature: they just happen, and nothing can be done to find out why. Standing almost alone among his peers, he clung with both hands to the mechanical Universe of classical physics, ticking off the seconds like clockwork, in which each moment predetermines what will happen in the next. The line about the dice game came to stand for the other side of his life: the tragedy of a revolutionary turned reactionary who revolutionized physics with his theory of relativity but, as Niels Bohr diplomatically put it, when faced with quantum theory "went off to lunch."

However, over the years, many historians, philosophers and physicists have questioned this version of the story. Diving into the sea of everything Einstein actually said, they have discovered that his judgments about unpredictability were more radical and more nuanced than they are usually portrayed. "Trying to dig up the true story becomes something of a mission," says Don A. Howard, a historian at the University of Notre Dame. "It's amazing when you go into the archives and see discrepancies with the conventional wisdom." As he and other historians of science have shown, Einstein recognized the indeterministic nature of quantum mechanics, which is not surprising, since it was he who discovered its indeterminism. What he never accepted was that this indeterminism is fundamental to nature. All of it indicated, he believed, that the problem arose at a deeper level of reality that the theory did not capture. His criticism was not mystical; it focused on specific scientific problems that remain unresolved to this day.

The question of whether the universe is a clockwork machine or a dice table strikes at the foundations of what we think physics is: the search for simple rules that underlie the amazing diversity of nature. If something happens without any reason, it puts an end to rational inquiry. "Fundamental indeterminism would be the end of science," says Andrew S. Friedman, a cosmologist at the Massachusetts Institute of Technology. Yet philosophers throughout history have believed that indeterminism is a necessary condition for human free will. Either we are all cogs in a clockwork mechanism, and therefore everything we do is predetermined, or we are the agents of our own destiny, in which case the Universe cannot be deterministic after all.

This dichotomy has had very real consequences in the way society holds people accountable for their actions. Our legal system is based on the assumption of free will; for the accused to be found guilty, he must have acted with intent. Courts constantly puzzle over the question: what if a person is not responsible due to insanity, youthful impulsiveness or a rotten social environment?

However, whenever people speak of this dichotomy, someone usually tries to expose it as a misconception. Indeed, many philosophers believe that it is pointless to ask whether the universe is deterministic or indeterministic. It can be both, depending on how large or complex the subject of study is: particles, atoms, molecules, cells, organisms, minds, communities. "The difference between determinism and indeterminism is a difference that depends on the level at which we study the problem," says Christian List, a philosopher at the London School of Economics and Political Science. "Even if you observe determinism at a particular level, it is quite consistent with indeterminism at both higher and lower levels." The atoms in our brains can behave in a completely deterministic manner while still allowing us freedom of action, since atoms and organs function at different levels.

In a similar way, Einstein sought a deterministic subquantum level, while at the same time not denying that the quantum level is probabilistic.

What did Einstein object to?

How Einstein earned the label of opponent of quantum theory is a mystery almost as big as quantum mechanics itself. The very concept of the quantum, a discrete unit of energy, was the fruit of his thinking in 1905, and for a decade and a half he stood almost single-handedly in its defense. Einstein anticipated much of what physicists today consider the main features of quantum physics, such as the strange ability of light to act as both a particle and a wave, and it was from his thinking about wave physics that Erwin Schrödinger developed the most widely accepted formulation of quantum theory in the 1920s. Nor was Einstein an opponent of chance. In 1916 he showed that when atoms emit photons, the time and direction of the emission are random.

"This goes against the popular image of Einstein as an opponent of the probabilistic approach," argues Jan von Plato of the University of Helsinki. But Einstein and his contemporaries faced a serious problem. Quantum phenomena are random, but quantum theory itself is not. The Schrödinger equation is 100% deterministic. It describes a particle or system of particles using what is called a wave function, which takes advantage of the wave nature of particles and explains the wave-like pattern that a collection of particles produces. The equation predicts what will happen to the wave function at any given time with absolute certainty. In many respects, this equation is more deterministic than Newton's laws of motion: it does not lead to confusions such as singularity (where quantities become infinite and therefore indescribable) or chaos (where motion becomes unpredictable).

The catch is that the determinism of the Schrödinger equation is the determinism of the wave function, and the wave function cannot be observed directly, unlike the positions and velocities of particles. Instead, the wave function determines the quantities that can be observed and the probability of each of the possible outcomes. The theory leaves open the questions of what the wave function itself is and whether it should be considered literally as a real wave in our material world. Accordingly, the following question remains open: is the observed randomness an integral internal property of nature or just its façade? “It is claimed that quantum mechanics is non-deterministic, but this is too hasty a conclusion,” says philosopher Christian Wuthrich of the University of Geneva in Switzerland.

Werner Heisenberg, another pioneer of quantum theory, thought of the wave function as a haze of potential existence. If you cannot say clearly and unambiguously where a particle is, it is because the particle is not really located anywhere in particular. Only when you observe the particle does it materialize somewhere in space. The wave function may be spread out over a huge region of space, but at the moment an observation is made it instantly collapses, shrinking into a narrow point at a single specific place, and suddenly a particle appears there. The moment you look at the particle - bang! - it stops behaving deterministically and jumps to a final state, like a child grabbing a chair in a game of musical chairs. (In that game, children dance to music in a circle around chairs, of which there is one fewer than there are players, and try to sit on a free seat as soon as the music stops.)

There is no law that governs this collapse. There is no equation for it. It just happens. Collapse became a key element of the Copenhagen interpretation, the view of quantum mechanics named after the city where Bohr and his institute, along with Heisenberg, did much of the seminal work. (Paradoxically, Bohr himself never accepted the collapse of the wave function.) The Copenhagen school takes the observed randomness of quantum physics at face value, as something not amenable to further explanation. Most physicists agree, partly because of what psychologists call the anchoring effect: it is a satisfactory enough explanation, and it came first. Although Einstein was not an opponent of quantum mechanics, he was certainly an opponent of its Copenhagen interpretation. He balked at the idea that the act of measurement causes a break in the continuous evolution of a physical system, and it was in this context that he began to voice his objection to the divine throwing of dice. "It was precisely this issue that Einstein lamented in 1926, not the overarching metaphysical claim of determinism as an absolutely necessary condition," says Howard. "He was particularly active in the heated debate about whether the collapse of the wave function leads to a breakdown of continuity."


Plurality of reality. And yet, is the world deterministic or not? The answer depends not only on the fundamental laws of motion but also on the level at which we describe the system. Consider five atoms in a gas moving deterministically (top diagram). They start from almost the same location and gradually diverge. At the macroscopic level (lower diagram), however, it is not the individual atoms that are visible but an amorphous flow in the gas. After some time the gas will probably have split randomly into several streams. This randomness at the macro level is not merely a by-product of the observer's ignorance of the micro level; it is an objective property of nature, reflecting the way the atoms come together. Likewise, Einstein proposed that a deterministic internal structure of the universe gives rise to the probabilistic nature of the quantum realm.

Collapse can hardly be a real process, Einstein argued. It would require instantaneous action at a distance, a mysterious mechanism by which, say, both the left and right halves of the wave function collapse into the same tiny point even though no force coordinates their behavior. Not only Einstein but every physicist of his day believed such a process impossible; it would have to occur faster than the speed of light, in obvious contradiction with the theory of relativity. In effect, quantum mechanics doesn't just hand you dice - it hands you pairs of dice that always come up on the same sides, even if you roll one in Vegas and the other on Vega. To Einstein it seemed obvious that the dice must be loaded, secretly rigged in advance to influence the outcome of the throws. But the Copenhagen school denied any such possibility, implying that the dice really do influence each other instantly across vast expanses of space. Moreover, Einstein was troubled by the power the Copenhagenists attributed to the act of measurement. What is a measurement, anyway? Is it something that only intelligent beings, or even only tenured professors, can perform? Heisenberg and the other members of the Copenhagen school never specified. Some suggested that we create reality in our minds through the act of observing it, an idea that sounds poetic, perhaps too poetic. Einstein also considered it the height of Copenhagen impudence to declare that quantum mechanics was complete, a final theory never to be superseded. He regarded all theories, including his own, as bridges to something greater.

In fact, Howard argues, Einstein would have been happy to accept indeterminism if he had gotten answers to the problems he needed solved - if, for example, someone could clearly articulate what a measurement is and how particles can stay synchronized without action at a distance. A sign that Einstein considered indeterminism a secondary problem is that he made the same demands on deterministic alternatives to the Copenhagen school, and rejected them too. Another historian, Arthur Fine of the University of Washington, believes that Howard exaggerates Einstein's receptiveness to indeterminism, but agrees that his view rests on a more solid foundation than the generations of physicists who judged it by snippets of his remarks about the dice game have been led to believe.

Random thoughts

In the tug of war with the Copenhagen school, Einstein's position was that quantum randomness is like every other kind of randomness in physics: the product of goings-on at a deeper level. The dance of tiny grains of dust in a beam of light betrays the complex motion of molecules, and the emission of photons or the radioactive decay of nuclei, Einstein believed, is a similar process. In his view, quantum mechanics is an approximate theory that expresses the overall behavior of nature's building blocks but lacks the resolution to capture individual details.

A deeper, more complete theory would explain the movement completely - without any mysterious jumps. From this point of view, the wave function is a collective description, like the statement that a fair die, if tossed repeatedly, will land approximately the same number of times on each of its sides. The collapse of the wave function is not a physical process, but an acquisition of knowledge. If you roll a six-sided die and it comes up with, say, a four, the range of options from one to six shrinks, or one might say collapses, to the actual value of "four." A godlike demon who can track the details of the atomic structure that influence the outcome of a die (i.e., measure exactly how your hand pushes and twists a die before it hits the table) will never talk about collapse.

Einstein's intuition was reinforced by his early work on the collective effect of molecular motion, studied by a branch of physics called statistical mechanics, in which he showed that physics can be probabilistic even when the underlying phenomenon is a deterministic reality. In 1935, Einstein wrote to the philosopher Karl Popper: “I do not think you are right in your assertion that it is impossible to draw statistical conclusions based on a deterministic theory. Take classical statistical mechanics (the theory of gases or the theory of Brownian motion).” Probabilities in Einstein's understanding were as real as those in the Copenhagen School interpretation. Manifesting themselves in the fundamental laws of motion, they also reflect other properties of the surrounding world; they are not just artifacts of human ignorance. Einstein suggested that Popper consider, as an example, a particle that moves in a circle at a constant speed; the probability of finding a particle in a given section of a circular arc reflects the symmetry of its trajectory. Similarly, the probability of a die landing on a given face is one in six, since it has six equal faces. "He understood better than most at the time that important physics was contained in the details of statistical-mechanical probability," says Howard.

Another lesson from statistical mechanics was that the quantities we observe do not necessarily exist at a deeper level. A gas has a temperature, for example, but it makes no sense to speak of the temperature of a single gas molecule. By analogy, Einstein became convinced that a subquantum theory would have to mark a radical break from quantum mechanics. In 1936 he wrote: "There is no doubt that quantum mechanics has captured a beautiful element of truth <...> However, I do not believe that quantum mechanics will be the starting point in the search for this basis, just as, conversely, one cannot move from thermodynamics (and therefore statistical mechanics) to the foundations of mechanics." To fill in this deeper level, Einstein turned to a unified field theory, in which particles would be derived from structures not resembling particles at all. In short, the conventional wisdom that Einstein refused to accept the probabilistic nature of quantum physics is wrong. He was trying to explain randomness, not to pretend it does not exist.

Make your level the best

Although Einstein's project of a unified theory failed, the basic tenets of his intuitive approach to randomness still stand: indeterminism can arise from determinism. The quantum and subquantum levels - or any other pair of levels in the hierarchy of nature - consist of different kinds of structures, so they obey different kinds of laws. The law governing one level may naturally allow an element of randomness even if the laws of the level below it are completely deterministic. "Deterministic microphysics does not give rise to deterministic macrophysics," says philosopher Jeremy Butterfield of the University of Cambridge.

Imagine a die at the atomic level. The die may consist of an unimaginably large number of atomic configurations that are completely indistinguishable to the naked eye. Track any one of these configurations as the die tumbles and it will lead to a specific outcome, in a strictly deterministic way. In some configurations the die ends up with one dot on its top face, in others with two, and so on. Therefore a single macroscopic state (the thrown die) can lead to several possible macroscopic outcomes (one of the six faces landing up). "If we describe the die at the macro level, we can view it as a stochastic system that allows for objective randomness," says List, who studies the linkage between levels together with Marcus Pivato, a mathematician at the University of Cergy-Pontoise in France.

Although the higher level builds on the lower one, it is autonomous. To describe dice you have to work at the level at which dice exist as such, and when you do so you cannot help but set aside the atoms and their dynamics. If you cross one level with another, you commit a category mistake: it is like asking about the political affiliation of a salmon sandwich (to borrow an example from the philosopher David Albert of Columbia University). "When we have a phenomenon that can be described at different levels, we have to be conceptually very careful not to mix the levels," says List. For this reason, the result of throwing a die does not merely appear random. It is truly random. A godlike demon might boast that he knows exactly what will happen, but he only knows what will happen to the atoms. He does not even know what a die is, because that is higher-level information. The demon never sees the forest, only the trees. He is like the protagonist of the Argentine writer Jorge Luis Borges' story "Funes the Memorious", a man who remembers everything but grasps nothing. "To think is to forget differences, to generalize, to abstract," writes Borges. For the demon to know which side the die will land on, one would have to explain to him what to look for. "The demon will only be able to figure out what is happening at the higher level if he is given a detailed description of how we define the boundary between levels," says List. Indeed, after that the demon will probably envy us mortals.

The logic of levels also works in the opposite direction. Non-deterministic microphysics can lead to deterministic macrophysics. A baseball may be made of particles that behave chaotically, yet its flight is completely predictable; the quantum randomness averages out and disappears. Likewise, gases are made up of molecules executing extremely complex, and indeed indeterministic, movements, yet their temperature and other properties follow laws that are perfectly simple. More speculatively, some physicists, such as Robert Laughlin of Stanford University, suggest that the lower level makes no difference at all. The building blocks can be anything and their collective behavior will still be the same. After all, systems as diverse as water molecules, stars in a galaxy, and cars on a freeway obey the same laws of fluid flow.

Finally free

When you think in terms of levels, the worry that indeterminism would mark the end of science goes away. There is no high wall around us protecting our law-abiding corner of the Universe from the anarchic and incomprehensible rest of it. In fact, the world is a layer cake of determinism and indeterminism. The Earth's climate, for example, is governed by Newton's deterministic laws of motion, yet weather forecasts are probabilistic, while seasonal and long-term climate trends are again predictable. Biology likewise follows from deterministic physics, yet organisms and ecosystems require other modes of description, such as Darwinian evolution. "Determinism doesn't explain absolutely everything," notes Tufts University philosopher Daniel Dennett. "Why are there giraffes? Because it was determined that there would be?"

People are embedded within this layer cake. We have a powerful sense of free will. We often make unpredictable and quite important decisions, and we realize that we could have acted differently (and often regret that we did not). For thousands of years so-called libertarians, adherents of the philosophical doctrine of free will (not to be confused with the political movement!), have argued that human freedom requires freedom at the level of the particle. Something must break the deterministic course of events, such as quantum randomness or the "swerves" that some ancient philosophers believed atoms could undergo in their motion (the notion of a random, unpredictable deviation of an atom from its trajectory was introduced into ancient philosophy by Lucretius to defend the atomistic doctrine of Epicurus).

The main problem with this line of reasoning is that it frees the particles but leaves us as slaves. It doesn't matter whether your decision was predetermined during the Big Bang or by a tiny particle, it is still not your decision. To be free, we require indeterminism not at the particle level, but at the human level. And this is possible because the human level and the particle level are independent of each other. Even if everything you do could be traced back to the very first steps, you are the master of your actions, because neither you nor your actions exist at the level of matter, but only at the macro level of consciousness. “This macro-indeterminism, based on micro-determinism, perhaps guarantees free will,” Butterfield believes. Macroindeterminism is not the reason for your decisions. This is your decision.

Some will no doubt object that you are still a puppet, with the laws of nature as the puppeteer, and that your freedom is nothing more than an illusion. But the very word "illusion" conjures up mirages in the desert and women sawn in half: things that do not exist in reality. Macroindeterminism is nothing of the sort. It is quite real, just not fundamental. It can be compared to life. Individual atoms are utterly inanimate matter, yet a huge mass of them can live and breathe. "Everything that has to do with agents, their intentional states, their decisions and choices - none of these entities figure in the conceptual toolkit of fundamental physics, but that does not mean these phenomena are not real," notes List. "It only means that they are phenomena of a much higher level."

It would be a category mistake, if not outright ignorance, to describe human decisions in terms of the mechanics of atoms moving in your head. Instead we need the concepts of psychology: desire, opportunity, intention. Why did I drink water and not wine? Because I wanted to. My desires explain my actions. Most of the time, when we ask "why?", we are looking for a person's motivation, not their physical background. Psychological explanations allow for the kind of indeterminism that List speaks of. For example, game theorists model human decision-making by laying out the range of options and explaining which one you would choose if you acted rationally. Your freedom to choose a particular option shapes your choices even if you never actually settle on it.

Of course, List's arguments do not explain free will completely. The hierarchy of levels opens up space for free will by separating psychology from physics and giving us the opportunity to do unexpected things. But we still have to make use of that opportunity. If, for example, we made all our decisions by tossing a coin, that would still count as macroindeterminism, but it would hardly qualify as free will in any meaningful sense. On the other hand, some people's decision-making may be so impaired that they cannot be said to act freely.

This approach to the problem of determinism also gives meaning to an interpretation of quantum theory that was proposed a few years after Einstein's death in 1955: the many-worlds, or Everett, interpretation. Its proponents argue that quantum mechanics describes a collection of parallel universes - a multiverse - that behaves deterministically on the whole but appears indeterministic to us because we can see only a single universe. For example, an atom may emit a photon to the right or to the left; quantum theory leaves the outcome open. According to the many-worlds interpretation, this is what we observe because the very same situation arises in countless parallel universes: in some of them the photon flies deterministically to the left, in the rest to the right. Unable to tell exactly which universe we are in, we cannot predict what will happen, so the situation looks inexplicable from the inside. "There is no true randomness in the cosmos, but events can appear random in the eyes of the observer," explains cosmologist Max Tegmark of the Massachusetts Institute of Technology, a well-known proponent of this view. "The randomness reflects your inability to determine where you are."

It is like saying that a die or a brain could be built from any one of countless atomic configurations. The configuration itself may be deterministic, but since we cannot know which one corresponds to our die or our brain, we are forced to treat the outcome as indeterministic. Parallel universes are thus not some exotic idea floating in a feverish imagination. Our body and our brain are tiny multiverses, and it is precisely this multiplicity of possibilities that gives us freedom.

It was written by designer Tyler Sigman and published on Gamasutra. I affectionately call it the "orc nostril hair" article, but it does a pretty good job of laying out the basics of probability in games.

This week's topic

Until now, almost everything we've talked about has been deterministic, and last week we took a close look at transitive mechanics and broke them down in as much detail as I could manage. But so far we have paid no attention to a huge aspect of many games: the non-deterministic aspects, in other words, randomness. Understanding the nature of randomness is very important for game designers, because we create systems that shape the player's experience in a given game, so we need to know how those systems work. If there is randomness in a system, we need to understand the nature of that randomness and how to change it to get the results we need.

Dice

Let's start with something simple: rolling dice. When most people think of dice, they think of the six-sided die known as a d6. But most gamers have seen plenty of other dice: four-sided (d4), eight-sided (d8), twelve-sided (d12), twenty-sided (d20)... and if you're a real geek, you might have a 30-sided or 100-sided die somewhere. If you're not familiar with this terminology, the "d" stands for die, and the number after it is how many sides it has. If there is a number before the "d", it indicates the number of dice to roll. For example, in Monopoly you roll 2d6.

In this article, then, the word "dice" is shorthand. There are a huge number of other random number generators that are not shaped like little plastic lumps but perform the same function: generating a random number from 1 to n. An ordinary coin can be thought of as a two-sided die, a d2. I have seen two designs of a seven-sided die: one looked like an ordinary die, the other more like a seven-sided wooden pencil. A four-sided dreidel (also known as a teetotum) works like a four-sided die. The spinner in Chutes & Ladders, with results from 1 to 6, corresponds to a six-sided die. A random number generator in a computer can produce any number from 1 to 19 if the designer asks for it, even though the computer has no 19-sided die (I will say more about generating random numbers on a computer next week). Although these devices all look different, they are equivalent: you have an equal chance of landing on each of several possible outcomes.

Dice have some interesting properties we need to know about. The first is that the probability of rolling any given face is the same (I am assuming you are rolling a proper die, not one with an irregular geometric shape). So if you want to know the average value of a roll (also known among probability fans as the expected value), add up the values of all the faces and divide by the number of faces. For a standard six-sided die the sum of the faces is 1+2+3+4+5+6 = 21; divide by the number of faces (6) and the average is 21/6 = 3.5. This is a special case, because we assume all outcomes are equally likely.

What if you have special dice? For example, I once saw a game with a six-sided die with special stickers on the faces: 1, 1, 1, 2, 2, 3, so it behaves like a strange three-sided die that is more likely to roll a 1 than a 2, and a 2 than a 3. What is the average roll for this die? Well, 1+1+1+2+2+3 = 10, divided by 6 gives 5/3, or approximately 1.66. So if you have this special die and players roll three of them and add up the results, you know the total will tend to be around 5, and you can balance the game on that assumption.
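A tiny sketch in C (my own illustration of the arithmetic above, using the re-stickered die just described) confirms the numbers:

#include <stdio.h>

int main(void) {
    int faces[] = {1, 1, 1, 2, 2, 3};  /* the re-stickered die from the example */
    int n = 6, sum = 0;
    for (int i = 0; i < n; i++)
        sum += faces[i];
    double average = (double)sum / n;                   /* expected value of one die */
    printf("average of one die: %.3f\n", average);               /* ~1.667 */
    printf("expected total of three dice: %.3f\n", 3 * average); /* ~5.0 */
    return 0;
}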

Dice and Independence

As I said, we proceed from the assumption that each face is equally likely to come up. This does not depend on how many dice you roll. Every roll of a die is independent, meaning that previous rolls do not affect the results of subsequent ones. Given enough trials, you will certainly notice "streaks" of numbers, such as a run of mostly high or mostly low rolls, or other patterns, and we will talk about that later, but that does not mean the dice are "hot" or "cold." If you roll a standard six-sided die and get a 6 twice in a row, the probability that the next roll is also a 6 is still 1/6. The probability does not increase because the die has "heated up." Nor does it decrease because a 6 has already come up twice in a row and some other face is now supposedly due. (Of course, if you roll a die twenty times and get a 6 every time, the chance that the twenty-first roll will be a 6 is pretty high... because it probably means you have a rigged die!) But with a fair die, each face has the same probability of coming up regardless of the results of other rolls. You can also imagine that we swap the die each time: if a 6 comes up twice in a row, remove the "hot" die from the game and replace it with a fresh six-sided die. I apologize if any of you already knew all this, but I needed to clear it up before moving on.

How to make the dice roll more or less random

Let's talk about how to get different results from different dice. Whether you roll a die once or several times, the game will feel more random if the die has more sides. The more times you roll a die, or the more dice you roll, the more the results tend toward the average. For example, if you roll 1d6+4 (that is, a standard six-sided die once, adding 4 to the result), the result will be a number between 5 and 10. If you roll 5d2, the result will also be a number between 5 and 10. But with the single six-sided die, the results 5, 8 and 10 are all equally likely, whereas a 5d2 roll will mostly produce 7s and 8s and the other values less often. Same range, even the same average value (7.5 in both cases), but the nature of the randomness differs.
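To see the difference concretely, here is a small enumeration sketch in C (my own example, not from the original article): it prints the probability of each total from 5 to 10 for 1d6+4 and for 5d2.

#include <stdio.h>

int main(void) {
    /* 1d6+4: each total from 5 to 10 has probability 1/6. */
    printf("1d6+4: every total from 5 to 10 has probability %.3f\n", 1.0 / 6.0);

    /* 5d2: enumerate all 2^5 = 32 equally likely outcomes of five two-sided dice. */
    int counts[11] = {0};                 /* counts[total] for totals 5..10 */
    for (int m = 0; m < 32; m++) {
        int total = 0;
        for (int d = 0; d < 5; d++)
            total += (m >> d & 1) + 1;    /* each bit is one die showing 1 or 2 */
        counts[total]++;
    }
    for (int t = 5; t <= 10; t++)
        printf("5d2 total %2d: probability %.3f\n", t, counts[t] / 32.0);
    return 0;
}

Running it shows the 5d2 totals piling up on 7 and 8, while 1d6+4 spreads evenly, which is exactly the difference in "feel" described above.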

Wait a minute. Didn't I just say that dice don't heat up or cool down? Now I'm saying that if you roll a lot of dice, the results of the rolls tend to be closer to the average? Why?

Let me explain. If you roll a single die, each face has the same probability of coming up. That means if you roll many dice, over time each face will come up roughly the same number of times. The more dice you roll, the closer the overall result will get to the average. This is not because a number that has come up "forces" another number that hasn't appeared yet to come up. It is because a small streak of 6s (or 20s, or whatever) ends up not mattering much if you roll the dice another ten thousand times and mostly get the average... you may get a few high numbers now, but later a few low ones, and over time they move toward the average. Not because previous rolls affect the die (seriously, a die is a piece of plastic; it doesn't have a brain to think, "Oh, it's been a while since I rolled a 2"), but because that is what usually happens with a large number of rolls. A short run of repeating numbers is almost invisible in a large set of results.

Thus, doing the calculations for a single random die roll is fairly straightforward, at least as far as the average value of the roll is concerned. There are also ways to measure "how random" something is, a way of saying that the results of 1d6+4 are "more random" than those of 5d2, whose results cluster more tightly around the average. The usual measure is the standard deviation: the larger the value, the more random the results. That requires more calculation than I want to go into today (I will return to this topic later). The only thing I ask you to remember is that, as a general rule, the fewer dice you roll, the more random the result. And one more addition: the more sides a die has, the more random the result, since there are more possible outcomes.
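For readers who want to peek ahead, here is a rough sketch in C (my own addition, assuming the two distributions enumerated earlier) of how the standard deviation could be computed; the larger value belongs to the roll that feels "more random":

#include <stdio.h>
#include <math.h>

/* Standard deviation of a discrete distribution given its values and probabilities. */
static double std_dev(const double *value, const double *prob, int n) {
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += value[i] * prob[i];
    for (int i = 0; i < n; i++)
        var += prob[i] * (value[i] - mean) * (value[i] - mean);
    return sqrt(var);
}

int main(void) {
    /* 1d6+4: totals 5..10, each with probability 1/6. */
    double v1[] = {5, 6, 7, 8, 9, 10};
    double p1[] = {1/6.0, 1/6.0, 1/6.0, 1/6.0, 1/6.0, 1/6.0};
    /* 5d2: totals 5..10 with probabilities 1/32, 5/32, 10/32, 10/32, 5/32, 1/32. */
    double v2[] = {5, 6, 7, 8, 9, 10};
    double p2[] = {1/32.0, 5/32.0, 10/32.0, 10/32.0, 5/32.0, 1/32.0};

    printf("std dev of 1d6+4: %.3f\n", std_dev(v1, p1, 6));   /* ~1.71 */
    printf("std dev of 5d2:   %.3f\n", std_dev(v2, p2, 6));   /* ~1.12 */
    return 0;
}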

How to Calculate Probability Using Counting

You may be wondering: how can we calculate the exact probability of getting a particular result? This actually matters for many games, because when a player rolls a die there is usually some outcome that is best for them. The answer: count two values. First, count the total number of possible outcomes of the roll (regardless of which result you want). Then count the number of favorable outcomes. Divide the second value by the first and you get the probability; multiply by 100 to get a percentage.

Examples:

Here's a very simple example. You want to roll 4 or higher and you roll a six-sided die once. The total number of outcomes is 6 (1, 2, 3, 4, 5, 6). Of these, 3 outcomes (4, 5, 6) are favorable. So to calculate the probability we divide 3 by 6 and get 0.5, or 50%.

Here's a slightly more complicated example. You want an even total when rolling 2d6. The total number of outcomes is 36 (6 for each die, and since one die does not affect the other, we multiply 6 results by 6 and get 36). The tricky part with this type of question is that it is easy to miscount. For example, there are actually two ways to get a total of 3 on 2d6: 1+2 and 2+1. They look the same, but the difference is which number shows on the first die and which on the second. You can imagine that the dice are different colors, say one red and one blue. Then count the ways to roll an even total: 2 (1+1), 4 (1+3), 4 (2+2), 4 (3+1), 6 (1+5), 6 (2+4), 6 (3+3), 6 (4+2), 6 (5+1), 8 (2+6), 8 (3+5), 8 (4+4), 8 (5+3), 8 (6+2), 10 (4+6), 10 (5+5), 10 (6+4), 12 (6+6). That makes 18 favorable outcomes out of 36, so, as in the previous case, the probability is 0.5 or 50%. Perhaps unexpected, but correct.
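The same count can be checked mechanically. Here is a short sketch in C (my own addition) that treats the two dice as distinguishable, exactly as the red/blue trick suggests:

#include <stdio.h>

int main(void) {
    int favorable = 0, total = 0;
    for (int red = 1; red <= 6; red++) {        /* first (red) die */
        for (int blue = 1; blue <= 6; blue++) { /* second (blue) die */
            total++;
            if ((red + blue) % 2 == 0)          /* even total */
                favorable++;
        }
    }
    printf("%d favorable out of %d = %.0f%%\n",
           favorable, total, 100.0 * favorable / total);
    return 0;
}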

Monte Carlo Simulation

What if you have too many dice for this kind of counting? For example, you want to know the probability of getting a total of 15 or more when rolling 8d6. For eight dice there are a LOT of individual results, and counting them by hand would take a very long time. Even if we found a good way to group the different series of dice rolls, it would still take very long. In this case the easiest way to calculate the probability is not by hand but with a computer. There are two ways to do that.

The first method gives an exact answer but requires a bit of programming or scripting. Essentially the computer runs through every possibility, counting the total number of combinations and the number of combinations that match the desired result, and then reports the answer. Your code might look something like this:

int wincount = 0, totalcount = 0;
for (int i = 1; i <= 6; i++) {
    for (int j = 1; j <= 6; j++) {
        for (int k = 1; k <= 6; k++) {
            // ... insert five more nested loops here, one per remaining die ...
            totalcount++;                    // count every possible combination
            if (i + j + k /* + ... */ >= 15)
                wincount++;                  // count the combinations totaling 15 or more
        }
    }
}
float probability = (float)wincount / totalcount;  // cast so this isn't integer division

If you don't know much about programming and just want an approximate rather than an exact answer, you can simulate the situation in Excel: roll 8d6 a few thousand times and take the resulting proportion. To roll 1d6 in Excel, you can use a formula like this:

=INT(RAND()*6)+1

There is a name for the approach where you don't know the answer and simply run many trials: Monte Carlo simulation. It is a great solution to fall back on when the probability you are trying to calculate is too complicated. The great thing is that we don't need to understand the underlying math, and we know the answer will be "pretty good" because, as we already know, the more rolls there are, the closer the result gets to the true average.
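Here is what such a simulation might look like outside of Excel, as a sketch in C (my own addition; the number of trials and the use of the standard rand() generator are arbitrary choices):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));
    const int trials = 100000;           /* number of simulated 8d6 rolls */
    int wins = 0;
    for (int t = 0; t < trials; t++) {
        int total = 0;
        for (int d = 0; d < 8; d++)
            total += rand() % 6 + 1;     /* one six-sided die */
        if (total >= 15)
            wins++;
    }
    /* With this many trials the estimate is usually within a fraction of a percent. */
    printf("estimated probability of 15+ on 8d6: %.3f\n", (double)wins / trials);
    return 0;
}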

How to combine independent trials

If you are asking about several repeated but independent trials, then the outcome of one roll does not affect the outcomes of the other rolls, and there is a simpler way to handle the situation.

How do you tell whether something is dependent or independent? Basically, if you can isolate each roll of a die (or series of rolls) as a separate event, it is independent. For example, if we want a total of 15 when rolling 8d6, that case cannot be split into several independent rolls: since the result is the sum of the values of all the dice, what comes up on one die affects what must come up on the others, because only the sum of all the values gives the required result.

Here is an example of independent rolls. You are playing a dice game with a series of rolls of a six-sided die. To stay in the game, your first roll must be 2 or higher, your second roll 3 or higher, the third 4 or higher, the fourth 5 or higher, and the fifth must be a 6. If all five rolls succeed, you win. Here each roll is independent. Yes, a failed roll affects the outcome of the whole game, but one roll does not affect another: a very lucky second roll, for example, does not make the following rolls any more likely to succeed. So we can consider the probability of each roll separately.

If you have separate, independent probabilities and want to know the probability that all of the events will occur, determine each individual probability and multiply them together. Put another way: if you use the conjunction "and" to describe several conditions (what is the probability of some random event and some other independent random event both occurring?), calculate the individual probabilities and multiply them.

Whatever you do, never add independent probabilities together. This is a common mistake. To see why it is wrong, imagine tossing a 50/50 coin and asking for the probability of getting heads twice in a row. Each side has a 50% chance of landing, so adding the two probabilities would give a 100% chance of getting heads, but we know that is not true, because the coin could come up tails twice in a row. Multiplying the two probabilities instead gives 50% * 50% = 25%, which is the correct probability of getting heads twice in a row.

Example

Let's go back to the six-sided die game in which you must first roll 2 or higher, then 3 or higher, and so on up to 6. What are the chances that all five rolls in a given series will be successful?

As stated above, these are independent trials, so we calculate the probability of each roll separately and then multiply them. The probability that the first roll succeeds is 5/6; the second, 4/6; the third, 3/6; the fourth, 2/6; the fifth, 1/6. Multiplying them all together gives about 1.5%... So winning this game is quite rare, and if you add this element to your game you will need a fairly large jackpot.
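The multiplication can be written out directly; this tiny C sketch (my own addition) just mirrors the arithmetic above:

#include <stdio.h>

int main(void) {
    /* Probability of surviving each of the five rolls: need 2+, 3+, 4+, 5+, then exactly 6. */
    double p = (5.0/6) * (4.0/6) * (3.0/6) * (2.0/6) * (1.0/6);
    printf("probability of winning: %.4f (about %.1f%%)\n", p, p * 100);  /* ~0.0154, ~1.5% */
    return 0;
}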

Negation

Here's another useful trick: sometimes it is difficult to calculate the probability that an event will occur, but easier to determine the probability that it will not occur.

For example, suppose we have another game: you roll 6d6, and if at least one die shows a 6, you win. What is the probability of winning?

In this case there are many combinations to consider. Perhaps a single 6 comes up, i.e. one die shows a 6 and the others show numbers from 1 to 5, and there are 6 possibilities for which die shows the 6. Then a 6 might come up on two dice, or three, or even more, and each case needs a separate calculation, so it is easy to get confused.

But there is another way to solve this problem: look at it from the other side. You lose if none of the dice shows a 6. In that case we have six independent trials, each with probability 5/6 (the die can show any number except 6). Multiply them and you get about 33%. So the probability of losing is about 1 in 3.

Therefore, the probability of winning is about 67% (or 2 in 3).

This example shows that if you calculate the probability that an event will not occur, you subtract the result from 100% to get the probability that it will. If the probability of winning is 67%, the probability of losing is 100% minus 67%, or 33%, and vice versa. If one probability is difficult to calculate but its opposite is easy, calculate the opposite and then subtract it from 100%.
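
A minimal Python sketch of the complement trick for the 6d6 game above:

# "At least one 6 on 6d6" via the complement: P(win) = 1 - P(no die shows a 6).
p_lose = (5 / 6) ** 6          # all six independent dice avoid the 6
p_win = 1 - p_lose
print(round(p_lose, 3), round(p_win, 3))  # about 0.335 and 0.665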

Combining conditions within a single independent trial

I said just above that you should never add probabilities across independent trials. Are there any cases where you can add probabilities? Yes, in one special situation.

If you want the probability of several unrelated favorable outcomes on a single trial, add the probabilities of the individual favorable outcomes. For example, the probability of rolling a 4, 5 or 6 on 1d6 is the sum of the probability of rolling a 4, the probability of rolling a 5, and the probability of rolling a 6. You can also think of it this way: if you use the conjunction "or" in a probability question (for example, what is the probability of one outcome or another outcome of a single random event?), calculate the individual probabilities and add them up.

Note that when you add up all possible outcomes of a game, the probabilities must sum to exactly 100%. If they don't, your calculation is wrong, which makes this a good way to double-check your work. For example, if you analyzed the probability of getting every hand in poker, the results should add up to exactly 100% (or at least very close to it: with a calculator you may see a small rounding error, but if you add the exact numbers by hand everything should match). If the sum is off, you most likely overlooked some combinations, or calculated the probabilities of some combinations incorrectly, and you need to recheck your work.
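
For instance, this small Python sketch applies the "or means add" rule to one d6 and also runs the sum-to-100% sanity check:

# "Or" within a single roll: add the separate outcome probabilities,
# and check that the probabilities of all outcomes of one d6 sum to 1.
from fractions import Fraction

faces = {face: Fraction(1, 6) for face in range(1, 7)}
p_4_5_or_6 = faces[4] + faces[5] + faces[6]
print(p_4_5_or_6)              # 1/2
print(sum(faces.values()))     # 1, i.e. all outcomes together make 100%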

Unequal probabilities

Until now we have assumed that every face of a die comes up equally often, because that is how dice work. But sometimes you face a situation where different outcomes have different chances of coming up. For example, one of the expansions of the card game "Nuclear War" has a spinner that determines the result of a rocket launch: most of the time it deals normal damage, a bit more or a bit less, but occasionally the damage is doubled or tripled, or the rocket blows up on the launch pad and hurts you, or some other event occurs. Unlike the spinner in "Chutes & Ladders" or "The Game of Life", the spinner in "Nuclear War" has unequal outcomes: some sections of the wheel are larger and the spinner stops on them much more often, while others are very small and it stops on them rarely.

At first glance this looks like the 1, 1, 1, 2, 2, 3 die we talked about earlier, something like a weighted 1d3: we would need to divide all of these sections into equal parts, find the smallest unit of measurement that everything is a multiple of, and then represent the situation as a d522 (or whatever), where many of the faces stand for the same result but there are more outcomes overall. That is one way to solve the problem, and it is technically feasible, but there is an easier way.

Let's go back to our standard six-sided die. We said that to calculate the average value of a roll for a normal die, you sum the values on all the faces and divide by the number of faces, but how exactly does that calculation work? There is another way to express it. For a six-sided die, the probability of each face coming up is exactly 1/6. Now we multiply each outcome (the value on the face) by the probability of that outcome (1/6 for each face here) and then sum the results. Adding (1*1/6) + (2*1/6) + (3*1/6) + (4*1/6) + (5*1/6) + (6*1/6), we get the same result (3.5) as in the calculation above. In fact, this is what we are doing every time: multiplying each outcome by the probability of that outcome.

Can we do the same calculation for the spinner in "Nuclear War"? Of course we can. All we need to do is find the probability of each outcome on the wheel, multiply each outcome by its probability, and sum everything up to get the average value.
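
As a small illustration, here is a Python sketch of that weighted average, using the 1-1-1-2-2-3 die mentioned earlier as a stand-in for a spinner with unequal sections:

# Expected value as sum(outcome * probability), using the 1-1-1-2-2-3 die
# as a stand-in for a spinner whose sections have unequal sizes.
from fractions import Fraction

outcome_probabilities = {1: Fraction(3, 6), 2: Fraction(2, 6), 3: Fraction(1, 6)}
expected_value = sum(outcome * p for outcome, p in outcome_probabilities.items())
print(expected_value, float(expected_value))  # 5/3, about 1.67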

Another example

This method of computing the average, multiplying each outcome by its individual probability, also works when the outcomes are equally likely but have different payoffs, for example when you roll a die and win more on some faces than on others. Take a casino-style game: you place a bet and roll 2d6. If you roll one of the three lowest numbers (2, 3, 4) or one of the four highest numbers (9, 10, 11, 12), you win an amount equal to your bet. The extreme numbers are special: if you roll a 2 or a 12, you win twice your bet. If any other number comes up (5, 6, 7, 8), you lose your bet. This is a pretty simple game, but what is the probability of winning?

Let's start by counting how many times you can win:

  • The total number of outcomes when rolling 2d6 is 36. How many of them are favorable?
  • There is 1 way to roll a two and 1 way to roll a twelve.
  • There are 2 ways each to roll a three and an eleven.
  • There are 3 ways to roll a four and 3 ways to roll a ten.
  • There are 4 ways to roll a nine.
  • Adding up all the options, we get 16 favorable outcomes out of 36.

So, under normal conditions, you will win 16 times out of 36 possible... the probability of winning is slightly less than 50%.

But in two of those 16 cases you win twice as much, which is like winning twice! If you play this game 36 times, betting $1 each time, and each possible outcome comes up exactly once, you will win a total of $18 (you actually win 16 times, but two of those count as two wins). If you play 36 times and win $18, doesn't that make it an even game?

Not so fast. If you count the number of ways you can lose, you get 20, not 18. If you play 36 times, betting $1 each time, you win a total of $18 when all the winning outcomes come up... but you lose a total of $20 when all 20 unfavorable outcomes occur! As a result you come out slightly behind: you lose an average of $2 net for every 36 games (or, put another way, an average of 1/18 of a dollar per game). Now you see how easy it is to slip up here and calculate the probability incorrectly!
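
If you want to see that bookkeeping laid out, here is a brute-force Python sketch that enumerates all 36 rolls of this game and totals the payout of a $1 bet:

# Enumerate all 36 rolls of 2d6 for the casino game above and total the
# payout of a $1 bet: +1 on 3/4/9/10/11, +2 on 2 or 12, -1 otherwise.
net = 0
for a in range(1, 7):
    for b in range(1, 7):
        total = a + b
        if total in (2, 12):
            net += 2          # special numbers pay double
        elif total in (3, 4, 9, 10, 11):
            net += 1          # ordinary winning numbers pay even money
        else:
            net -= 1          # 5, 6, 7, 8 lose the bet
print(net, net / 36)          # -2 overall, about -$0.056 (1/18 of a dollar) per game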

Permutations

So far we have assumed that the order of the numbers when throwing dice does not matter. Rolling 2+4 is the same as rolling 4+2. In most cases, we manually count the number of favorable outcomes, but sometimes this method is impractical and it is better to use a mathematical formula.

An example comes from the dice game "Farkle". At the start of each round you roll 6d6. If you are lucky enough to get one of each result, 1-2-3-4-5-6 (a "straight"), you receive a big bonus. What is the probability of that happening? There are many different ways to roll this combination!

The solution is as follows: one of the dice (and only one) must show a 1. How many ways can that happen? Six, since there are 6 dice and any one of them can show the 1. Take that die and set it aside. Now one of the remaining dice must show a 2, and there are five ways for that to happen. Take that die and set it aside too. Continuing, there are four ways for one of the remaining dice to show a 3, three ways to show a 4, two ways to show a 5, and finally one die is left that must show a 6 (no choice there). To count the number of favorable outcomes for a straight, we multiply all these separate, independent choices: 6x5x4x3x2x1 = 720, which seems like quite a lot of ways for this combination to come up.

To get the probability of a straight, we divide 720 by the total number of possible outcomes of rolling 6d6. What is that total? Each die has 6 faces, so we multiply 6x6x6x6x6x6 = 46656 (a much larger number!). Dividing 720 by 46656 gives a probability of roughly 1.5%. If you were designing this game, this would be useful to know so you could build the scoring system accordingly. Now we understand why Farkle gives such a big bonus for a straight: the situation is quite rare!
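
The same count in a couple of lines of Python, just to make the arithmetic easy to reproduce:

# Probability of rolling one of each number (a "straight") on 6d6:
# 720 favorable orderings out of 6**6 equally likely rolls.
from math import factorial

favorable = factorial(6)        # 6*5*4*3*2*1 = 720
total = 6 ** 6                  # 46656 possible rolls
print(favorable, total, favorable / total)  # about 0.0154 (roughly 1.5%)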

The result is interesting for another reason as well. The example shows how rarely, over a short stretch, outcomes actually occur at the frequency their probability suggests. Of course, if we rolled several thousand dice, each face would come up fairly often. But when we roll only six dice, it almost never happens that every face comes up exactly once! From this it is clear that it is foolish to expect a face that has not appeared yet to show up now "because we haven't rolled a 6 in a while, so one is due".

Listen, your random number generator is broken...

This brings us to a common misconception about probability: the assumption that all outcomes occur at the same frequency over a short period of time, which is simply not the case. If we roll dice only a few times, the frequency of each face will not be equal.

If you have ever worked on an online game with any kind of random number generator, you have most likely run into a situation where a player writes to technical support claiming that your random number generator is broken and is not producing random numbers. The player reached this conclusion because they just killed 4 monsters in a row and received 4 identical rewards, and those rewards are only supposed to drop 10% of the time, so this should almost never happen, which obviously means your random number generator is broken.

You do the math. 1/10*1/10*1/10*1/10 equals 1 in 10,000, which is indeed quite rare. And that is exactly what the player is telling you. So is there a problem?

It depends on the circumstances. How many players are on your server right now? Suppose you have a fairly popular game and 100,000 people play it every day. How many of them might kill four monsters in a row? Possibly all of them, several times a day, but let's assume that half of them are just trading items at auction, chatting on RP servers, or doing other in-game activities, so only half of them are actually hunting monsters. What is the chance that the same reward will drop four times in a row for someone? In that situation, you can expect it to happen several times a day, at the very least!
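
Here is a rough Python sketch of that reasoning. The simplifying assumption (mine, not the article's) is that each of the 50,000 monster-hunting players makes one independent four-kill attempt per day:

# If a 4-identical-rewards streak has a 1-in-10,000 chance per attempt,
# how likely is it that *somebody* sees it? 50,000 independent attempts per
# day is an illustrative assumption, not a figure from the game.
p_streak = 1 / 10_000
attempts_per_day = 50_000
p_nobody = (1 - p_streak) ** attempts_per_day
print(1 - p_nobody)                      # about 0.993: almost certain to happen to someone
print(p_streak * attempts_per_day)       # expected number of such streaks per day: about 5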

Incidentally, this is why it seems like every few weeks somebody wins the lottery, even though that somebody is never you or anyone you know. If enough people play every week, chances are there will be at least one lucky winner... but if you play the lottery yourself, the chance that you will win is smaller than the chance that you will be invited to work at Infinity Ward.

Cards and dependence

We've discussed independent events, such as dice rolls, and now know many powerful tools for analyzing the randomness in many games. Calculating probability gets a little more complicated when drawing cards from a deck, because each card we draw affects the cards remaining in the deck. If you have a standard 52-card deck and draw, say, the 10 of Hearts, and you want to know the probability that the next card is of the same suit, the probability has changed because you have already removed one heart from the deck. Each card you remove changes the probability of the next card drawn from the deck. Since the previous event affects the next one, we call this dependent probability.

Note that when I say "cards" I mean any game mechanic where there is a set of objects and you remove one of them without replacing it. A "deck of cards" here is the analogue of a bag of chips from which you draw one chip without putting it back, or an urn from which you draw colored marbles (I have actually never seen a game with an urn of colored marbles, but probability teachers seem to prefer that example for some reason).

Dependency Properties

I'd like to clarify that when it comes to cards, I'm assuming you draw cards, look at them, and remove them from the deck. Each of these actions is an important property.

If I had a deck of, say, six cards numbered 1 to 6, and I shuffled them, drew one card, and then reshuffled all six cards before each draw, it would be just like rolling a six-sided die: one result would not affect the next. It is only when I draw cards and do not replace them that drawing a 1 on the first draw increases the probability that I draw a 6 next time (and that probability keeps increasing until I eventually draw that card or reshuffle the deck).

The fact that we look at the cards is also important. If I remove a card from the deck and do not look at it, I have no additional information and the probability does not actually change. This may sound counterintuitive: how can simply flipping a card over magically change the probabilities? But it does, because you can only calculate the probability of the unknown items based on what you know. For example, if you shuffle a standard deck and reveal 51 cards and none of them is the Queen of Clubs, you know with 100% certainty that the remaining card is the Queen of Clubs. If you shuffle a standard deck and deal out 51 cards without looking at them, the probability that the remaining card is the Queen of Clubs is still 1/52. As you reveal each card, you gain more information.

Calculating probabilities for dependent events follows the same principles as for independent events, except that it is a little trickier because the probabilities change as cards are revealed. So instead of multiplying the same value over and over, you have to multiply many different values. What this really means is that we have to combine all the calculations we have already learned into one.

Example

You shuffle a standard 52-card deck and draw two cards. What is the probability that you draw a pair? There are several ways to calculate this, but perhaps the simplest is this: what is the probability that, having drawn one card, you can no longer draw a pair? That probability is zero; whatever the first card is, it can still be matched by the second. So the probability that we can still make a pair after drawing the first card is 100%.

What is the probability that the second card matches the first? There are 51 cards left in the deck, and 3 of them match the first card (it would have been 4 out of 52, but you already removed one of the matching cards when you drew the first one!), so the probability is 3/51, or 1/17. (So the next time the guy across the Texas Hold'em table says, "Cool, another pair? I'm feeling lucky today," you will know there is a pretty good chance he is bluffing.)
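
If you prefer brute force, this small Python sketch checks the 1/17 figure by enumerating every two-card hand (the 13-rank by 4-suit encoding is just my shorthand for a standard deck):

# Brute-force check: draw two cards from a 52-card deck (13 ranks x 4 suits)
# and count how often the ranks match.
from itertools import combinations

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
hands = list(combinations(deck, 2))
pairs = sum(1 for a, b in hands if a[0] == b[0])
print(pairs, len(hands), pairs / len(hands))  # 78 / 1326 = 1/17, about 5.9%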

What if we add two jokers, so the deck now has 54 cards, and we want to know the probability of drawing a pair? The first card might be a joker, in which case only one card in the deck matches it, not three. How do we find the probability here? We split it into cases, compute each possibility separately, and then combine them.

Our first card could be a joker or some other card. The probability of drawing a joker is 2/54, the probability of drawing some other card is 52/54.

If the first card is a joker (2/54), the probability that the second card matches it is 1/53. Multiplying the values (we can multiply because these are separate events and we want both of them to occur) gives 1/1431, less than one tenth of a percent.

If the first card is some other card (52/54), the probability of matching it with the second card is 3/53. Multiplying the values gives 78/1431 (about 5.5%).

What do we do with these two results? The cases do not overlap, and we want the probability of either of them, so we add the values! The final result is 79/1431 (still about 5.5%).

If we wanted to be sure of the answer, we could also calculate the probability of every other possible outcome: drawing a joker and not matching it, or drawing some other card and not matching it. Adding those to the probability of winning, we would get exactly 100%. I won't show the math here, but you can try it to double-check.
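
For reference, here is the whole 54-card calculation as a short Python sketch, including the sums-to-100% check:

# The 54-card (two jokers) pair calculation, split by what the first card is,
# plus the complement check that everything sums to 1.
from fractions import Fraction

p_joker_first = Fraction(2, 54) * Fraction(1, 53)    # joker, then the other joker
p_other_first = Fraction(52, 54) * Fraction(3, 53)   # ordinary card, then one of its 3 mates
p_pair = p_joker_first + p_other_first
print(p_pair)                                        # 79/1431, about 5.5%

p_no_pair = Fraction(2, 54) * Fraction(52, 53) + Fraction(52, 54) * Fraction(50, 53)
print(p_pair + p_no_pair)                            # 1, i.e. exactly 100%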

Monty Hall Paradox

This brings us to a rather famous paradox that often confuses people: the Monty Hall paradox. It is named after Monty Hall, the host of the TV show "Let's Make a Deal". If you have never seen the show, it was the opposite of "The Price Is Right". On "The Price Is Right", the host (formerly Bob Barker, now it's... Drew Carey? Anyway...) is your friend. He wants you to win money or cool prizes, and he tries to give you every opportunity to win, as long as you can guess how much the sponsors' items are actually worth.

Monty Hall behaved differently. He was like Bob Barker's evil twin. His goal was to make you look like an idiot on national television. If you were on the show, he was your opponent; you played against him, and the odds were in his favor. Maybe I'm being too harsh, but when the chance of being picked as a contestant seems directly proportional to how ridiculous your costume is, it's hard to reach any other conclusion.

One of the show's most famous set pieces was this: there are three doors in front of you, called Door Number 1, Door Number 2, and Door Number 3. You may choose one door... for free! Behind one of the doors is a magnificent prize, for example a new car. Behind the other two doors there is no prize; those doors are worthless. Their purpose is to humiliate you, so it's not that there is nothing behind them at all, there is something behind them that looks silly, such as a goat or a giant tube of toothpaste or something, something that is definitely not a new car.

You choose one of the doors, and Monty is about to open it to reveal whether you have won... but wait. Before we find out, let's look at one of the doors you did not choose. Since Monty knows which door hides the prize, and there is only one prize and two doors you did not pick, he can always open a door with no prize behind it, no matter what. "You're choosing Door Number 3? Then let's open Door Number 1 and show that there was no prize behind it." And now, out of sheer generosity, he offers you the chance to trade your Door Number 3 for whatever is behind Door Number 2. This is where the probability question arises: does the option to switch doors increase your probability of winning, decrease it, or leave it unchanged? What do you think?

The correct answer: switching to the other door increases your probability of winning from 1/3 to 2/3. This is counterintuitive. If you have never run into this paradox before, you are probably thinking: wait, did we magically change the probability just by opening a door? But as we saw in the card example above, that is exactly what happens when we gain more information. Obviously the probability of winning with your first pick is 1/3, and I think everyone will agree with that. When a door is opened, this does not change the probability of your first pick at all, it is still 1/3, but it means the probability that the other door is the right one is now 2/3.

Look at it from a different angle. You pick a door; your probability of winning is 1/3. Now I offer to let you swap your one door for both of the other doors, which is essentially what Monty Hall is proposing. Of course, he opens one of them to show there is no prize behind it, but he can always do that, so the offer really doesn't change anything. Of course you should take the other door!

If the issue is still not quite clear and you need a more convincing explanation, click this link to open a nice little Flash application that lets you explore the paradox in more detail. You can start with about 10 doors and gradually work down to a game with three doors; there is also a simulator where you pick any number of doors from 3 to 50 and play, or run several thousand simulations and see how many times you would have won.
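
If you would rather run the experiment yourself, here is a minimal Monte Carlo sketch in Python of the classic version of the game:

# Monte Carlo sketch of the classic Monty Hall game: the host always opens a
# goat door you didn't pick, and switching wins about 2/3 of the time.
import random

def play(switch: bool) -> bool:
    prize = random.randrange(3)
    choice = random.randrange(3)
    # Host opens a door that is neither your choice nor the prize.
    # (If you picked the prize, either goat door works; which one he opens
    # doesn't change the stay/switch win rates.)
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # about 0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # about 0.67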

A remark from higher mathematics teacher and game balance specialist Maxim Soldatov, which Schreiber's original of course did not have, but without which it is quite hard to grasp this magical transformation:

You choose a door, one of three, and the probability of "winning" is 1/3. Now you have two strategies: after a wrong door is opened, either change your choice or keep it. If you keep your choice, the probability stays 1/3, because the choice is made only at the first stage and you have to guess right immediately. If you switch, you win whenever you initially chose a wrong door (then the other wrong door is opened, the remaining door is the right one, you change your mind and take it).
The probability of picking a wrong door at the start is 2/3, so by switching you double your probability of winning.

And again about the Monty Hall paradox

As for the show itself, Monty Hall knew this, because even if his contestants weren't good at math, he understood it perfectly well. Here's what he did to change the game a little. If you picked the door with the prize, which happens with probability 1/3, he always offered you the chance to switch. After all, if you picked the car and then traded it away for a goat, you would look pretty stupid, which is exactly what he wanted, since he was kind of an evil guy. But if you picked a door with no prize, he would offer you the switch only half the time; the other half he would just show you your new goat and you would leave the stage. Let's analyze this new game, in which Monty Hall can choose whether or not to offer you the switch.

Suppose he follows this algorithm: if you pick the door with the prize, he always offers the switch; otherwise there is a 50/50 chance that he offers the switch or simply shows you the goat. What is your probability of winning?

In one case out of three, you immediately pick the door with the prize, and the host offers you the switch.

Of the remaining two cases out of three (where you initially pick a door without the prize), the host offers the switch half the time and does not the other half. Half of 2/3 is 1/3, so in one case out of three you just get the goat, in one case out of three you pick the wrong door and the host offers the switch, and in one case out of three you pick the right door and he offers the switch.

If the host offers the switch, we already know that the one-case-in-three where he just hands us a goat and sends us away has not happened. That is useful information, because it means our chances of winning have changed. Of the two cases out of three where we are given the choice, in one we have guessed right and in the other we have guessed wrong, so if we are offered the choice at all, our probability of winning is 50/50, and there is no mathematical advantage to staying or switching.

As in poker, this is now a psychological game, not a mathematical one. Did Monty offer you the choice because he thinks you're a sucker who doesn't know that switching is the "right" move, and that you will stubbornly hold on to your pick because it is psychologically harder to pick the car and then lose it? Or does he think you're smart and will switch, and he is offering the chance precisely because he knows you guessed right the first time and will take the bait and fall into the trap? Or maybe he is being uncharacteristically generous and nudging you toward something in your own interest, because he hasn't given away a car in a while and the producers are telling him the audience is getting bored and he had better give away a big prize soon so the ratings don't drop?

In this way Monty manages to offer the choice (sometimes) while keeping the overall probability of winning at 1/3. Remember, the probability that you lose outright is 1/3. The probability that you guess right immediately is 1/3, and 50% of those times you win (1/3 x 1/2 = 1/6). The probability that you guess wrong at first but are then offered the switch is 1/3, and 50% of those times you win (also 1/6). Add these two mutually exclusive ways of winning and you get 1/3, so whether you stay or switch, your overall probability of winning across the whole game is 1/3... no higher than if you simply guessed a door and the host showed you what was behind it with no chance to switch! The point of offering the switch is not to change the probability, but to make the decision process more fun to watch on television.
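
Here is a Monte Carlo sketch of this "evil Monty" variant. Following the reasoning above, the simulated contestant flips a coin when offered a switch, since neither option has a mathematical edge:

# Sketch of the variant where Monty always offers a switch when you picked
# the prize, and only half the time when you didn't (otherwise you just lose).
import random

trials = 200_000
wins = 0
offers = 0
wins_given_offer = 0
for _ in range(trials):
    picked_prize = random.randrange(3) == 0        # 1/3 chance the first pick is right
    offered = True if picked_prize else random.random() < 0.5
    if not offered:
        continue                                   # shown the goat, no win
    offers += 1
    stay = random.random() < 0.5                   # contestant has no edge, flips a coin
    won = picked_prize if stay else not picked_prize
    wins += won
    wins_given_offer += won
print(wins / trials)               # about 1/3 overall
print(wins_given_offer / offers)   # about 1/2 once a switch is offered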

Incidentally, this is one of the reasons poker can be so interesting: in most formats, cards are gradually revealed between betting rounds (for example, the flop, turn, and river in Texas Hold'em), and if you have one probability of winning at the start of the hand, it changes after each betting round as more cards are revealed.

Boy and girl paradox

This brings us to another famous paradox that tends to puzzle everyone: the boy-girl paradox. It is the only thing I am writing about today that is not directly related to games (although I suppose that just means I should encourage you to create a game mechanic around it). It is more of a puzzle, but an interesting one, and to solve it you need to understand the conditional probability we discussed above.

The problem: I have a friend with two children, at least one of whom is a girl. What is the probability that the other child is also a girl? Assume that in any family there is a 50/50 chance of having a girl or a boy, and that this holds independently for each child. (In reality some men have more X-chromosome or more Y-chromosome sperm, so the probability shifts slightly if you know one child is a girl; there are also other conditions, such as hermaphroditism. But for this problem we will ignore all that and assume each birth is an independent event with boys and girls equally likely.)

Since we are talking about a 1/2 chance, we intuitively expect the answer to be 1/2 or 1/4, or some other tidy number that is a multiple of two. But the answer is 1/3. Wait, why?

The difficulty is that the information we have reduces the number of possibilities. Suppose the parents are Sesame Street fans and, regardless of sex, named their children A and B. Under normal conditions there are four equally likely possibilities: A and B are two boys, A and B are two girls, A is a boy and B is a girl, A is a girl and B is a boy. Since we know at least one child is a girl, we can eliminate the possibility that A and B are both boys, which leaves three (still equally likely) possibilities. If all the possibilities are equally likely and there are three of them, each has probability 1/3. In only one of those three are both children girls, so the answer is 1/3.
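
The same counting argument fits in a few lines of Python, if you want to see it mechanically:

# Enumerate the four equally likely (sex of A, sex of B) families, keep those
# with at least one girl, and see what fraction have two girls.
from itertools import product

families = list(product("BG", repeat=2))                 # BB, BG, GB, GG
at_least_one_girl = [f for f in families if "G" in f]    # BG, GB, GG
both_girls = [f for f in at_least_one_girl if f == ("G", "G")]
print(len(both_girls), len(at_least_one_girl))           # 1 out of 3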

And again about the boy and girl paradox

The solution gets even more counterintuitive. Imagine I tell you that my friend has two children, and one of them is a girl who was born on a Tuesday. Assume that under normal conditions a child is equally likely to be born on any of the seven days of the week. What is the probability that the other child is also a girl? You might think the answer is still 1/3; what difference does Tuesday make? But even here intuition fails us. The answer is 13/27, which is not just unintuitive but downright strange. What is going on?

Tuesday actually changes the probability, because we don't know which child was born on Tuesday, or whether perhaps both were. In that case we use the same logic as above: we count all the possible combinations in which at least one child is a girl born on a Tuesday. As in the previous example, calling the children A and B, the combinations are as follows:

  • A is a girl who was born on Tuesday, B is a boy (in this situation there are 7 possibilities, one for each day of the week when a boy could be born).
  • B is a girl born on Tuesday, A is a boy (also 7 possibilities).
  • A is a girl who was born on Tuesday, B is a girl who was born on another day of the week (6 possibilities).
  • B is a girl who was born on Tuesday, A is a girl who was not born on Tuesday (also 6 possibilities).
  • A and B are both girls born on Tuesday (1 possibility; be careful not to count this case twice).

Adding these up, we get 27 different, equally likely combinations of children and birth days that include at least one girl born on a Tuesday. Of these, 13 are combinations in which both children are girls, which gives 13/27. This, too, seems completely illogical, as if the problem was invented purely to cause headaches. If you are still puzzled by this example, the game theorist Jesper Juul has a good explanation of it on his website.
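
Again, brute force confirms the count. In this Python sketch, the weekday index 2 is an arbitrary stand-in for Tuesday:

# Enumerate (sex, weekday) for both children: 14 x 14 = 196 equally likely
# families. Keep those with at least one Tuesday girl, then count two-girl ones.
from itertools import product

children = list(product("BG", range(7)))       # (sex, day); day 2 stands for Tuesday
families = list(product(children, repeat=2))   # 196 combinations
tuesday_girl = [f for f in families if ("G", 2) in f]
both_girls = [f for f in tuesday_girl if f[0][0] == "G" and f[1][0] == "G"]
print(len(both_girls), len(tuesday_girl))      # 13 out of 27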

If you are currently working on a game...

If randomness plays a part in the game you are designing, this is a great moment to analyze it. Pick one element you want to analyze. First ask yourself what you expect the probability of that element to be, what you think it should be in the context of the game. For example, if you are making an RPG and wondering what the probability should be that the player defeats a monster in battle, ask yourself what win percentage feels right to you. Players of console RPGs usually get very upset when they lose, so it is best if they do not lose often... maybe 10% of the time or less? If you are an RPG designer you probably know better than I do, but you need a basic idea of what the probability should be.

Then ask yourself whether the randomness is dependent (like cards) or independent (like dice). Work out all the possible outcomes and their probabilities. Make sure the probabilities sum to 100%. And finally, compare the results you calculated with the results you expected. Are the die rolls or card draws behaving the way you intended, or do the values need adjusting? And if you do find something that needs adjusting, you can use the same calculations to determine exactly how much to adjust it!

Homework assignment

Your "homework" this week will help you sharpen your probability skills. Here are two dice games and a card game for you to analyze using probability, plus a strange game mechanic I once developed that will let you try out the Monte Carlo method.

Game #1 - Dragon Dice

This is a dice game that my colleagues and I once invented (thanks to Jeb Havens and Jesse King!), and it blows people's minds with its probabilities. It is a simple casino game called "Dragon Dice", a dice-gambling contest between you and the house. You are given a normal 1d6. The goal is to roll a higher number than the house. The house gets a non-standard 1d6: the same as yours, except that one face shows a Dragon instead of the 1 (so the house's die is Dragon-2-3-4-5-6). If the house rolls the Dragon, it automatically wins and you lose. If you both roll the same number, it is a tie and you both roll again. Whoever rolls the higher number wins.

Of course, things are not entirely in the player's favor, since the house has the advantage of the Dragon face. But is that really true? That is what you will have to calculate. First, though, check your intuition. Suppose the payout is 2 to 1: if you win, you keep your bet and receive twice its value. For example, if you bet $1 and win, you keep that dollar and get $2 more, for a total of $3. If you lose, you only lose your bet. Would you play? Intuitively, do you feel the odds are better than 2 to 1, or worse? In other words, over 3 games on average, do you expect to win more than once, less than once, or exactly once?

Once you have your intuition sorted out, do the math. There are only 36 possible combinations of the two dice, so you can count them all without any trouble. If you are unsure about that 2-to-1 payout, think of it this way: suppose you played the game 36 times (betting $1 each time). Each win earns you $2, each loss costs you $1, and a tie changes nothing. Add up all the likely wins and losses and decide whether you come out some dollars ahead or behind. Then ask yourself how close your intuition was. And then realize what a villain I am.

And yes, if you have already thought about this question: I am deliberately messing with you by misrepresenting how dice games actually work, but I am sure you can overcome that obstacle with a little thought. Try to solve the problem yourself. I will post all the answers here next week.

Game #2 - Roll for Luck

This is a gambling dice game called "Roll for Luck" (also known as "Birdcage", because sometimes the dice are not rolled but tumbled in a big wire cage, reminiscent of a Bingo cage). It is a simple game that boils down to this: bet, say, $1 on a number from 1 to 6, then roll 3d6. For each die that shows your number, you win $1 (and keep your original bet). If your number does not come up on any of the dice, the house takes your dollar and you get nothing. So if you bet on 1 and roll a 1 on all three dice, you win $3.

Intuitively, it seems like this game offers even odds. Each die is an individual 1-in-6 chance of winning, so adding up all three gives a 3-in-6 chance. But, of course, remember that you would be adding across three separate dice, and you are only allowed to add when talking about separate winning outcomes of the same die. Here you need to multiply instead.

Once you enumerate all the possible outcomes (probably easier in Excel than by hand, since there are 216 of them), the game still looks like it has even odds at first glance. In reality, though, the house still has the better chance of winning. How much better? Specifically, how much money do you expect to lose, on average, per round of play? All you need to do is add up the wins and losses of all 216 results and divide by 216, which should be easy enough... But as you will see, there are a few traps to fall into, which is why I am telling you now: if you think this game offers even odds, you have it all wrong.

Game #3 - 5 Card Stud Poker

If you have warmed up on the previous games, let's test what we know about conditional probability with this card game. Imagine a poker game with a 52-card deck, specifically five-card stud, where each player receives only 5 cards. No discarding, no drawing replacements, no shared cards: you just get your 5 cards.

A royal flush is 10-J-Q-K-A of the same suit in one hand. There are four suits, so there are only four possible royal flushes. Calculate the probability that you are dealt one of them.

One warning: remember that you can draw these five cards in any order. You might draw the ace first, or the ten; it doesn't matter. So when doing the calculation, keep in mind that if you count the cards as dealt in order, there are actually more than four ways to be dealt a royal flush!

Game #4 - IMF Lottery

The fourth problem cannot be solved so easily with the methods we covered today, but you can simulate the situation quite easily with a bit of programming or with Excel. This problem is your chance to practice the Monte Carlo method.

I mentioned earlier the game "Chron X", which I once worked on, and it had one very interesting card: the IMF Lottery. Here is how it worked. You put it into play, and at the end of each round there was a 10% chance that the card would leave play and a random player would receive 5 resources for every chip on the card. The card entered play with no chips, but each round it stayed in play, it gained one chip at the start of the next round. So there was a 10% chance that, after the round in which you played it, the card would leave the game and nobody would get anything. If that did not happen (90% chance), there was a 10% chance (actually 9%, since it is 10% of 90%) that it would leave the game in the next round and someone would receive 5 resources. If it left one round after that (10% of the remaining 81%, so an 8.1% probability), someone would receive 10; another round, 15; another, 20; and so on. The question: what is the expected value of the number of resources this card pays out when it finally leaves play?

Normally we would try to solve this by finding the probability of each outcome and multiplying it by the value of that outcome. There is a 10% chance you get 0 (0.1*0 = 0). There is a 9% chance you get 5 resources (0.09*5 = 0.45). There is an 8.1% chance you get 10 (0.081*10 = 0.81 resources toward the expected value). And so on. Then we would add it all up.

And now the problem is obvious: there is always a chance that the card will not leave play, so it could stay in the game forever, for an infinite number of rounds, and there is no way to write out every possibility. The methods we have learned today do not let us deal with this infinite recursion, so we will have to approximate the answer artificially.

If you are comfortable enough with programming, write a program to simulate this card. You need a loop that sets a variable to zero, draws a random number, and with a 10% chance exits the loop; otherwise it adds 5 to the variable and repeats. When it finally exits the loop, increase the total number of trial runs by 1 and add the variable's final value to the total number of resources. Then reset the variable and start again. Run this several thousand times. At the end, divide the total resources by the total number of runs: that is your Monte Carlo expected value. Run the whole program a few times to check that the numbers you get are roughly the same; if the scatter is still large, increase the number of repetitions in the outer loop until the results start to agree. You can be reasonably sure that whatever number you converge on is approximately correct.
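
A minimal Python version of that loop, as one possible way to write it (the variable names are mine), looks like this:

# Monte Carlo sketch of the IMF Lottery card: each round there is a 10% chance
# the card leaves play; otherwise the payout grows by 5. The average over many
# runs is the Monte Carlo estimate of the expected payout.
import random

def one_run() -> int:
    payout = 0
    while random.random() >= 0.1:   # 90% chance the card stays another round
        payout += 5                 # one more chip means 5 more resources
    return payout

runs = 100_000
total = sum(one_run() for _ in range(runs))
print(total / runs)                 # the Monte Carlo estimate of the expected payout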

If you are not familiar with programming (and even if you are), here is a short exercise to warm up your Excel skills. If you are a game designer, Excel skills are never wasted.

The IF and RAND functions will be very useful here. RAND takes no arguments; it simply spits out a random decimal number between 0 and 1. We usually combine it with FLOOR and some addition and subtraction to simulate a die roll, as I mentioned earlier. In this case, though, we only need a 10% chance that the card leaves play, so we can simply check whether the RAND value is less than 0.1 and not worry about anything else.

IF takes three arguments, in order: a condition that is either true or false, the value to return if the condition is true, and the value to return if the condition is false. So the following formula returns 5 ten percent of the time and 0 the other 90% of the time:
=IF(RAND()<0.1,5,0)

There are many ways to set this up, but I would use this formula for the cell representing the first round; let's say it is cell A1:

=IF(RAND()<0.1,0,-1)

Here I use a negative value to mean "this card has not yet left play and has not paid out any resources". So if the first round ends and the card leaves play, A1 is 0; otherwise it is -1.

For the next cell representing the second round:

=IF(A1>-1, A1, IF(RAND()<0.1,5,-1))

So if the card left play at the end of the first round, A1 is 0 (the number of resources) and this cell simply copies that value. Otherwise A1 is -1 (the card has not left play yet), and this cell keeps rolling: 10% of the time it returns 5 resources, and the rest of the time its value is still -1. Applying this formula to further cells gives us further rounds (when copying it across, raise the payout value by 5 for each later round, to 10, 15, 20 and so on, so it matches the number of chips on the card), and whichever cell you end up in gives you the final result (or -1 if the card never left play in all the rounds you simulated).

Take that row of cells, which represents a single trial of the card, and copy and paste it a few hundred (or a few thousand) times. We cannot run an infinite number of trials in Excel (the spreadsheet has a limited number of cells), but we can at least cover most of the cases. Then pick one cell to hold the average of the results of all the trials (Excel helpfully provides the AVERAGE() function for this).

On Windows, at least, you can press F9 to recalculate all the random numbers. As before, do this a few times and check whether the values you get are similar. If the spread is too wide, double the number of runs and try again.

Unsolved problems

If you happen to have a degree in probability and the problems above seem too easy for you, here are two problems I have been scratching my head over for years, but which, alas, I am not good enough at math to solve. If you happen to know a solution, please post it in the comments; I will be glad to read it.

Unsolved Problem #1: IMF Lottery

The first unsolved problem is the previous homework assignment. I can easily apply the Monte Carlo method (in C++ or Excel) and be confident of the answer to "how many resources will the player receive", but I do not know how to produce an exact, provable mathematical answer (it is an infinite series). If you know the answer, post it here... after checking it with Monte Carlo, of course.

Unsolved Problem #2: Sequences of face cards

This problem (and again, it goes well beyond the scope of this blog) was given to me by a gamer friend more than 10 years ago. He noticed something curious while playing blackjack in Vegas: drawing cards from an 8-deck shoe, he saw ten face cards in a row (a face or ten-value card is a 10, Jack, Queen or King, so a standard 52-card deck has 16 of them, and a 416-card shoe has 128). What is the probability that this shoe contains at least one run of ten or more face cards? Assume the shoe was shuffled fairly, into a random order. (Or, if you prefer: what is the probability that no run of ten or more face cards appears anywhere?)

We can simplify the problem. Take a sequence of 416 items, each of which is a 0 or a 1. There are 128 ones and 288 zeros scattered randomly through the sequence. In how many ways can 128 ones be randomly interspersed among 288 zeros, and in how many of those arrangements is there at least one group of ten or more consecutive ones?

Every time I sat down to solve this problem, it seemed easy and obvious, but as soon as I got into the details, it fell apart and began to look simply impossible. So don't rush to blurt out an answer: sit down, think hard, study the conditions, try plugging in real numbers, because everyone I have talked to about this problem (including several graduate students who work in the field) reacted roughly the same way: "It's completely obvious... no, wait, it's not obvious at all." This is the one case where I don't have a method for counting all the options. I could certainly brute-force it with a computer algorithm, but I would be much more curious to know the mathematical way to solve it.
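
For what it's worth, here is the brute-force/Monte Carlo approach mentioned above as a Python sketch. It only estimates the probability; the exact closed form remains the open question:

# Estimate (not prove) the probability of a run of ten or more face cards:
# shuffle 128 ones among 288 zeros and check for a run of ten or more ones.
import random

def has_long_run(sequence, run_length=10):
    run = 0
    for value in sequence:
        run = run + 1 if value == 1 else 0
        if run >= run_length:
            return True
    return False

shoe = [1] * 128 + [0] * 288
trials = 20_000            # increase for a tighter estimate
hits = 0
for _ in range(trials):
    random.shuffle(shoe)
    hits += has_long_run(shoe)
print(hits / trials)       # an estimate only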

Translation - Y. Tkachenko, I. Mikheeva


