Mathematical logic: subject, structure and basic principles of operations

23.09.2019

Introduction

Study questions:

          Concepts and definitions of mathematical logic.

          Basic operations of propositional algebra.

          Laws and consequences of Boolean algebra.

Conclusion

Introduction

The theoretical basis for the construction of computers is provided by special mathematical disciplines. One of them is the algebra of logic, or Boolean algebra (after George Boole, the English mathematician of the 19th century who founded the discipline). Its apparatus is widely used to describe computer circuits and to design and optimize them.

1. Concepts and definitions of mathematical logic.

Logic is the science that studies the laws and forms of thinking; the doctrine of methods of reasoning and proof.

Mathematical logic (theoretical logic, symbolic logic) is a branch of mathematics that studies proofs and questions of the foundations of mathematics. "The subject of modern mathematical logic is varied." According to the definition of P. S. Poretsky, "mathematical logic is logic by subject, mathematics by method." According to the definition of N. I. Kondakov, "mathematical logic is the second stage, after traditional logic, in the development of formal logic, applying mathematical methods and a special apparatus of symbols and exploring thinking with the help of calculi (formalized languages)." This definition agrees with that of S. K. Kleene: mathematical logic is "the logic developed with the help of mathematical methods." Likewise, A. A. Markov defines modern logic as "an exact science that applies mathematical methods." These definitions do not contradict but complement one another.

The use of mathematical methods in logic becomes possible when judgments are formulated in some precise language. Such precise languages have two sides: syntax and semantics. Syntax is the set of rules for constructing the objects of the language (usually called formulas). Semantics is the set of conventions that describe our understanding of formulas (or some of them) and allow us to consider some formulas true and others not.

Mathematical logic studies the logical connections and relationships underlying logical (deductive) inference, using the language of mathematics.

We learn the laws of the world, the essence of objects, and what they have in common through abstract thinking. The main forms of abstract thinking are concepts, judgments and inferences.

A concept is a form of thinking that reflects the essential features of an individual object or of a class of homogeneous objects. In language, concepts are expressed by words.

The scope (extension) of a concept is the set of objects, each of which has the attributes that make up the content of the concept. General and singular concepts are distinguished.

The following relations between concepts are distinguished by extension:

    identity, or coincidence of extensions: the extension of one concept is equal to the extension of the other;

    subordination, or inclusion of extensions: the extension of one concept is fully included in the extension of the other;

    exclusion of extensions: there is not a single object that belongs to both extensions;

    intersection, or partial coincidence of extensions;

    co-subordination of extensions: the extensions of two concepts, which exclude each other, are both included in the extension of a third.

A judgment is a form of thinking in which something is affirmed or denied about objects, their attributes or their relations.

An inference is a form of thinking by which, from one or more judgments called premises, we obtain a judgment-conclusion according to certain rules of inference.

Algebra, in the broad sense of the word, is the science of general operations, similar to addition and multiplication, that can be performed not only on numbers but also on other mathematical objects.

The algebra of logic (propositional algebra, Boolean algebra) is a branch of mathematical logic that studies logical operations on statements. Most often it is assumed (so-called two-valued or binary logic, in contrast to, for example, ternary logic) that statements can only be true or false.

Examples of algebras: algebra of natural numbers, algebra of rational numbers, algebra of polynomials, algebra of vectors, algebra of matrices, algebra of sets, etc. The objects of the algebra of logic or Boolean algebra are propositions.

A statement (proposition) is any sentence of any language whose content can be determined to be true or false.

Any statement is either true or false; it cannot be both at the same time.

In natural language, utterances are expressed in declarative sentences. Exclamatory and interrogative sentences are not statements.

Statements can be expressed using mathematical, physical, chemical and other signs. From two numerical expressions, statements can be formed by connecting them with equality or inequality signs.

A statement is called simple (elementary) if no part of it is itself a statement.

A statement made up of simple statements is called composite (compound).

Simple statements in the algebra of logic are denoted by capital Latin letters:

A = (Aristotle is the founder of logic),

B = (Bananas grow on apple trees).

The justification of the truth or falsity of simple statements is decided outside the algebra of logic. For example, the truth or falsity of the statement "The sum of the angles of a triangle is 180 degrees" is established by geometry: in Euclidean geometry this statement is true, while in Lobachevsky's geometry it is false.

A true statement is assigned the value 1, a false one the value 0. Thus, A = 1, B = 0.

The algebra of logic abstracts from the semantic content of statements. It is interested in only one fact: whether a given statement is true or false, and this makes it possible to determine the truth or falsity of compound statements by algebraic methods.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION

Federal State Budgetary Educational Institution of Higher Professional Education

LIPETSK STATE PEDAGOGICAL UNIVERSITY

Faculty of Physics, Mathematics and Computer Science

Department of Mathematics


Control work on the topic:

"History of the Development of Mathematical Logic"


Performed by:

2nd year student

group MF-2

Ponamareva Victoria Sergeevna

Scientific adviser:

Candidate of Physical and Mathematical Sciences, Associate Professor

Ershova Alexandra Alekseevna


Lipetsk, 2014



Introduction

§1. The history of the emergence of mathematical logic

§2. Application of mathematical logic

§3. Mathematical logic in engineering

§4. Mathematical logic in cryptography

§5. Mathematical logic in programming

Conclusion

Bibliography



Introduction


§1. The history of the emergence of mathematical logic


Mathematical logic is closely related to logic and owes its origin to it. The foundations of logic, the science of the laws and forms of human thinking (hence one of its names, formal logic), were laid by the greatest ancient Greek philosopher Aristotle (384-322 BC), who in his treatises thoroughly studied the terminology of logic, analyzed the theory of inferences and proofs in detail, described a number of logical operations, and formulated the basic laws of thinking, including the laws of contradiction and of the excluded middle. Aristotle's contribution to logic is very great; not without reason is its other name Aristotelian logic. Aristotle himself noticed that there is much in common between the science he created and mathematics (at that time called arithmetic). He tried to combine the two sciences, namely to reduce reflection, or rather inference, to calculation on the basis of initial premises. In one of his treatises, Aristotle came close to one of the branches of mathematical logic, the theory of proofs.

Later, many philosophers and mathematicians developed particular provisions of logic and sometimes even outlined the contours of the modern propositional calculus, but the closest approach to the creation of mathematical logic came in the second half of the 17th century from the outstanding German scientist Gottfried Wilhelm Leibniz (1646-1716), who pointed out ways of translating logic "from the verbal realm, full of uncertainties, to the realm of mathematics, where the relations between objects or statements are determined with absolute precision." Leibniz even hoped that in the future philosophers, instead of arguing fruitlessly, would take up paper and work out which of them was right. In his works Leibniz also touched upon the binary number system.

It should be noted that the idea of using two characters to encode information is very old. The Australian aborigines counted in twos, and some hunter-gatherer tribes of New Guinea and South America also used a binary counting system. In some African tribes, messages are transmitted by drums as combinations of voiced and muted beats. A familiar example of two-character coding is Morse code, where the letters of the alphabet are represented by particular combinations of dots and dashes.

After Leibniz, many eminent scientists conducted research in this area, but real success here came to the self-taught English mathematician George Boole (1815-1864), whose determination knew no bounds. The financial situation of his parents (his father was a shoemaker) allowed him to finish only an elementary school for the poor. After some time Boole, having changed several professions, opened a small school where he taught. He devoted much time to self-education and soon became interested in the ideas of symbolic logic. In 1847 Boole published the work "The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning", and in 1854 his main work, "An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities", appeared.

Boole invented a kind of algebra: a system of notation and rules applicable to all kinds of objects, from numbers and letters to sentences. Using this system, he could encode propositions (statements to be proved true or false) with the symbols of his language and then manipulate them the way numbers are manipulated in mathematics. The basic operations of Boolean algebra are conjunction (AND), disjunction (OR) and negation (NOT).
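As a small illustration (a sketch in Python, not Boole's own notation), the three basic operations can be tabulated directly, with True and False playing the role of Boole's 1 and 0:

from itertools import product

# Enumerate all combinations of truth values for two propositions A and B
# and print the results of NOT, AND and OR.
for a, b in product([False, True], repeat=2):
    print(f"A={a!s:5} B={b!s:5}  NOT A={(not a)!s:5}  "
          f"A AND B={(a and b)!s:5}  A OR B={(a or b)!s:5}")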

After some time, it became clear that Boole's system is well suited for describing electrical switching circuits. Current in a circuit can either flow or not, just as a statement can be either true or false. And a few decades later, already in the 20th century, scientists combined the mathematical apparatus created by George Boole with the binary number system, thereby laying the foundation for the development of a digital electronic computer.

Individual provisions of Boole's work were touched upon to some extent, both before and after him, by other mathematicians and logicians. Today, however, it is the works of George Boole that are counted among the mathematical classics in this area, and he himself is rightly considered the founder of mathematical logic and, in particular, of its most important branches: the algebra of logic (Boolean algebra) and the algebra of propositions.

A great contribution to the development of logic was also made by the Russian scientists P. S. Poretsky (1846-1907) and I. I. Zhegalkin (1869-1947).

In the 20th century a huge role in the development of mathematical logic was played by D. Hilbert (1862-1943), who proposed a program for the formalization of mathematics, connected with the development of the foundations of mathematics itself. Finally, in the last decades of the 20th century the rapid development of mathematical logic was driven by the development of the theory of algorithms and algorithmic languages, automata theory and graph theory (S. K. Kleene, A. Church, A. A. Markov, P. S. Novikov and many others).

Hegel (1770-1831) spoke rather ironically about the law of contradiction and the law of the excluded middle. He presented the latter, in particular, in the following form: "Spirit is green or is not green", and asked the "tricky" question: which of these two statements is true? The answer to this question is not difficult, however. Neither of the statements "Spirit is green" and "Spirit is not green" is true, since both are meaningless. The law of the excluded middle applies only to meaningful statements. Only they can be true or false; the meaningless is neither true nor false. Hegel's critique of logical laws relied, as is often the case, on giving them a meaning they do not have and attributing to them functions to which they bear no relation. The criticism of the law of the excluded middle is one example of such an approach. Criticism of this law (by L. E. J. Brouwer) led to the creation of a new direction in logic, intuitionistic logic. In it this law is not accepted, and all methods of reasoning connected with it are discarded. Among those rejected, for example, is proof by reductio ad absurdum.

I draw attention to the essence of any criticism of the laws of formal logic: all supporters of the idea of "extending" formal logic shift the center of gravity of logical research from the study of correct methods of reasoning to the development of particular problems: the theory of knowledge, causality, induction, and so on. Topics are introduced into logic that are interesting and important in themselves but have nothing to do with formal logic proper as a body of methods of correct thinking. The law of the excluded middle, without considering the contradictory statements themselves, forbids recognizing two contradictory propositions as both true, or as both false, at the same time. This is its meaning.

Conclusion: one cannot avoid recognizing one of two contradictory statements as true and seek something third between them.

The result of the application: the unambiguity of logical thinking is achieved.
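In the symbolic notation of propositional logic (a standard modern formulation, given here only for clarity), the two laws discussed above are written as

A \lor \neg A \qquad \text{(law of the excluded middle)}

\neg (A \land \neg A) \qquad \text{(law of contradiction)}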

The fourth law is the law of sufficient reason

Formulation: every true thought has a sufficient reason.

Commentary: this law in effect states that thoughts which can be substantiated are considered true, and those which cannot are considered false. In propositional logic this law has no formula, since it has a substantive character. It is worth dwelling on this in a little more detail:

A sufficient, i.e. real, non-fictional basis of our thoughts can be individual practice. Indeed, the truth of some judgments is confirmed by direct comparison with the facts of reality (for example: "[It is true that] It is raining", "[It is a lie that] I was in Acapulco"). But personal experience is limited, so in real activity one always has to rely on the experience of other people. Thanks to the development of scientific knowledge, a person can use as the basis of his thoughts the experience of his predecessors, enshrined in the laws and axioms of science and in the principles and provisions that exist in any field of human activity. To confirm a particular case there is no need to turn to its practical verification or to substantiate it with personal experience. If, for example, I know Archimedes' law, I do not need to look for a bath of water in order to place an object there and find out how much weight it loses. Archimedes' law is a sufficient basis for confirming this particular case.

The purpose of science is not only to obtain knowledge but also to transmit it. That is why logical flaws in the formal presentation of knowledge already obtained are unacceptable; knowledge must be logically controlled. This is optimal for its preservation, transmission and development. And that is why scientific knowledge, as a body of already proven logical propositions, can serve as the basis for subsequent demonstrative reasoning.

The law of sufficient reason actually boils down to the following requirement: "any judgment, before being accepted as true, must be justified." Thus, it follows from this law that, with correct reasoning, nothing should be accepted just like that, on faith. In each case of each statement, the reasons why it is considered true should be indicated. As you can see, the law of sufficient reason initially acts as a methodological principle that ensures the ability of thinking to supply grounds for subsequent reasoning. After all, everything that has already been correctly proved can be used as the basis for subsequent proofs.

Conclusion: a sufficient basis for any thought can be any other thought that has already been tested and recognized as true and from which the truth of the thought under consideration follows.

The result of the application: the law ensures the validity of thinking. Whenever we assert something, we are obliged to prove our case, i.e. to give sufficient grounds confirming the truth of our thoughts.


§2. Application of mathematical logic


The combination of the mathematical-logical approach with other mathematical approaches, primarily with probabilistic-statistical ideas and methods, against the backdrop of a deep interest in computing devices, was largely decisive in shaping the concept of cybernetics as a complex scientific field that takes processes as its subject.

In a number of cases the technical apparatus of mathematical logic is used directly (the synthesis of relay-contact circuits). What is especially important, however, is that the ideas of mathematical logic, above all in the theory of algorithms, and its characteristic style of thinking have had, and continue to have, a very great influence on science as a whole and on those particular areas of activity whose content is the automatic processing of information (informatics), its use in cryptography, and the automation of control processes (cybernetics).

Computer science is the science that studies the computer, as well as the interaction of a computer with a person.

The construction of logic machines is an interesting chapter in the history of logic and cybernetics. It captures the first projects for creating artificial intelligence and the first disputes about whether this is possible. The idea of logical machines appeared in the 13th century with the Spanish scholastic Raymond Lull, was later considered by Leibniz, and received new development in the 19th century after the emergence of mathematical logic. In 1870 the English philosopher and economist William Stanley Jevons built his "logical piano" in Manchester, which derived corollaries from algebraically written premises by singling out admissible combinations of terms. This is also called the decomposition of propositions into constituents. It is worth noting the possibility of practical application of the logical machine for solving complex logical problems.

Modern universal computers are at the same time logical machines. It was the introduction of logical operations that made them so flexible; it is also what allows them to model reasoning. Thus the arithmetic line of intelligent automata became connected with logic. In the 1920s, however, formal logic seemed too abstract and metaphysical for application to life. Even then, meanwhile, it was possible to foresee the introduction of logical calculi into technology.

Mathematical logic facilitates the mechanization of mental labor. Today's machines perform much more complex logical operations than their modest prototypes of the beginning of the century.

The problem of artificial intelligence is complex and multifaceted. We will probably not be mistaken if we say that the final limits of the mechanization of thought can only be established experimentally. We also note that in modern cybernetics, the possibility of modeling not only formal, but also meaningful thought processes is discussed.


§3. Mathematical logic in engineering


The role of logical processing of binary data has increased significantly at the present stage of development of computer technology. This is due, first of all, to the creation of technical systems implementing, in one form or another, the technology of obtaining and accumulating knowledge and modeling individual intellectual functions of a person. The core of such systems are powerful computers and computer systems. In addition, there is a large class of applied problems that can be reduced to solving logical problems, for example image processing and synthesis, and transport problems.

The required performance of computing facilities is achieved by parallelizing and pipelining computing processes. This is implemented, as a rule, on the basis of very large-scale integrated circuits (VLSI). However, VLSI technology and structure impose a number of specific requirements on algorithms, namely: regularity, parallel-flow organization of computations, superlinear operational complexity (repeated use of each input data element), locality of computational links, and two-dimensionality of the space in which the computation is implemented. These requirements make it necessary to solve the problem of effectively embedding an algorithm into a computing environment, or, as one says, of mapping an algorithm onto a computing architecture. The erroneousness of the previously widespread view that the transition to parallel-pipeline computer architectures would require only a slight modification of known algorithms has by now been demonstrated. It turned out that parallelism and pipelining of computational processes require the development of new algorithms even for those tasks for which well-studied and tested methods and algorithms existed, but ones oriented toward a sequential principle of implementation.

According to experts' forecasts, new concepts for the construction of computing facilities should be expected in the coming decade. The forecasts are based on the results of ongoing prospective research, in particular in the field of biochips and organic switching elements. Some directions aim to create circuits in the form of layers of organic molecules and films with a highly developed structure. According to researchers, this will make it possible to "grow" computers on the basis of genetic engineering and to strengthen the analogy between the elements of technical systems and brain cells. Thus neurocomputers, which imitate the intellectual functions of biological objects, including humans, acquire real outlines. Apparently, molecular electronics will become the basis for the creation of sixth-generation computers.

All this objectively motivates intensive work on methods of synthesizing algorithms for processing logical data and effectively embedding them in an operating environment of binary elements. Obviously, binary elements and binary data correspond to each other most fully, as regards the representation and processing of the latter on such elements, if considered separately. Indeed, suppose that the algebra of logic over the values 0 and 1 is realized on a binary element with full use of its operational resource. In other words, the question is raised of the efficiency, and sometimes even of the possibility, of implementing a given algorithm on such a network (structure). This is the essence of embedding an algorithm into a structure.


§4. Mathematical logic in cryptography


Cryptography is the study of techniques for transmitting messages in disguised form, such that only the sender's intended recipients can remove the disguise and read the message. The general scheme of information protection is shown in Figure 2.

The stage of error-control coding is based on introducing into the transmitted message a redundancy of information sufficient to overcome the interference on the communication line. Suppose, for example, that a sequence of characters 0 and 1 is being transmitted, and that on the communication line a reception error may occur with some probability: a 0 is received instead of a 1, or vice versa. Then the encoder may send five pulses 00000 for each character a_i of the message if a_i = 0, and 11111 if a_i = 1. At the receiving end the received pulse sequence is divided into blocks of five pulses. If a received block contains two or fewer 0 pulses, it is decided that the symbol a_i = 1 was transmitted, and vice versa. In this way the initial error probability is greatly reduced. There are more elegant coding methods that achieve sufficient reliability while introducing much less redundant information.

To express information, one introduces an alphabet of symbols from which messages (finite ordered sequences of these symbols) are composed. Denote by A the cardinality of the chosen alphabet, and assume that the set of all possible messages is finite. As a measure of the information in a message of a given length one can take the base-2 logarithm of the number of possible messages. Then the amount of information per character of the alphabet is X = log₂ A. If we deal with words of length S, there are in total N = A^S of them (the Cartesian S-th power of the alphabet), and therefore the amount of information in a word is Y = log₂ N = log₂ A^S = S·X.

The lion's share of cryptanalysis is made up of methods based on probabilistic analysis of the cryptogram and of the presumed source language. Since any natural language contains redundant information, unevenly distributed over words, the letters of its alphabet have stable frequency characteristics. In English, for example, the letter e is repeated especially often; letter combinations also have characteristic frequencies. The general scheme of a cryptosystem with a secret key is shown in Figure 3. Here X is the plaintext, Y is the ciphertext, K is the cipher key, and R is a randomizing sequence.
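A minimal Python sketch of the five-fold repetition code described above (the message bits and the simulated errors are arbitrary illustrative values; real systems use far more economical codes):

def encode(bits):
    # Repeat every information bit five times: 00000 for 0, 11111 for 1.
    return [b for bit in bits for b in [bit] * 5]

def decode(pulses):
    # Split the received pulses into blocks of five and decide by majority:
    # a block with two or fewer 1-pulses is read as 0, otherwise as 1.
    blocks = [pulses[i:i + 5] for i in range(0, len(pulses), 5)]
    return [0 if sum(block) <= 2 else 1 for block in blocks]

message = [1, 0, 1]
sent = encode(message)
received = sent[:]
received[1] ^= 1   # simulate two isolated channel errors
received[7] ^= 1
assert decode(received) == message   # both errors are corrected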


§5. Mathematical logic in programming


A function of one argument is a rule that associates with any value lying in the range of that argument (which is also the domain of definition of the function) another value lying in the range of values of the function.

The concept of a function has been carried over into programming languages. A programming language usually has a number of built-in functions such as sin, cos, sqrt and so on. In addition, the programmer can define his own functions. They can work not only with real numbers but with various data types, usually including integer, real, boolean and character (string) types. They can also work with structures. Pascal, Algol-68 and PL/1 have, for example, record, array, list and file-of-records types, and the values of functions can be pointers to these structures. All this is consistent with the notion of a domain outside of which a function is not defined. In programming languages this domain is usually specified by indicating the data type, which is some set of values. Thus in Pascal the compiler must ensure that no function is applied to a value of the wrong type, which could fall outside the function's domain.

Functions of many arguments. We now need to generalize the definition to cover functions of many arguments. To do this, we collect the n arguments into an ordered tuple, which we regard as a single argument. Take the subtraction function diff(x, y). It is interpreted as a mapping of pairs <x, y> into whole numbers. As a set of ordered pairs it can be written as follows: diff = {<<5,3>, 2>, <<6,3>, 3>, <<4,5>, -1>, ...}. If instead we had a function of four arguments h(x, y, z, w), we would use a mapping defined on quadruples. This technique is also used in programming. If it is necessary to reduce the number of arguments to a procedure or function (and they all have the same type), then in Fortran you can write these values into an array and pass this array as a parameter, rather than the individual values. In the more general case (for example, in Pascal), when the arguments are allowed to have different types, you can pass a record as a parameter and store the values as separate components of this record. In fact, an ordered tuple of n elements in mathematics corresponds to a record in programming. Each of its components is taken from its own separate domain, as in the case of a record. The only difference is that a component is identified by its position rather than by its name. The relational data model operates on sets of ordered tuples, which correspond to files of records stored on the machine. Mathematical logic is also used in other areas of computer science, in particular in developments in the field of modeling and automation of intelligent procedures, the direction known as artificial intelligence.
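A small Python sketch of the same idea (diff and the listed pairs are the text's illustration, not a fixed API): a two-argument function can equally well be treated as a function of one ordered pair, or listed extensionally as a set of pairs.

def diff(x: int, y: int) -> int:
    # The ordinary two-argument subtraction function.
    return x - y

def diff_on_pairs(pair: tuple[int, int]) -> int:
    # The same function viewed as a mapping of ordered pairs into integers.
    x, y = pair
    return x - y

# A finite fragment of diff written extensionally as a set of ordered pairs.
diff_as_set = {((5, 3), 2), ((6, 3), 3), ((4, 5), -1)}

assert diff(5, 3) == diff_on_pairs((5, 3)) == 2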


Conclusion


Mathematical logic contributed much to the rapid development of information technology in the 20th century, but the concept of "judgment", which appeared in logic in the time of Aristotle and on which, as on a foundation, the logical basis of natural language rests, fell out of its field of vision. Such an omission did not contribute to the development of the logical culture of society and even gave rise in many people to the illusion that computers are capable of thinking no worse than a person. Many are not even embarrassed by the fact that, against the backdrop of general computerization on the eve of the third millennium, logical absurdities within science itself (not to mention politics, lawmaking and pseudoscience) are even more common than at the end of the 19th century. And in order to understand the essence of these absurdities, there is no need to turn to the complex mathematical structures with many-place relations and recursive functions that are used in mathematical logic. It turns out that, to understand and analyze these absurdities, it is quite enough to apply a much simpler mathematical structure of judgment, which not only does not contradict the mathematical foundations of modern logic but in a certain way supplements and extends them.


Bibliography


1. Igoshin, V. I. Mathematical logic and theory of algorithms [Text] / V. I. Igoshin. - M.: Academy, 2008. - 448 p.: ill.

2. Styazhkin, N. I. Formation of mathematical logic [Text] / N. I. Styazhkin. - M.: Nauka, 1967. - 508 p.: ill.

3. Markov, A. A. Elements of mathematical logic [Text] / A. A. Markov. - M.: MGU, 2004. - 310 p.: ill.

4. Curry, H. B. Foundations of mathematical logic [Text] / H. B. Curry. - M.: Mir, 1969. - 568 p.: ill.




Other sections

MATHEMATICAL LOGIC, deductive logic, including mathematical methods for studying methods of reasoning (conclusions); mathematical theory of deductive reasoning methods. Mathematical logic is also called the logic used in mathematics.

An important role in mathematical logic is played by the concepts of a deductive theory and of a calculus. A calculus is a set of inference rules that make it possible to regard certain formulas as derivable. Inference rules are divided into two classes. Some of them directly qualify certain formulas as derivable; such inference rules are called axioms. Others allow us to consider derivable those formulas that are syntactically related, in some predetermined way, to finite sets of derivable formulas. A widely used rule of the second type is the modus ponens rule: if the formulas A and A → B are derivable, then so is the formula B.
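In the usual notation of proof theory the modus ponens rule is written as the inference figure (LaTeX notation, added here for clarity):

\frac{A \qquad A \to B}{B} \quad (\text{modus ponens})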

The relation of calculi to semantics is expressed by the notions of the semantic soundness and the semantic completeness of a calculus. A calculus C is said to be semantically sound for a language L if every formula of L that is derivable in C is true. Similarly, a calculus C is said to be semantically complete for the language L if every true formula of L is derivable in C.




Many of the languages considered in mathematical logic have semantically complete and semantically sound calculi. In particular, K. Gödel's result is known that the so-called classical predicate calculus is semantically complete and semantically sound for the language of classical first-order predicate logic. On the other hand, there are many languages for which the construction of a semantically complete and semantically sound calculus is impossible. Here the classic result is Gödel's incompleteness theorem, which states the impossibility of a semantically complete and semantically sound calculus for the language of formal arithmetic.


It should be noted that in practice many elementary logical operations form an obligatory part of the instruction set of all modern microprocessors and, accordingly, are included in programming languages. This is one of the most important practical applications of the methods of mathematical logic studied in modern computer science textbooks.
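For example (a Python sketch; the corresponding operations exist as machine instructions on essentially every modern processor), elementary logical operations are available both on truth values and bitwise on integers:

a, b = True, False
print(a and b, a or b, not a)      # logical AND, OR, NOT on truth values

x, y = 0b1100, 0b1010
print(bin(x & y), bin(x | y), bin(x ^ y), bin(~x & 0b1111))  # bitwise AND, OR, XOR and NOT (restricted to 4 bits)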


Sections of mathematical logic

    Algebra of logic

    Propositional logic

    Proof theory

    Model theory

Propositional logic (from the English "propositional logic"; also the propositional calculus) is a formal theory whose main object is the concept of a logical statement. In terms of expressive power it can be characterized as classical zeroth-order logic.

Despite its importance and wide scope, propositional logic is the simplest logic and has very limited means for investigating propositions.

The algebra of logic (algebra of propositions) is a section of mathematical logic that studies logical operations on propositions. Most often it is assumed that statements can only be true or false.

The basic elements with which the algebra of logic operates are propositions. Propositions are considered over the set of truth values {0, 1}, on whose elements three operations are defined:

    Negation (unary operation),

    Conjunction (binary),

    Disjunction (binary),

as well as the constants: logical zero (0) and logical one (1).

Probability theory is a branch of mathematics that studies random events, their properties and operations on them.

In probability theory, those random events are studied that can be reproduced under the same conditions and have the following property: as a result of an experiment, under the condition S, an event A can occur with a certain probability p.


The basic concepts of probability theory are: event, probability, random event, random phenomenon, mathematical expectation, variance, distribution function, probability space.


As a science, probability theory arose in the middle of the 17th century. The first works appeared in connection with calculating probabilities in games of chance. Investigating the prediction of winnings in dice games, Blaise Pascal and Pierre Fermat, in their correspondence of 1654, discovered the first probabilistic regularities. In particular, in this correspondence they arrived at the concept of mathematical expectation and at the theorems of multiplication and addition of probabilities. In 1657 these results were presented in Ch. Huygens' book "On Calculations in Games of Chance", the first treatise on probability theory.

Great progress in probability theory was made by Jacob Bernoulli: he established the law of large numbers in the simplest case and formulated many concepts of modern probability theory. He wrote a monograph on the theory of probability, published posthumously in 1713 under the title "The Art of Conjecturing" (Ars Conjectandi).

In the first half of the 19th century, probability theory began to be applied to the theory of observational errors. In this period the de Moivre-Laplace theorem (1812) and the Poisson theorem (1837), the first limit theorems, were proved. Laplace expanded and systematized the mathematical foundations of probability theory. Gauss and Legendre developed the method of least squares.

In the second half of the 19th century, most of the discoveries in probability theory were made by the Russian scientists P. L. Chebyshev and his students A. M. Lyapunov and A. A. Markov. In 1867 Chebyshev formulated and quite simply proved the law of large numbers under very general conditions. In 1887 he was the first to formulate the central limit theorem for sums of independent random variables and to propose a method for proving it. In 1901 this theorem was proved by Lyapunov under more general conditions. In 1907 Markov first considered a scheme of trials connected in a chain, thereby laying the foundation of the theory of Markov chains. He also made a great contribution to research on the law of large numbers and the central limit theorem.

At the beginning of the 20th century the range of applications of probability theory expanded, and systems of strict mathematical justification and new methods of probability theory were created. In this period, thanks to the work of Andrey Nikolaevich Kolmogorov, probability theory took on its modern form.

In 1926, while still a graduate student, Kolmogorov obtained the necessary and sufficient conditions under which the law of large numbers holds. In 1933, in his work "Basic Concepts of Probability Theory", Kolmogorov introduced the axiomatics of probability theory, which is generally recognized as the best.
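In the standard modern formulation (a brief sketch of that axiomatics, not a quotation from the 1933 work), a probability measure P on a space of events Ω satisfies

P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P\Bigl(\bigcup_{i \ge 1} A_i\Bigr) = \sum_{i \ge 1} P(A_i) \ \text{ for pairwise disjoint events } A_1, A_2, \dots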


The mathematical apparatus of probability theory is widely used in science and technology. In particular, in astronomy, the method of least squares is used to calculate the orbits of comets. In medicine, when evaluating the effectiveness of treatment methods, the theory of probability is also used.


/ BDE Mathematics /

Deduction

Remember Sherlock Holmes constantly talking about his deductive abilities? So what is deduction?

DEDUCTION (from the Latin deductio, derivation) is a form of thinking in which a new thought is derived in a purely logical way from preceding thoughts. Such a sequence of thoughts is called an inference, and each component of this inference is either a previously proven thought, or an axiom, or a hypothesis. The last thought of the inference is called the conclusion.

Deductive reasoning, which is the subject of traditional logic, is used whenever we need to consider a phenomenon on the basis of a general position already known to us and to draw the necessary conclusion about this phenomenon. We know, for example, the specific fact "this plane intersects the ball" and the general rule for all planes intersecting a ball: "every section of a ball by a plane is a circle." Applying the general rule to the specific fact, every correctly thinking person necessarily comes to the same conclusion: "the section of the ball by this plane is a circle."
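In the language of predicate logic this inference is an instance of the standard scheme (a reconstruction added for clarity, with S(x) read as "x is a section of a ball by a plane" and C(x) as "x is a circle"):

\forall x\,\bigl(S(x) \to C(x)\bigr), \quad S(a) \ \vdash \ C(a)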


The structure of deductive reasoning and the compulsory nature of its rules reflect the most common relationships between objects of the material world: the relationships of genus, species and individual, that is, of the general, the particular and the singular: what is inherent in all species of a given genus is also inherent in any species; what is inherent in all individuals of a genus is inherent in each individual.

The theory of deduction was first elaborated by Aristotle. He worked out the requirements that the individual thoughts making up a deductive inference must satisfy, defined the meaning of terms and revealed the rules of certain types of deductive reasoning. The positive side of the Aristotelian doctrine of deduction is that it reflects real patterns of the objective world.

The term "deduction" in the narrow sense of the word also means the following:
1) The research method, which is as follows: in order to to obtain new knowledge about an object or a group of homogeneous objects, it is necessary, firstly, to find the nearest genus, which includes these objects, and, secondly, to apply to them the appropriate law inherent in the entire given genus of objects. The deductive method plays a huge role in mathematics. It is known that all theorems are derived in a logical way with the help of deduction from a small finite number of initial principles, called axioms.
2) The form of presentation of the material in a book, lecture, report, conversation, when from general provisions, rules, laws go to less general provisions, rules, laws.
This method makes it possible to specify formal axiomatic theories.
2. Specifying only axioms
In this case the inference rules are considered well known, so only the axioms are given. With such a construction of a theory we speak of a semi-formal axiomatic theory.
3. Specifying only inference rules
This way of constructing a theory is based on specifying only the inference rules, the set of axioms being empty. A theory defined in this way is therefore a special case of a formal theory. This variety was later called a theory of natural deduction.

The main properties of deductive theories are:
1. Consistency
A theory in which the set of theorems covers the entire set of formulas (i.e. every formula is a theorem) is called inconsistent; otherwise the theory is consistent.

2. Completeness
A theory is called complete if, for any formula F, either F itself or its negation ¬F is a theorem.
3. Independence of the axioms
An axiom of a theory is called independent if it cannot be deduced from the remaining axioms. A system of axioms is independent only if every axiom in it is independent.
4. Decidability
A theory is called decidable if there is an effective algorithm that, for any formula, determines in a finite number of steps whether it is a theorem of the theory.
Examples of deductive theories are propositional logic, first-order logic (the predicate calculus) and formal arithmetic (the theory S).

Mathematical logic, like classical logic, studies the processes of inference and makes it possible, from the truth of some judgments, to draw conclusions about the truth or falsity of others, regardless of their specific content. The use of mathematical methods in logic (the algebraization of logic and the construction of logical calculi) gave rise to a new area of mathematics called mathematical logic. The main task of mathematical logic is the formalization of knowledge and reasoning. Mathematics is a science in which all statements are proved by means of inference, so mathematical logic is, in essence, the science of mathematics.

Mathematical logic provided the means for constructing logical theories and the computing apparatus for solving problems. Mathematical logic and the theory of algorithms have found wide application in various fields of scientific research and technology (for example, in the theory of automata, in linguistics, in the theory of relay-contact circuits, in economic research, in computer technology, in information systems, etc.). The basic concepts of mathematical logic underlie its applications such as databases, expert systems, and logic programming systems. The same concepts become the methodological basis for describing the analysis and modeling of automated integrated production.

The questions studied by mathematical logic can be considered both by means of substantive (semantic) theory, which is based on the concept of an algebra, and by means of formal axiomatic (syntactic) theory, which is based on the concept of a logical calculus. This course examines both approaches, starting with propositional algebra, which is then generalized to predicate algebra; both serve for understanding the construction of logical calculi and their special cases: the propositional calculus and the predicate calculus.

Section I. Propositional Algebra

Propositional algebra can be thought of as a translation into another (algebraic) language of the results learned in the section "Boolean Functions" by means of the functional language. In the functional approach, each logical operation and formula is associated with a certain two-valued function. In the algebraic approach, logical operations are interpreted as algebraic operations acting on a two-element set.

1. Statements and operations on them. Formulas

A statement (proposition) is any assertion about which it can be said quite definitely and objectively whether it is true or false.

For example, the assertion "2 > 0" is a statement and is true, and the statement "2 < 0" is false; the assertion "x² + y² = z²" is not a statement, since it can be either true or false for different values of the variables x, y, z. A statement is completely determined by its truth value. Let us agree to denote the truth value of a statement by 1 if the statement is true and by 0 if it is false, which corresponds exactly to the values of the variables of Boolean functions.

Simple and compound statements are distinguished: a statement is called simple if no part of it is itself a statement. Simple statements will be denoted by the initial capital letters of the Latin alphabet A, B, C or by A₁, A₂, ... Compound statements are characterized by the fact that they are formed from several simple statements by means of logical operations, i.e. they are formulas of propositional algebra.

Recall that an algebraic structure or algebra is a structure formed by a certain set together with the operations introduced on it. Let us define the algebra of propositions.

Denote by B = {0, 1} the set of truth values of statements. We define operations on the set B.

The negation of a statement A is the statement that is true if A is false, and vice versa. Negation is denoted ¬A and is a unary operation.

Let A and B be some statements; we introduce binary operations on them.

The conjunction of statements A and B is the statement that is true if and only if both statements A and B are true. Conjunction is denoted A ∧ B (A·B, AB).

The disjunction of statements A and B is the statement that is true if at least one of the statements A or B is true. Disjunction is denoted A ∨ B.

The implication of statements A and B is the statement that is false if and only if A is true and B is false. It is denoted A → B.

The equivalence of statements A and B is the statement that is true if and only if the statements A and B have the same value. The operation is denoted A ~ B (A ↔ B).

Logical operations are also defined by means of tables called truth tables. A summary truth table for all the introduced logical operations is given below.
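The table below is reconstructed directly from the definitions above (1 stands for true, 0 for false):

A  B | ¬A  A∧B  A∨B  A→B  A~B
0  0 |  1   0    0    1    1
0  1 |  1   0    1    1    0
1  0 |  0   0    1    0    0
1  1 |  0   1    1    1    1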

A propositional variable is a variable whose values are simple propositions. Propositional variables are denoted X₁, X₂, ..., Xₙ.

The notion of a propositional algebra formula is introduced by induction. The formulas of propositional algebra are:

1) logical constants 0 and 1;

2) propositional variables;

3) if A and B are formulas, then each of the expressions ¬(A), (A) ∧ (B), (A) ∨ (B), (A) → (B), (A) ~ (B) is a formula;

4) there are no formulas other than those constructed according to items 1)-3).

Denote by M the set of all formulas of propositional algebra; M is closed under the logical operations.

For a formula constructed according to item 3, the formulas A and B are called its subformulas. The number of parentheses in a formula can be reduced. The order in which the operations in a formula are performed is determined by their priority. The list of logical operations in descending order of priority is: ¬, ∧, ∨, →, ~. The order of operations can be changed, as in algebra, by means of parentheses.

Let U be a formula over the propositional variables X₁, X₂, ..., Xₙ, denoted U(X₁, X₂, ..., Xₙ). A set of concrete values of the propositional variables X₁, X₂, ..., Xₙ is called an interpretation of the formula U and is denoted I(U).

A formula is called satisfiable if there is a set of values of the variables for which the formula takes the value 1 (there is an interpretation I(U) on which the formula is true).

A formula is called refutable if there is a set of values of the variables for which the formula takes the value 0 (there is an interpretation I(U) on which the formula is false).

A formula is called identically true (a TI-formula), or a tautology, if it takes the value 1 for all sets of values of the variables (the formula is true on all interpretations).

A formula is called identically false (a TL-formula), or a contradiction, if it takes the value 0 for all sets of values of the variables (the formula is false on all interpretations).

Formulas A and B are called equivalent (denoted A ≡ B) if for any values of the propositional variables the value of the formula A coincides with the value of the formula B.

The problems of determining the equivalence, satisfiability, refutability, identical truth and identical falsity of formulas can be solved by constructing truth tables, but there are less cumbersome ways of solving them.
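The brute-force truth-table approach is easy to sketch in Python (representing formulas as ordinary Boolean functions is an illustrative choice, not part of the theory):

from itertools import product

def classify(formula, n):
    # Evaluate a formula, given as a Python function of n Boolean arguments,
    # on all 2**n interpretations and classify it.
    values = [formula(*row) for row in product([False, True], repeat=n)]
    if all(values):
        return "identically true (tautology)"
    if not any(values):
        return "identically false (contradiction)"
    return "satisfiable and refutable"

def implies(a, b):
    # Implication is false only when a is true and b is false.
    return not (a and not b)

print(classify(lambda a, b: implies(a, b) == ((not a) or b), 2))  # tautology: A -> B is equivalent to (not A) or B
print(classify(lambda a, b: a and b, 2))                          # satisfiable and refutable
print(classify(lambda a: a and (not a), 1))                       # contradiction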


