Propositional calculus

The propositional calculus[a] is a branch of logic.[1] It is also called propositional logic,[2] statement logic,[1] sentential calculus,[3] sentential logic,[1] or sometimes zeroth-order logic.[4][5] It deals with propositions[1] (which can be true or false)[6] and relations between propositions,[7] including the construction of arguments based on them.[8] Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, equivalence, and negation.[9][10][11][12] Some sources include other connectives, as in the table below.

Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.

Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make compound propositions. Because of this, the propositional variables are called atomic formulas of a formal zeroth-order language.[10][2] While the atomic propositions are typically represented by letters of the alphabet,[10] there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic.

Notational variants of the connectives[13][14]
Connective Symbols
AND ∧, ·, &
equivalent ≡, ⇔, ↔
implies →, ⊃, ⇒
NAND ⊼, ↑, |
nonequivalent ≢, ⇎, ↮
NOR ⊽, ↓
NOT ¬, −, ~, !
OR ∨, +, ∥
XNOR XNOR
XOR ⊕, ⊻

The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic,[1] in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false.[15] The principle of bivalence and the law of excluded middle are upheld. By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic.[4][5]

History

Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC[16] and expanded by his successors among the Stoics. The logic was focused on propositions, unlike the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost[17] and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic.[18]

Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independent of Leibniz.[19]

Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic."[20] Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth.[21] The invention of truth tables, however, is of uncertain attribution.

Within works by Frege[22] and Bertrand Russell[23] are ideas that influenced the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently).[22] Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce,[24] and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis.[23] Ultimately, some, like John Shosky, have concluded that "it is far from clear that any one person should be given the title of 'inventor' of truth-tables".[23]

Sentences

Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.[2]

Declarative sentences

Propositional logic deals with statements, which are defined as declarative sentences having truth value.[25][1] Examples of statements might include:

  • Wikipedia is a free online encyclopedia that anyone can edit.
  • London is the capital of England.
  • All Wikipedia editors speak at least three languages.
Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article".[26][27] Such non-declarative sentences have no truth value,[28] and are dealt with only in nonclassical logics, called erotetic and imperative logics.

Compounding sentences with connectives

In propositional logic, a statement can contain one or more other statements as parts.[1] Compound sentences are formed from simpler sentences and express relationships among the constituent sentences.[29] This is done by combining them with logical connectives:[29][30] the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals,[29] which are formed by using the corresponding connectives to connect propositions.[31][32] In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional).[1][9] Examples of such compound sentences might include:

  • Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction)
  • It is not true that all Wikipedia editors speak at least three languages. (negation)
  • Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction)[b]

If sentences lack any logical connectives, they are called simple sentences,[1] or atomic sentences;[30] if they contain one or more logical connectives, they are called compound sentences,[29] or molecular sentences.[30]

Sentential connectives are a broader category that includes logical connectives.[2][30] Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence,[2][30] or that inflect a single sentence to create a new sentence.[2] A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition.[2] Philosophers disagree about what exactly a proposition is,[6][2] as well as about which sentential connectives in natural languages should be counted as logical connectives.[30][2] Sentential connectives are also called sentence-functors,[33] and logical connectives are also called truth-functors.[33]

Arguments

An argument is defined as a pair of things, namely a set of sentences, called the premises,[c] and a sentence, called the conclusion.[34][30][33] The conclusion is claimed to follow from the premises,[33] and the premises are claimed to support the conclusion.[30]

Example argument

The following is an example of an argument within the scope of propositional logic:

Premise 1: If it's raining, then it's cloudy.
Premise 2: It's raining.
Conclusion: It's cloudy.

The logical form of this argument is known as modus ponens,[35] which is a classically valid form.[36] So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining § Formalization.

Validity and soundness

An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true.[34][37][38] Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false.[38][34]

Validity is contrasted with soundness.[38] An argument is sound if, and only if, it is valid and all its premises are true.[34][38] Otherwise, it is unsound.[38]

Logic, in general, aims to precisely specify valid arguments.[30] This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises,[30] which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true[30] – see § Semantics below.

Formalization

Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the § Example argument. The formal language for a propositional calculus will be fully specified in § Language, and an overview of proof systems will be given in § Proof systems.

Propositional variables

Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives,[35][1] it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables).[1] With propositional variables, the § Example argument would then be symbolized as follows:

Premise 1: P → Q
Premise 2: P
Conclusion: Q

When P is interpreted as "It's raining" and Q as "it's cloudy", these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form.

When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P, Q and R) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.

Gentzen notation

If we assume that the validity of modus ponens has been accepted as an axiom, then the same § Example argument can also be depicted like this:

P → Q, P
――――――――
Q

This method of displaying it is Gentzen's notation for natural deduction and sequent calculus.[39] The premises are shown above a line, called the inference line,[11] separated by a comma, which indicates combination of premises.[40] The conclusion is written below the inference line.[11] The inference line represents syntactic consequence,[11] sometimes called deductive consequence,[41] which is also symbolized with ⊢.[42][41] So the above can also be written in one line as P → Q, P ⊢ Q.[d]

Syntactic consequence is contrasted with semantic consequence,[43] which is symbolized with ⊧.[42][41] In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below.

Language

The language (commonly called ℒ)[44][45][30] of a propositional calculus is defined in terms of:[2][10]

  1. a set of primitive symbols, called atomic formulas, atomic sentences,[35][30] atoms,[46] placeholders, prime formulas,[46] proposition letters, sentence letters,[35] or variables, and
  2. a set of operator symbols, called connectives,[14][1][47] logical connectives,[1] logical operators,[1] truth-functional connectives,[1] truth-functors,[33] or propositional connectives.[2]

A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language ℒ, then, is defined either as being identical to its set of well-formed formulas,[45] or as containing that set (together with, for instance, its set of connectives and variables).[10][30]

Usually the syntax of ℒ is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax,[30][48] while others use them without comment.[2][10]

Syntax

Given a set of atomic propositional variables p₁, p₂, p₃, …, and a set of propositional connectives c₁¹, c₂¹, c₃¹, …, c₁², c₂², c₃², …, c₁³, c₂³, c₃³, …, a formula of propositional logic is defined recursively by these definitions:[2][10][47][e]

Definition 1: Atomic propositional variables are formulas.
Definition 2: If cₙᵐ is a propositional connective of arity m, and ⟨A, B, C, …⟩ is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying cₙᵐ to ⟨A, B, C, …⟩ is a formula.
Definition 3: Nothing else is a formula.

Writing the result of applying cₙᵐ to ⟨A, B, C, …⟩ in functional notation, as cₙᵐ(A, B, C, …), we have the following as examples of well-formed formulas:

  • ¬p₂
  • ¬¬¬p₂
  • (p₃ ∧ p₄)
  • (¬p₁ ∨ p₂)
  • (p₁ → (p₂ ∧ ¬p₃))
  • ((p₁ ∨ p₂) ↔ ¬p₃)
  • ¬(p₁ ↔ (p₂ → p₃))

What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition.[35][f] It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language   are built up from the atoms as ultimate building blocks.[2] Composite formulas (all formulas besides atoms) are called molecules,[46] or molecular sentences.[30] (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.)[46]

The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax.[33] In particular, it excludes infinitely long formulas from being well-formed.[33]
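This recursive grammar can be mirrored in a few lines of code. The following sketch (Python; the tuple encoding with connective names such as 'not' and 'imp' is an illustrative assumption, not part of the formal definition above) checks whether an object counts as a well-formed formula in the sense of Definitions 1–3:

```python
# Assumed encoding: atoms are strings such as 'p1'; compound formulas are tuples
# ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B), ('iff', A, B).
ARITY = {'not': 1, 'and': 2, 'or': 2, 'imp': 2, 'iff': 2}

def is_wff(f):
    """Definition 1: atoms are formulas.  Definition 2: applying an m-ary
    connective to m formulas yields a formula.  Definition 3: nothing else."""
    if isinstance(f, str):                         # atomic propositional variable
        return True
    if isinstance(f, tuple) and f and f[0] in ARITY:
        return len(f) == 1 + ARITY[f[0]] and all(is_wff(part) for part in f[1:])
    return False

print(is_wff(('imp', 'p1', ('and', 'p2', ('not', 'p3')))))   # True
print(is_wff(('and', 'p1')))                                  # False: wrong arity
```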

Constants and schemata

Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition,[50] while propositional variables range over the set of all atomic propositions.[50] Schemata, or schematic letters, however, range over all formulas.[33][1] It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.[33][1]

However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤, called "truth", which always evaluates to True, and the special symbol ⊥, called "falsity", which always evaluates to False.[51][52][53] Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors",[33] or equivalently, "nullary connectives".[47]

Semantics

To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted.[30] In classical logic, all propositions evaluate to exactly one of two truth-values: True or False.[1][54] For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True,[55] while "Wikipedia is a paper encyclopedia" evaluates to False.[56]

In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic.[54][57][33] To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic".

Interpretation (case) and argument

For a given language ℒ, an interpretation,[58] or case,[30][g] is an assignment of semantic values to each formula of ℒ.[30] For a formal language of classical logic, a case is defined as an assignment, to each formula of ℒ, of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0).[59][60] An interpretation of a formal language for classical logic is often expressed in terms of truth tables.[61][1] Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is ℒ, and whose range is its set of semantic values {T, F},[2] or {1, 0}.[30]

For n distinct propositional symbols there are 2ⁿ distinct possible interpretations. For any particular symbol a, for example, there are 2 possible interpretations: either a is assigned T, or a is assigned F. And for the pair a, b there are 2² = 4 possible interpretations: either both are assigned T, or both are assigned F, or a is assigned T and b is assigned F, or a is assigned F and b is assigned T.[61] Since ℒ has ℵ₀, that is, denumerably many propositional symbols, there are 2^ℵ₀, and therefore uncountably many, distinct possible interpretations of ℒ as a whole.[61]
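To make the counting concrete, the following sketch (Python, assuming atoms are simply represented by strings) enumerates all 2ⁿ classical interpretations of n propositional symbols:

```python
from itertools import product

def interpretations(symbols):
    """Yield every assignment of True/False to the given symbols (2**n cases)."""
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

cases = list(interpretations(['a', 'b']))
print(len(cases))   # 4 interpretations for two symbols
print(cases[0])     # e.g. {'a': True, 'b': True}
```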

Where ℐ is an interpretation and φ and ψ represent formulas, the definition of an argument, given in § Arguments, may then be stated as a pair ⟨{φ₁, φ₂, φ₃, …, φₙ}, ψ⟩, where {φ₁, φ₂, φ₃, …, φₙ} is the set of premises and ψ is the conclusion. The definition of an argument's validity, i.e. its property that {φ₁, φ₂, φ₃, …, φₙ} ⊧ ψ, can then be stated as its absence of a counterexample, where a counterexample is defined as a case ℐ in which the argument's premises {φ₁, φ₂, φ₃, …, φₙ} are all true but the conclusion ψ is not true.[30][35] As will be seen in § Semantic truth, validity, consequence, this is the same as to say that the conclusion is a semantic consequence of the premises.

Propositional connective semantics

An interpretation assigns semantic values to atomic formulas directly.[58][30] Molecular formulas are assigned a function of the value of their constituent atoms, according to the connective used;[58][30] the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those.[58][30] This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives.[35]

Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values,[1][30] the semantic definition of the connectives is usually represented as a truth table for each of the connectives,[1][30] as seen below:

p q p ∧ q p ∨ q p → q p ↔ q ¬p ¬q
T T T T T T F F
T F F T F F F T
F T F T T F T F
F F F F T T T T

This table covers each of the main five logical connectives:[9][10][11][12] conjunction (here notated p ∧ q), disjunction (p ∨ q), implication (p → q), equivalence (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators.[1][62][30] For more detail on each of the five, see the articles on each specific one, as well as the articles "Logical connective" and "Truth function". For more truth tables for more different kinds of connectives, see the article "Truth table".
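The same five truth functions can be written down directly as code. The sketch below (Python; the ASCII names 'and', 'or', 'imp', 'iff' and 'not' are illustrative stand-ins for the symbols in the table) reproduces one row of the table above:

```python
# Classical truth functions for the five main connectives.
TRUTH_FUNCTION = {
    'not': lambda p: not p,
    'and': lambda p, q: p and q,
    'or':  lambda p, q: p or q,
    'imp': lambda p, q: (not p) or q,      # material implication
    'iff': lambda p, q: p == q,            # material equivalence
}

# The row p = True, q = False of the table above.
p, q = True, False
print(TRUTH_FUNCTION['and'](p, q),   # False
      TRUTH_FUNCTION['or'](p, q),    # True
      TRUTH_FUNCTION['imp'](p, q),   # False
      TRUTH_FUNCTION['iff'](p, q),   # False
      TRUTH_FUNCTION['not'](p))      # False
```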

Some of these connectives may be defined in terms of others: for instance, implication, p → q, may be defined in terms of disjunction and negation, as ¬p ∨ q;[63] and disjunction may be defined in terms of negation and conjunction, as ¬(¬p ∧ ¬q).[48] In fact, a truth-functionally complete system,[h] in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did),[2] or using only implication and negation (as Frege did),[2] or using only conjunction and negation,[2] or even using only a single connective for "not and" (the Sheffer stroke),[3][2] as Jean Nicod did.[2] A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives,[48] but no other connectives have this property.[48]
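These interdefinability claims can be verified by brute force over the four possible truth-value combinations. The following sketch (Python, with hypothetical helper names) checks that p → q agrees with ¬p ∨ q on every assignment, and that negation and disjunction can both be recovered from NAND alone:

```python
from itertools import product

def imp(p, q):                       # material implication by its truth table
    return not (p and not q)

nand = lambda p, q: not (p and q)    # the Sheffer stroke

for p, q in product([True, False], repeat=2):
    assert imp(p, q) == ((not p) or q)                   # p -> q  ==  (not p) or q
    assert (not p) == nand(p, p)                          # negation from NAND
    assert (p or q) == nand(nand(p, p), nand(q, q))       # disjunction from NAND
print("all interdefinability checks pass")
```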

Semantic truth, validity, consequence

Given φ and ψ as formulas (or sentences) of a language ℒ, and ℐ as an interpretation (or case)[i] of ℒ, then the following definitions apply:[61][60]

  • Truth-in-a-case:[30] A sentence φ of ℒ is true under an interpretation ℐ if ℐ assigns the truth value T to φ.[60][61] If φ is true under ℐ, then ℐ is called a model of φ.[61]
  • Falsity-in-a-case:[30] φ is false under an interpretation ℐ if, and only if, ¬φ is true under ℐ.[61][65][30] This is the "truth of negation" definition of falsity-in-a-case.[30] Falsity-in-a-case may also be defined by the "complement" definition: φ is false under an interpretation ℐ if, and only if, φ is not true under ℐ.[60][61] In classical logic, these definitions are equivalent, but in nonclassical logics, they are not.[30]
  • Semantic consequence: A sentence ψ of ℒ is a semantic consequence (φ ⊧ ψ) of a sentence φ if there is no interpretation under which φ is true and ψ is not true.[60][61][30]
  • Valid formula (tautology): A sentence φ of ℒ is logically valid (⊧ φ),[j] or a tautology,[66][67][48] if it is true under every interpretation,[60][61] or true in every case.[30]
  • Consistent sentence: A sentence of ℒ is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent.[60][61] An inconsistent formula is also called self-contradictory,[1] and said to be a self-contradiction,[1] or simply a contradiction,[68][69][70] although this latter name is sometimes reserved specifically for statements of the form (φ ∧ ¬φ).[1]

For interpretations (cases) ℐ of ℒ, these definitions are sometimes given:

  • Complete case: A case ℐ is complete if, and only if, either φ is true-in-ℐ or ¬φ is true-in-ℐ, for any φ in ℒ.[30][71]
  • Consistent case: A case ℐ is consistent if, and only if, there is no φ in ℒ such that both φ and ¬φ are true-in-ℐ.[30][72]

For classical logic, which assumes that all cases are complete and consistent,[30] the following theorems apply:

  • For any given interpretation, a given formula is either true or false under it.[61][65]
  • No formula is both true and false under the same interpretation.[61][65]
  • ¬φ is true under ℐ if, and only if, φ is false under ℐ;[61][65] ¬φ is true under ℐ if, and only if, φ is not true under ℐ.[61]
  • If φ and (φ → ψ) are both true under ℐ, then ψ is true under ℐ.[61][65]
  • If ⊧ φ and ⊧ (φ → ψ), then ⊧ ψ.[61]
  • (φ → ψ) is true under ℐ if, and only if, either φ is not true under ℐ, or ψ is true under ℐ.[61]
  • φ ⊧ ψ if, and only if, (φ → ψ) is logically valid, that is, φ ⊧ ψ if, and only if, ⊧ (φ → ψ).[61][65]

Proof systems

Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems,[73][74][75] according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence (⊧),[76] whereas syntactic proof systems rely on syntactic consequence (⊢).[77] Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system.[78] This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one.

Semantic proof systems

 
[Figure: Example of a truth table]

[Figure: A graphical representation of a partially built propositional tableau]

Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊧ ψ, which indicates that if φ is true, then ψ must also be true in every possible interpretation.[78]

Truth tables

A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario.[79] By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory.[80] See § Semantic proof via truth tables.

Semantic tableaux

A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition.[81] It constructs a tree where each branch represents a possible interpretation of the propositions involved.[82] If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology.[35] See § Semantic proof via tableaux.

Syntactic proof systems

 
[Figure: Rules for the propositional sequent calculus LK, in Gentzen notation]

Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ, signifies that ψ can be derived from φ using the rules of the formal system.[78]

Axiomatic systems

An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived.[83] In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms.[84] See the § Jan Łukasiewicz axiomatic proof system example.

Natural deduction

Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning.[85] Each rule reflects a particular logical connective and shows how it can be introduced or eliminated.[85] See the § Natural deduction proof system example.

Sequent calculus

The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas.[86] Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.[86][87]

Semantic proof via truth tables

Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula.[80][46][33] If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation).[80][46] Further, if (and only if) ¬φ is valid, then φ is inconsistent.[68][69][70]

For instance, this table shows that "p → (q ∨ r → (r → ¬p))" is not valid:[46]

p q r q ∨ r r → ¬p (q ∨ r) → (r → ¬p) p → ((q ∨ r) → (r → ¬p))
T T T T F F F
T T F T T T T
T F T T F F F
T F F F T T T
F T T T T T T
F T F T T T T
F F T T T T T
F F F F T T T

The computation of the last column of the third line may be displayed as follows:[46]

p → ((q ∨ r) → (r → ¬p))
T → ((F ∨ T) → (T → ¬T))
T → (T → (T → F))
T → (T → F)
T → F
F

Further, using the theorem that φ ⊧ ψ if, and only if, (φ → ψ) is valid,[61][65] we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: {φ₁, φ₂, …, φₙ} ⊧ ψ if, and only if, we can produce a truth table that comes out all true for the formula ((φ₁ ∧ φ₂ ∧ … ∧ φₙ) → ψ) (that is, if ⊧ ((φ₁ ∧ φ₂ ∧ … ∧ φₙ) → ψ)).[88][89]
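A short program can carry out this exhaustive check. The sketch below (Python; the formula encoding and the helper names are illustrative assumptions, not part of the article's formal apparatus) confirms that the example formula above is not valid, while a tautology such as p ∨ ¬p is:

```python
from itertools import product

def eval_formula(f, case):
    """Evaluate a formula (string atom or ('not'/'and'/'or'/'imp', ...) tuple)
    under a case, i.e. a dict assigning True/False to each atom."""
    if isinstance(f, str):
        return case[f]
    op, *args = f
    vals = [eval_formula(a, case) for a in args]
    if op == 'not': return not vals[0]
    if op == 'and': return vals[0] and vals[1]
    if op == 'or':  return vals[0] or vals[1]
    if op == 'imp': return (not vals[0]) or vals[1]
    raise ValueError(op)

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(a) for a in f[1:]))

def valid(f):
    """True iff f comes out true on every line of its truth table."""
    names = sorted(atoms(f))
    return all(eval_formula(f, dict(zip(names, vals)))
               for vals in product([True, False], repeat=len(names)))

# p -> ((q or r) -> (r -> not p)), the example from the table above
example = ('imp', 'p', ('imp', ('or', 'q', 'r'), ('imp', 'r', ('not', 'p'))))
print(valid(example))                          # False, as the table shows
print(valid(('or', 'p', ('not', 'p'))))        # True: excluded middle
```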

Semantic proof via tableaux

Since truth tables have 2ⁿ lines for n variables, they can be tiresomely long for large values of n.[35] Analytic tableaux are a more efficient, but nevertheless mechanical,[90] semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."[35]

Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below.[48] These rules use "signed formulas", where a signed formula is an expression T X or F X, where X is an (unsigned) formula of the language ℒ.[48] (Informally, T X is read "X is true", and F X is read "X is false".)[48] Their formal semantic definition is that "under any interpretation, a signed formula T X is called true if X is true, and false if X is false, whereas a signed formula F X is called false if X is true, and true if X is false."[48]

Rule 1 (Double Negation): from T ¬¬X, infer T X; from F ¬¬X, infer F X.
Rule 2 (Conjunction): from T (X ∧ Y), infer both T X and T Y; from F (X ∧ Y), branch into F X | F Y.
Rule 3 (Disjunction): from T (X ∨ Y), branch into T X | T Y; from F (X ∨ Y), infer both F X and F Y.
Rule 4 (Implication): from T (X → Y), branch into F X | T Y; from F (X → Y), infer both T X and F Y.

In this notation, rule 2 means that T (X ∧ Y) yields both T X and T Y, whereas F (X ∧ Y) branches into F X and F Y. The notation is to be understood analogously for rules 3 and 4.[48] Often, in tableaux for classical logic, the signed formula notation is simplified so that T φ is written simply as φ, and F φ as ¬φ, which accounts for naming rule 1 the "Rule of Double Negation".[35][90]

One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both   and   for some  , which is to say, a contradiction. In that case, the branch is said to close.[35] If every branch in a tree closes, the tree itself is said to close.[35] In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false.[35] Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.[35]

To construct a tableau for an argument ⟨{φ₁, φ₂, …, φₙ}, ψ⟩, one first writes out the set of premise formulas, {φ₁, φ₂, …, φₙ}, with one formula on each line, signed with T (that is, T φᵢ for each φᵢ in the set);[90] and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ, signed with F (that is, F ψ).[90] One then produces a truth tree (analytic tableau) by using all those lines according to the rules.[90] A closed tree will be proof that the argument was valid, in virtue of the fact that {φ₁, φ₂, …, φₙ} ⊧ ψ if, and only if, {φ₁, φ₂, …, φₙ, ¬ψ} is inconsistent.[90]
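The tableau rules above can also be turned into a small mechanical procedure. The following sketch (Python; the tuple encoding and the function names are assumptions made for illustration, and the presentation is simplified rather than being Smullyan's exact formulation) closes a branch exactly when it contains both T X and F X for some formula X, and tests a formula for being a tautology by building a tableau from its F-signed form:

```python
# Signed-formula tableau sketch.  Formulas: string atoms, or tuples
# ('not', X), ('and', X, Y), ('or', X, Y), ('imp', X, Y).  A signed formula is
# a pair (sign, formula) with sign True for "T" and False for "F".

def expand(signed):
    """Return None for a signed atom, otherwise ('alpha', parts) for a
    non-branching rule or ('beta', parts) for a branching rule."""
    sign, f = signed
    if isinstance(f, str):
        return None
    op = f[0]
    if op == 'not':                      # T ~X -> F X,  F ~X -> T X
        return ('alpha', [(not sign, f[1])])
    if op == 'and':                      # T(X&Y): both;  F(X&Y): branch
        return ('alpha' if sign else 'beta', [(sign, f[1]), (sign, f[2])])
    if op == 'or':                       # T(XvY): branch;  F(XvY): both
        return ('beta' if sign else 'alpha', [(sign, f[1]), (sign, f[2])])
    if op == 'imp':                      # T(X->Y): F X | T Y;  F(X->Y): T X, F Y
        return ('beta' if sign else 'alpha', [(not sign, f[1]), (sign, f[2])])
    raise ValueError(op)

def closes(branch):
    """True if every fully expanded extension of this branch is contradictory."""
    for i, sf in enumerate(branch):
        step = expand(sf)
        if step is None:
            continue
        kind, parts = step
        rest = branch[:i] + branch[i + 1:]
        if kind == 'alpha':                              # extend the same branch
            return closes(rest + parts)
        return all(closes(rest + [p]) for p in parts)    # split into branches
    return any((True, f) in branch and (False, f) in branch for _, f in branch)

def tautology(f):
    # f is a tautology exactly when the tableau started from F f closes
    return closes([(False, f)])

print(tautology(('imp', 'p', ('or', 'p', 'q'))))   # True
print(tautology(('imp', ('or', 'p', 'q'), 'p')))   # False
```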

List of classically valid argument forms

Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold.[33] We write φ ⟚ ψ to denote the equivalence of φ and ψ, that is, as an abbreviation for both φ ⊧ ψ and ψ ⊧ φ;[33] as an aid to reading the symbols, a description of each formula is given. The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it,[33][91] although many authors prefer to read it as "entails",[33][92] or as "models".[93]

Name Sequent Description
Modus Ponens (p → q), p ⊧ q [30] If p then q; p; therefore q
Modus Tollens (p → q), ¬q ⊧ ¬p [30] If p then q; not q; therefore not p
Hypothetical Syllogism (p → q), (q → r) ⊧ (p → r) If p then q; if q then r; therefore, if p then r
Disjunctive Syllogism (p ∨ q), ¬p ⊧ q [94] Either p or q, or both; not p; therefore, q
Constructive Dilemma (p → q), (r → s), (p ∨ r) ⊧ (q ∨ s) If p then q; and if r then s; but p or r; therefore q or s
Destructive Dilemma (p → q), (r → s), (¬q ∨ ¬s) ⊧ (¬p ∨ ¬r) If p then q; and if r then s; but not q or not s; therefore not p or not r
Bidirectional Dilemma (p → q), (r → s), (p ∨ ¬s) ⊧ (q ∨ ¬r) If p then q; and if r then s; but p or not s; therefore q or not r
Simplification (p ∧ q) ⊧ p [30] p and q are true; therefore p is true
Conjunction p, q ⊧ (p ∧ q) [30] p and q are true separately; therefore they are true conjointly
Addition p ⊧ (p ∨ q) [30][94] p is true; therefore the disjunction (p or q) is true
Composition (p → q), (p → r) ⊧ (p → (q ∧ r)) If p then q; and if p then r; therefore if p is true then q and r are true
De Morgan's Theorem (1) ¬(p ∧ q) ⟚ (¬p ∨ ¬q) [30] The negation of (p and q) is equiv. to (not p or not q)
De Morgan's Theorem (2) ¬(p ∨ q) ⟚ (¬p ∧ ¬q) [30] The negation of (p or q) is equiv. to (not p and not q)
Commutation (1) (p ∨ q) ⟚ (q ∨ p) [94] (p or q) is equiv. to (q or p)
Commutation (2) (p ∧ q) ⟚ (q ∧ p) [94] (p and q) is equiv. to (q and p)
Commutation (3) (p ↔ q) ⟚ (q ↔ p) [94] (p iff q) is equiv. to (q iff p)
Association (1) (p ∨ (q ∨ r)) ⟚ ((p ∨ q) ∨ r) [35] p or (q or r) is equiv. to (p or q) or r
Association (2) (p ∧ (q ∧ r)) ⟚ ((p ∧ q) ∧ r) [35] p and (q and r) is equiv. to (p and q) and r
Distribution (1) (p ∧ (q ∨ r)) ⟚ ((p ∧ q) ∨ (p ∧ r)) [94] p and (q or r) is equiv. to (p and q) or (p and r)
Distribution (2) (p ∨ (q ∧ r)) ⟚ ((p ∨ q) ∧ (p ∨ r)) [94] p or (q and r) is equiv. to (p or q) and (p or r)
Double Negation p ⟚ ¬¬p [30][94] p is equivalent to the negation of not p
Transposition (p → q) ⟚ (¬q → ¬p) [30] If p then q is equiv. to if not q then not p
Material Implication (p → q) ⟚ (¬p ∨ q) [94] If p then q is equiv. to not p or q
Material Equivalence (1) (p ↔ q) ⟚ ((p → q) ∧ (q → p)) [94] (p iff q) is equiv. to (if p is true then q is true) and (if q is true then p is true)
Material Equivalence (2) (p ↔ q) ⟚ ((p ∧ q) ∨ (¬p ∧ ¬q)) [94] (p iff q) is equiv. to either (p and q are true) or (both p and q are false)
Material Equivalence (3) (p ↔ q) ⟚ ((p ∨ ¬q) ∧ (¬p ∨ q)) (p iff q) is equiv. to both (p or not q is true) and (not p or q is true)
Exportation ((p ∧ q) → r) ⊧ (p → (q → r)) [95] from (if p and q are true then r is true) we can prove (if q is true then r is true, if p is true)
Importation (p → (q → r)) ⟚ ((p ∧ q) → r) If p then (if q then r) is equivalent to if p and q then r
Tautology (1) p ⟚ (p ∨ p) [94] p is true is equiv. to p is true or p is true
Tautology (2) p ⟚ (p ∧ p) [94] p is true is equiv. to p is true and p is true
Tertium non datur (Law of Excluded Middle) ⊧ (p ∨ ¬p) [30][94] p or not p is true
Law of Non-Contradiction ⊧ ¬(p ∧ ¬p) [30][94] that p and not p is false is a true statement
Explosion (p ∧ ¬p) ⊧ q [30] p and not p; therefore q
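Each row of the table can be confirmed mechanically by checking that no assignment of truth values makes every premise true while the conclusion is false. The short sketch below (Python; the encoding of the two sample rows as lambda functions is purely illustrative) does this for modus tollens and the first De Morgan law:

```python
from itertools import product

def entails(premises, conclusion):
    """Semantic check over two variables: no assignment makes all premises
    true and the conclusion false.  Premises/conclusion are functions of (p, q)."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus Tollens:  (p -> q), not q  |=  not p
print(entails([lambda p, q: (not p) or q, lambda p, q: not q],
              lambda p, q: not p))                                        # True

# De Morgan (1):  not(p and q)  is equivalent to  (not p) or (not q)
print(entails([lambda p, q: not (p and q)], lambda p, q: (not p) or (not q)) and
      entails([lambda p, q: (not p) or (not q)], lambda p, q: not (p and q)))  # True
```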

Example syntactic proof systems

Consider again the logical form of the § Example argument, formalized as in § Formalization:

P → Q, P ⊢ Q

The logical form of this argument is modus ponens, which is a classically valid form. It generalizes schematically. Thus, where φ and ψ may be any propositions at all, rather than only atomic propositions (cf. § Constants and schemata):

φ → ψ, φ ⊢ ψ

Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such set), modus ponens is sufficient to prove all other argument forms in propositional logic, so they may be regarded as derived. Note that this is not true of the extension of propositional logic to other logics like first-order logic: first-order logic requires at least one additional rule of inference in order to obtain completeness.

The significance of argument in formal logic is that one may obtain new truths from established truths. In the first example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another set of propositions. For instance, given the set of propositions  , we can define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed, so  . Also, from the first element of A, last element, as well as modus ponens, R is a consequence, and so  . Because we have not included sufficiently complete axioms, though, nothing else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce  , this one is too weak to prove such a proposition.
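The closure idea in the preceding paragraph can be illustrated with a small fixed-point computation. The sketch below (Python, with an assumed ('imp', X, Y) encoding for conditionals and a deliberately minimal rule set of reiteration plus modus ponens, and an example premise set chosen purely for illustration) computes everything derivable from a given set of propositions:

```python
def deductive_closure(A):
    """Closure of the set A under reiteration and modus ponens only."""
    gamma = set(A)                        # reiteration: A is a subset of the closure
    changed = True
    while changed:
        changed = False
        for f in list(gamma):
            if isinstance(f, tuple) and f[0] == 'imp' and f[1] in gamma and f[2] not in gamma:
                gamma.add(f[2])           # modus ponens: from X and X -> Y, add Y
                changed = True
    return gamma

A = {'P', ('imp', 'P', 'Q'), ('imp', 'Q', 'R')}
print(deductive_closure(A) - A)           # the new consequences: Q and R
```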

Formal structure for example systems

One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations of logical equivalence between propositional formulas. These relationships are determined by means of the available transformation rules, sequences of which are called derivations or proofs.

The following examples of proof systems for a propositional calculus will assume a calculus defined as a formal system ℒ = ℒ(A, Ω, Z, I), where:

  • The alpha set A is a countably infinite set of ℒ's atomic formulas or propositional variables. In the examples to follow, the elements of A are typically the letters p, q, r, and so on.
  • The omega set Ω is a finite set of elements called operator symbols or logical connectives. The set Ω is partitioned into disjoint subsets as Ω = Ω₀ ∪ Ω₁ ∪ … ∪ Ωⱼ ∪ … ∪ Ωₘ, where Ωⱼ is the set of operator symbols of arity j. For instance, a partition of Ω for the typical five connectives would have Ω₁ = {¬} and Ω₂ = {∧, ∨, →, ↔}. Also, the constant logical values are treated as operators of arity zero, so that Ω₀ = {⊤, ⊥}.
  • The zeta set Z is a finite set of transformation rules, called inference rules when they acquire logical applications.
  • The iota set I is a countable set of initial points that are called axioms when they receive logical interpretations.

The language of ℒ is its set of well-formed formulas, inductively defined by the following rules:

  1. Base: Any element of the alpha set A is a formula of ℒ.
  2. If p₁, p₂, …, pⱼ are formulas and f is in Ωⱼ, then (f(p₁, p₂, …, pⱼ)) is a formula.
  3. Closed: Nothing else is a formula of ℒ.

Repeated applications of these rules permit the construction of complex formulas. Examples of formulas that follow these rules include "p" (by rule 1), "¬p" (by rule 2), "q" (by rule 1), and "(¬p ∨ q)" (by rule 2).[k]

In the discussion to follow, after a proof system is defined, a proof is presented as a sequence of numbered lines, with each line consisting of a single formula followed by a reason or justification for introducing that formula. Each premise of the argument, that is, an assumption introduced as a hypothesis of the argument, is listed at the beginning of the sequence and is marked as a "premise" in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see proof-trees).

Natural deduction proof system example

Let ℒ = ℒ(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

  • The alpha set A is a countably infinite set of symbols, for example: A = {p, q, r, s, t, u, …}.
  • The omega set Ω partitions as Ω₁ = {¬} and Ω₂ = {∧, ∨, →, ↔}.

In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the inference rules of a so-called natural deduction system. The particular system presented here has no initial points, which means that its interpretation for logical applications derives its theorems from an empty axiom set.

  • The set of initial points is empty, that is, I = ∅.
  • The set of transformation rules, Z, is described as follows:

Our propositional calculus has eleven inference rules. These rules allow us to derive other true formulas given a set of formulas that are assumed to be true. The first ten simply state that we can infer certain well-formed formulas from other well-formed formulas. The last rule however uses hypothetical reasoning in the sense that in the premise of the rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer a certain other formula. Since the first ten rules do not do this they are usually described as non-hypothetical rules, and the last one as a hypothetical rule.

In describing the transformation rules, we may introduce a metalanguage symbol ⊢. It is basically a convenient shorthand for saying "infer that". The format is Γ ⊢ ψ, in which Γ is a (possibly empty) set of formulas called premises, and ψ is a formula called conclusion. The transformation rule Γ ⊢ ψ means that if every proposition in Γ is a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Considering the rule Conjunction introduction below, whenever Γ has more than one formula, we can always safely reduce it to one formula using conjunction. So, for short, from that time on we may represent Γ as one formula instead of a set. Another omission for convenience is when Γ is an empty set, in which case Γ may not appear.

Inference rules

  • Negation introduction: From (p → q) and (p → ¬q), infer ¬p; that is, {(p → q), (p → ¬q)} ⊢ ¬p.
  • Negation elimination: From ¬p, infer (p → r); that is, {¬p} ⊢ (p → r).
  • Double negation elimination: From ¬¬p, infer p; that is, ¬¬p ⊢ p.
  • Conjunction introduction: From p and q, infer (p ∧ q); that is, {p, q} ⊢ (p ∧ q).
  • Conjunction elimination: From (p ∧ q), infer p, and from (p ∧ q), infer q; that is, (p ∧ q) ⊢ p and (p ∧ q) ⊢ q.
  • Disjunction introduction: From p, infer (p ∨ q).
From q, infer (p ∨ q); that is, p ⊢ (p ∨ q) and q ⊢ (p ∨ q).
  • Disjunction elimination: From (p ∨ q) and (p → r) and (q → r), infer r; that is, {(p ∨ q), (p → r), (q → r)} ⊢ r.
  • Biconditional introduction: From (p → q) and (q → p), infer (p ↔ q); that is, {(p → q), (q → p)} ⊢ (p ↔ q).
  • Biconditional elimination: From (p ↔ q), infer (p → q), and from (p ↔ q), infer (q → p); that is, (p ↔ q) ⊢ (p → q) and (p ↔ q) ⊢ (q → p).
  • Modus ponens (conditional elimination): From p and (p → q), infer q; that is, {p, (p → q)} ⊢ q.
  • Conditional proof (conditional introduction): From [accepting p allows a proof of q], infer (p → q); that is, (p ⊢ q) ⊢ (p → q).

Example of a proof in natural deduction system

  • To be shown that A → A.
  • One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be arranged as follows:
Example of a proof
Number Formula Reason
1 A premise
2 A ∨ A From (1) by disjunction introduction
3 (A ∨ A) ∧ A From (1) and (2) by conjunction introduction
4 A From (3) by conjunction elimination
5 A ⊢ A Summary of (1) through (4)
6 ⊢ A → A From (5) by conditional proof

Interpret A ⊢ A as "Assuming A, infer A". Read ⊢ A → A as "Assuming nothing, infer that A implies A", or "It is a tautology that A implies A", or "It is always true that A implies A".

Jan Łukasiewicz axiomatic proof system example

Let ℒ = ℒ(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

  • The set A, the countably infinite set of symbols that serve to represent logical propositions: A = {p, q, r, s, t, u, …}.
  • The functionally complete set Ω of logical operators (logical connectives and negation) is as follows. Of the three connectives for conjunction, disjunction, and implication (∧, ∨, and →), one can be taken as primitive and the other two can be defined in terms of it and negation (¬).[96] Alternatively, all of the logical operators may be defined in terms of a sole sufficient operator, such as the Sheffer stroke (nand). The biconditional (↔) can of course be defined in terms of conjunction and implication as ((p → q) ∧ (q → p)). Adopting negation and implication as the two primitive operations of a propositional calculus is tantamount to having the omega set Ω partitioned as Ω₁ = {¬} and Ω₂ = {→}. Then (p ∨ q) is defined as (¬p → q), and (p ∧ q) is defined as ¬(p → ¬q).
  • The set I (the set of initial points of logical deduction, i.e., logical axioms) is the axiom system proposed by Jan Łukasiewicz, and used as the propositional-calculus part of a Hilbert system. The axioms are all substitution instances of:
    • (p → (q → p))
    • ((p → (q → r)) → ((p → q) → (p → r)))
    • ((¬p → ¬q) → (q → p))
  • The set Z of transformation rules (rules of inference) is the sole rule modus ponens (i.e., from any formulas of the form p and (p → q), infer q).

This system is used in the Metamath set.mm formal proof database.

Example of a proof in an axiomatic propositional calculus system

We now prove the same theorem (A → A) in the axiomatic system by Jan Łukasiewicz described above, which is an example of a Hilbert-style deductive system for the classical propositional calculus.

The axioms are:

(A1) (p → (q → p))
(A2) ((p → (q → r)) → ((p → q) → (p → r)))
(A3) ((¬p → ¬q) → (q → p))

And the proof is as follows:

  1. (A → ((A → A) → A))         (instance of (A1))
  2. ((A → ((A → A) → A)) → ((A → (A → A)) → (A → A)))         (instance of (A2))
  3. ((A → (A → A)) → (A → A))         (from (1) and (2) by modus ponens)
  4. (A → (A → A))         (instance of (A1))
  5. (A → A)         (from (4) and (3) by modus ponens)
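The five lines above can be machine-checked: each line must either be declared an instance of an axiom or follow from two earlier lines by modus ponens. The following sketch (Python, with an assumed ('imp', X, Y) tuple encoding) verifies exactly that:

```python
def imp(a, b):
    return ('imp', a, b)

A = 'A'
lines = [
    imp(A, imp(imp(A, A), A)),                                   # 1: instance of (A1)
    imp(imp(A, imp(imp(A, A), A)),
        imp(imp(A, imp(A, A)), imp(A, A))),                      # 2: instance of (A2)
    imp(imp(A, imp(A, A)), imp(A, A)),                           # 3: from 1, 2 by MP
    imp(A, imp(A, A)),                                           # 4: instance of (A1)
    imp(A, A),                                                   # 5: from 4, 3 by MP
]
axiom_instances = {0, 1, 3}          # indices of the lines justified as axioms

def by_modus_ponens(goal, earlier):
    # goal follows by MP if some earlier line is X and another is X -> goal
    return any(b == ('imp', a, goal) for a in earlier for b in earlier)

for i, f in enumerate(lines):
    assert i in axiom_instances or by_modus_ponens(f, lines[:i]), f"line {i + 1}"
print("the derivation of (A -> A) checks out")
```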

Soundness and completeness of the rules

The crucial properties of this set of rules are that they are sound and complete. Informally this means that the rules are correct and that no other rules are required. These claims can be made more formal as follows. The proofs for the soundness and completeness of the propositional logic are not themselves proofs in propositional logic; these are theorems in ZFC used as a metatheory to prove properties of propositional logic.

We define a truth assignment as a function that maps propositional variables to true or false. Informally such a truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain statements are true and others are not. The semantics of formulas can then be formalized by defining for which "state of affairs" they are considered to be true, which is what is done by the following definition.

We define when such a truth assignment A satisfies a certain well-formed formula with the following rules:

  • A satisfies the propositional variable P if and only if A(P) = true
  • A satisfies ¬φ if and only if A does not satisfy φ
  • A satisfies (φ ∧ ψ) if and only if A satisfies both φ and ψ
  • A satisfies (φ ∨ ψ) if and only if A satisfies at least one of either φ or ψ
  • A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ
  • A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them
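These clauses translate directly into a recursive evaluator. The sketch below is an illustrative Python rendering of the satisfaction relation just defined (the tuple encoding of formulas is an assumption made for the example):

```python
def satisfies(A, phi):
    """A is a truth assignment (dict from variable names to True/False);
    phi is a string atom or a tuple ('not', x), ('and', x, y), ('or', x, y),
    ('imp', x, y), ('iff', x, y).  Returns True iff A satisfies phi."""
    if isinstance(phi, str):
        return A[phi]
    op = phi[0]
    if op == 'not': return not satisfies(A, phi[1])
    if op == 'and': return satisfies(A, phi[1]) and satisfies(A, phi[2])
    if op == 'or':  return satisfies(A, phi[1]) or satisfies(A, phi[2])
    if op == 'imp': return (not satisfies(A, phi[1])) or satisfies(A, phi[2])
    if op == 'iff': return satisfies(A, phi[1]) == satisfies(A, phi[2])
    raise ValueError(op)

print(satisfies({'P': True, 'Q': False}, ('imp', 'P', 'Q')))           # False
print(satisfies({'P': True, 'Q': False}, ('iff', ('not', 'P'), 'Q')))  # True
```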

With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas. Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies) a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
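Building on the same evaluator (repeated here so the sketch is self-contained), semantic entailment can be rendered as a brute-force check over all truth assignments to the variables occurring in S and φ. Again, this is only an illustrative sketch; the helper names are not part of the formal development:

```python
from itertools import product

def satisfies(A, phi):                     # same recursive evaluator as above
    if isinstance(phi, str):
        return A[phi]
    op, *args = phi
    v = [satisfies(A, x) for x in args]
    return {'not': lambda: not v[0], 'and': lambda: v[0] and v[1],
            'or': lambda: v[0] or v[1], 'imp': lambda: (not v[0]) or v[1],
            'iff': lambda: v[0] == v[1]}[op]()

def variables(phi):
    return {phi} if isinstance(phi, str) else set().union(*(variables(x) for x in phi[1:]))

def semantically_entails(S, phi):
    """True iff every truth assignment satisfying every formula in S satisfies phi."""
    names = sorted(set().union(variables(phi), *(variables(s) for s in S)))
    return all(satisfies(dict(zip(names, vals)), phi)
               for vals in product([True, False], repeat=len(names))
               if all(satisfies(dict(zip(names, vals)), s) for s in S))

# {P -> Q, P} semantically entails Q (the example argument), but {P} does not
print(semantically_entails([('imp', 'P', 'Q'), 'P'], 'Q'))   # True
print(semantically_entails(['P'], 'Q'))                      # False
```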

Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it means for the set of inference rules to be sound and complete:

Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ then S semantically entails φ.

Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ then S syntactically entails φ.

For the above set of rules this is indeed the case.

Sketch of a soundness proof

(For most logical systems, this is the comparatively "simple" direction of proof)

Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B and C range over sentences. For "G syntactically entails A" we write "G proves A". For "G semantically entails A" we write "G implies A".

We want to show: (A)(G) (if G proves A, then G implies A).

We note that "G proves A" has an inductive definition, and that gives us the immediate resources for demonstrating claims of the form "If G proves A, then ...". So our proof proceeds by induction.

  1. Basis. Show: If A is a member of G, then G implies A.
  2. Basis. Show: If A is an axiom, then G implies A.
  3. Inductive step (induction on n, the length of the proof):
    1. Assume for arbitrary G and A that if G proves A in n or fewer steps, then G implies A.
    2. For each possible application of a rule of inference at step n + 1, leading to a new theorem B, show that G implies B.

Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used, Step II involves showing that each of the axioms is a (semantic) logical truth.

The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The proof is simple, since the semantic fact that a set implies any of its members, is also trivial.) The Inductive step will systematically cover all the further sentences that might be provable—by considering each case where we might reach a logical conclusion using an inference rule—and shows that if a new sentence is provable, it is also logically implied. (For example, we might have a rule telling us that from "A" we can derive "A or B". In III.a We assume that if A is provable it is implied. We also know that if A is provable then "A or B" is provable. We have to show that then "A or B" too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But any valuation making A true makes "A or B" true, by the defined semantics for "or". So any valuation which makes all of G true makes "A or B" true. So "A or B" is implied.) Generally, the Inductive step will consist of a lengthy but simple case-by-case analysis of all the rules of inference, showing that each "preserves" semantic implication.

By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or following by a rule; so if all of those are semantically implied, the deduction calculus is sound.

Sketch of completeness proof

(This is usually the much harder direction of proof.)

We adopt the same notational conventions as above.

We want to show: If G implies A, then G proves A. We proceed by contraposition: We show instead that if G does not prove A then G does not imply A. If we show that there is a model where A does not hold despite G being true, then obviously G does not imply A. The idea is to build such a model out of our very assumption that G does not prove A.

  1. G does not prove A. (Assumption)
  2. If G does not prove A, then we can construct an (infinite) maximal set, G*, which is a superset of G and which also does not prove A.
    1. Place an ordering (with order type ω) on all the sentences in the language (e.g., shortest first, and equally long ones in extended alphabetical ordering), and number them (E1, E2, ...)
    2. Define a series Gn of sets (G0, G1, ...) inductively:
      1. G0 = G
      2. If Gn ∪ {En+1} proves A, then Gn+1 = Gn
      3. If Gn ∪ {En+1} does not prove A, then Gn+1 = Gn ∪ {En+1}
    3. Define G* as the union of all the Gn. (That is, G* is the set of all the sentences that are in any Gn.)
    4. It can be easily shown that
      1. G* contains (is a superset of) G (by (b.i));
      2. G* does not prove A (because the proof would contain only finitely many sentences and when the last of them is introduced in some Gn, that Gn would prove A contrary to the definition of Gn); and
      3. G* is a maximal set with respect to A: If any more sentences whatever were added to G*, it would prove A. (Because if it were possible to add any more sentences, they should have been added when they were encountered during the construction of the Gn, again by definition)
  3. If G* is a maximal set with respect to A, then it is truth-like. This means that it contains C if and only if it does not contain ¬C; if it contains C and contains "If C then B" then it also contains B; and so forth. In order to show this, one has to show the axiomatic system is strong enough for the following:
    • For any formulas C and D, if it proves both C and ¬C, then it proves D. From this it follows that a maximal set with respect to A cannot prove both C and ¬C, as otherwise it would prove A.
    • For any formulas C and D, if it proves both C → D and ¬C → D, then it proves D. This is used, together with the deduction theorem, to show that for any formula, either it or its negation is in G*: Let B be a formula not in G*; then G* with the addition of B proves A. Thus from the deduction theorem it follows that G* proves B → A. But suppose ¬B were also not in G*; then by the same logic G* also proves ¬B → A; but then G* proves A, which we have already shown to be false.
    • For any formulas C and D, if it proves C and D, then it proves C → D.
    • For any formulas C and D, if it proves C and ¬D, then it proves ¬(C → D).
    • For any formulas C and D, if it proves ¬C, then it proves C → D.
    If additional logical operations (such as conjunction and/or disjunction) are part of the vocabulary as well, then there are additional requirements on the axiomatic system (e.g. that if it proves C and D, it would also prove their conjunction).
  4. If G* is truth-like, there is a G*-canonical valuation of the language: one that makes every sentence in G* true and everything outside G* false while still obeying the laws of semantic composition in the language. The requirement that it is truth-like is needed to guarantee that the laws of semantic composition in the language will be satisfied by this truth assignment.
  5. A G*-canonical valuation will make our original set G all true, and make A false.
  6. If there is a valuation on which G is true and A is false, then G does not (semantically) imply A.

Thus every system that has modus ponens as an inference rule, and proves the following theorems (including substitutions thereof) is complete:

  • p → (¬p → q)
  • (p → q) → ((¬p → q) → q)
  • p → (q → (p → q))
  • p → (¬q → ¬(p → q))
  • ¬p → (p → q)
  • p → p
  • p → (q → p)
  • (p → (q → r)) → ((p → q) → (p → r))

The first five are used for the satisfaction of the five conditions in stage III above, and the last three for proving the deduction theorem.

Example

As an example, it can be shown that, like any other tautology, the three axioms of the classical propositional calculus system described earlier can be proven in any system that satisfies the above, namely one that has modus ponens as an inference rule and proves the above eight theorems (including substitutions thereof). Out of the eight theorems, the last two are two of the three axioms; the third axiom, ((¬p → ¬q) → (q → p)), can be proven as well, as we now show.

For the proof we may use the hypothetical syllogism theorem (in the form relevant for this axiomatic system), since it only relies on the two axioms that are already in the above set of eight theorems. The proof then is as follows:

  1.         (instance of the 7th theorem)
  2.         (instance of the 7th theorem)
  3.         (from (1) and (2) by modus ponens)
  4.         (instance of the hypothetical syllogism theorem)
  5.         (instance of the 5th theorem)
  6.         (from (5) and (4) by modus ponens)
  7.         (instance of the 2nd theorem)
  8.         (instance of the 7th theorem)
  9.         (from (7) and (8) by modus ponens)
  10.  
            (instance of the 8th theorem)
  11.         (from (9) and (10) by modus ponens)
  12.         (from (3) and (11) by modus ponens)
  13.         (instance of the 8th theorem)
  14.         (from (12) and (13) by modus ponens)
  15.         (from (6) and (14) by modus ponens)
Verifying completeness for the classical propositional calculus system

We now verify that the classical propositional calculus system described earlier can indeed prove the required eight theorems mentioned above. We use several lemmas proven here:

(DN1) ¬¬p → p - Double negation (one direction)
(DN2) p → ¬¬p - Double negation (another direction)
(HS1) (q → r) → ((p → q) → (p → r)) - one form of Hypothetical syllogism
(HS2) (p → q) → ((q → r) → (p → r)) - another form of Hypothetical syllogism
(TR1) (p → q) → (¬q → ¬p) - Transposition
(TR2)   - another form of transposition.
(L1) p → ((p → q) → q)
(L3)  

We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps.

  • p → (¬p → q) - proof:
    1.         (instance of (A1))
    2.         (instance of (TR1))
    3.         (from (1) and (2) using the hypothetical syllogism metatheorem)
    4.         (instance of (DN1))
    5.         (instance of (HS1))
    6.         (from (4) and (5) using modus ponens)
    7.         (from (3) and (6) using the hypothetical syllogism metatheorem)
  • (p → q) → ((¬p → q) → q) - proof:
    1.         (instance of (HS1))
    2.         (instance of (L3))
    3.         (instance of (HS1))
    4.         (from (2) and (3) by modus ponens)
    5.         (from (1) and (4) using the hypothetical syllogism metatheorem)
    6.         (instance of (TR2))
    7.         (instance of (HS2))
    8.         (from (6) and (7) using modus ponens)
    9.         (from (5) and (8) using the hypothetical syllogism metatheorem)
  • p → (q → (p → q)) - proof:
    1. q → (p → q)         (instance of (A1))
    2. (q → (p → q)) → (p → (q → (p → q)))         (instance of (A1))
    3. p → (q → (p → q))         (from (1) and (2) using modus ponens)
  • p → (¬q → ¬(p → q)) - proof:
    1. p → ((p → q) → q)         (instance of (L1))
    2. ((p → q) → q) → (¬q → ¬(p → q))         (instance of (TR1))
    3. p → (¬q → ¬(p → q))         (from (1) and (2) using the hypothetical syllogism metatheorem)
  • ¬p → (p → q) - proof:
    1. ¬p → (¬q → ¬p)         (instance of (A1))
    2. (¬q → ¬p) → (p → q)         (instance of (A3))
    3. ¬p → (p → q)         (from (1) and (2) using the hypothetical syllogism metatheorem)
  • p → p - proof given in the proof example above
  • p → (q → p) - axiom (A1)
  • (p → (q → r)) → ((p → q) → (p → r)) - axiom (A2)
Another outline for a completeness proof

If a formula is a tautology, then there is a truth table for it which shows that each valuation yields the value true for the formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth or falsity of the subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the logic is complete.

More complex axiomatic proof system example

It is possible to define another version of propositional calculus, in which the behavior of the logical operators is fixed almost entirely by axioms, and which uses only one inference rule.

Axioms

Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:

Axioms
Name     Axiom schema                                    Description
THEN-1   φ → (χ → φ)                                     Add hypothesis χ, implication introduction
THEN-2   (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))            Distribute hypothesis χ over implication
AND-1    φ ∧ χ → φ                                       Eliminate conjunction
AND-2    φ ∧ χ → χ                                       Eliminate conjunction
AND-3    φ → (χ → (φ ∧ χ))                               Introduce conjunction
OR-1     φ → φ ∨ χ                                       Introduce disjunction
OR-2     χ → φ ∨ χ                                       Introduce disjunction
OR-3     (φ → ψ) → ((χ → ψ) → (φ ∨ χ → ψ))               Eliminate disjunction
NOT-1    (φ → χ) → ((φ → ¬χ) → ¬φ)                       Introduce negation
NOT-2    φ → (¬φ → χ)                                    Eliminate negation
NOT-3    φ ∨ ¬φ                                          Excluded middle, classical logic
IFF-1    (φ ↔ χ) → (φ → χ)                               Eliminate equivalence
IFF-2    (φ ↔ χ) → (χ → φ)                               Eliminate equivalence
IFF-3    (φ → χ) → ((χ → φ) → (φ ↔ χ))                   Introduce equivalence
  • Axiom THEN-2 may be considered to be a "distributive property of implication with respect to implication."
  • Axioms AND-1 and AND-2 correspond to "conjunction elimination". The relation between AND-1 and AND-2 reflects the commutativity of the conjunction operator.
  • Axiom AND-3 corresponds to "conjunction introduction."
  • Axioms OR-1 and OR-2 correspond to "disjunction introduction." The relation between OR-1 and OR-2 reflects the commutativity of the disjunction operator.
  • Axiom NOT-1 corresponds to "reductio ad absurdum."
  • Axiom NOT-2 says that "anything can be deduced from a contradiction."
  • Axiom NOT-3 is called "tertium non datur" (Latin: "a third is not given") and reflects the semantic valuation of propositional formulas: a formula can have a truth value of either true or false. There is no third truth value, at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.
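
Each schema above stands for infinitely many axioms: substituting concrete well-formed formulas for φ, χ, and ψ (for example, φ := A and χ := B ∨ C in THEN-1 gives the instance A → ((B ∨ C) → A)) produces an axiom. Since every such instance is a classical tautology, it can be checked mechanically by running through all truth-value assignments. The following Python fragment is a minimal sketch of such a check; the helper names (implies, iff, is_tautology) and the encoding of schemas as Python predicates are invented for this illustration rather than drawn from any standard presentation.

  from itertools import product

  def implies(a, b):      # classical material implication as a truth function
      return (not a) or b

  def iff(a, b):          # classical biconditional as a truth function
      return a == b

  def is_tautology(formula, arity):
      # `formula` is a Python predicate over `arity` Boolean arguments;
      # it is a tautology iff it returns True on every assignment.
      return all(formula(*values) for values in product([False, True], repeat=arity))

  # THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
  then_2 = lambda p, c, s: implies(implies(p, implies(c, s)),
                                   implies(implies(p, c), implies(p, s)))

  # NOT-3: φ ∨ ¬φ (excluded middle)
  not_3 = lambda p: p or (not p)

  print(is_tautology(then_2, 3), is_tautology(not_3, 1))   # prints: True True

A check of this kind confirms the axioms semantically; it does not, of course, replace the syntactic derivations discussed in this section.
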
Inference rule

The inference rule is modus ponens:

  φ, φ → χ ⊢ χ.
Meta-inference rule

Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to the right of the turnstile. Then the deduction theorem can be stated as follows:

If the sequence
  φ₁, φ₂, ..., φₙ, χ ⊢ ψ
has been demonstrated, then it is also possible to demonstrate the sequence
  φ₁, φ₂, ..., φₙ ⊢ χ → ψ.

This deduction theorem (DT) is not itself formulated within propositional calculus: it is not a theorem of propositional calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems about the soundness or completeness of propositional calculus.

On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof inference rule which is part of the first version of propositional calculus introduced in this article.
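
For instance (a simple illustration of DT in use), the sequent A, B ⊢ A is demonstrable outright, since its conclusion is one of its hypotheses; one application of DT then yields A ⊢ B → A, and a second application yields ⊢ A → (B → A), which is an instance of axiom THEN-1.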

The converse of DT is also valid:

If the sequence
  φ₁, φ₂, ..., φₙ ⊢ χ → ψ
has been demonstrated, then it is also possible to demonstrate the sequence
  φ₁, φ₂, ..., φₙ, χ ⊢ ψ

In fact, the validity of the converse of DT is almost trivial compared to that of DT:

If
  φ₁, ..., φₙ ⊢ χ → ψ
then
1:   φ₁, ..., φₙ, χ ⊢ χ → ψ
2:   φ₁, ..., φₙ, χ ⊢ χ
and from (1) and (2) can be deduced
3:   φ₁, ..., φₙ, χ ⊢ ψ
by means of modus ponens, Q.E.D.

The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example, by axiom AND-1 we have,

  ⊢ φ ∧ χ → φ

which can be transformed by means of the converse of the deduction theorem into

  φ ∧ χ ⊢ φ

which tells us that the inference rule

  φ ∧ χ
  ∴ φ

is admissible. This inference rule is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propositional calculus.
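
The same transformation applies to the other axioms. To give one more illustration in the same spirit: axiom OR-1 gives ⊢ φ → φ ∨ χ, the converse of DT turns this into φ ⊢ φ ∨ χ, and the resulting admissible rule, "from φ, infer φ ∨ χ", is disjunction introduction.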

Example of a proof in the more complex axiomatic system

The following is an example of a (syntactical) demonstration, involving only axioms THEN-1 and THEN-2:

Prove: A → A   (Reflexivity of implication).

Proof:

  1.  (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))
    Axiom THEN-2 with φ = A, χ = B → A, ψ = A
  2.  A → ((B → A) → A)
    Axiom THEN-1 with φ = A, χ = B → A
  3.  (A → (B → A)) → (A → A)
    From (1) and (2) by modus ponens.
  4.  A → (B → A)
    Axiom THEN-1 with φ = A, χ = B
  5.  A → A
    From (3) and (4) by modus ponens.
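
For comparison (an informal observation rather than part of the demonstration above), the deduction theorem reaches the same result much more directly: the sequent A ⊢ A is demonstrable outright, since its conclusion is among its hypotheses, and one application of DT turns it into ⊢ A → A.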

Solvers

One notable difference between propositional calculus and predicate calculus is that the satisfiability of a propositional formula is decidable.[97] Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., the DPLL algorithm, 1962; the Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended SAT-solver algorithms to work with propositions containing arithmetic expressions; these are the satisfiability modulo theories (SMT) solvers.
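
The following Python function is a minimal sketch of the core DPLL idea, unit propagation plus case splitting, written for illustration rather than taken from any published solver; the function name, the clause encoding (lists of nonzero integers, a negative sign standing for negation), and all variable names are invented for this example.

  def dpll(clauses, assignment=None):
      # Return a satisfying set of literals, or None if the clauses are unsatisfiable.
      if assignment is None:
          assignment = set()
      # Unit propagation: keep assigning literals forced by one-literal clauses.
      changed = True
      while changed:
          changed = False
          simplified = []
          for clause in clauses:
              if any(lit in assignment for lit in clause):
                  continue                       # clause already satisfied
              remaining = [lit for lit in clause if -lit not in assignment]
              if not remaining:
                  return None                    # every literal falsified: conflict
              if len(remaining) == 1:
                  assignment.add(remaining[0])   # forced assignment
                  changed = True
              simplified.append(remaining)
          clauses = simplified
      if not clauses:
          return assignment                      # all clauses satisfied
      # Case split: branch on the first literal of the first unresolved clause.
      literal = clauses[0][0]
      for choice in (literal, -literal):
          result = dpll(clauses, assignment | {choice})
          if result is not None:
              return result
      return None

  # Example: (p ∨ q) ∧ (¬p ∨ q) ∧ (¬q ∨ r), encoding p = 1, q = 2, r = 3.
  print(dpll([[1, 2], [-1, 2], [-2, 3]]))        # e.g. {1, 2, 3}

Production solvers add conflict-driven clause learning, watched literals, and branching heuristics (as in Chaff), but a skeleton of this kind already decides any propositional formula given in conjunctive normal form.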

Notes

  1. ^ Many sources write this with a definite article, as the propositional calculus, while others just call it propositional calculus with no article.
  2. ^ The "or both" makes it clear[30] that the connective is an inclusive (logical) disjunction rather than an exclusive or, which is the more common reading of "or" in everyday English.
  3. ^ The set of premises may be the empty set;[33][34] an argument from an empty set of premises is valid if, and only if, the conclusion is a tautology.[33][34]
  4. ^ The turnstile (for syntactic consequence) is of a higher level than the comma (for premise combination), which is of a higher level than the arrow (for material implication), so no parentheses are needed to interpret this formula.[40]
  5. ^ A very general and abstract syntax is given here, following the notation in the SEP,[2] but including the third definition, which is very commonly given explicitly by other sources, such as Gillon.[10] As noted elsewhere in the article, languages variously compose their set of atomic propositional variables from uppercase or lowercase letters (often focusing on P/p, Q/q, and R/r), with or without subscript numerals; and in their set of connectives, they may include either the full set of five typical connectives, {∧, ∨, →, ↔, ¬}, or any of the truth-functionally complete subsets of it. (And, of course, they may also use any of the notational variants of these connectives.)
  6. ^ Note that the phrase "principle of composition" has referred to other things in other contexts, and even in the context of logic, since Bertrand Russell used it to refer to the principle that "a proposition which implies each of two propositions implies them both."[49]
  7. ^ The name "interpretation" is used by some authors and the name "case" by others; this article uses the two terms interchangeably.
  8. ^ A truth-functionally complete set of connectives[2] is also called simply functionally complete, or adequate for truth-functional logic,[35] or expressively adequate,[64] or simply adequate.[35][64]
  9. ^ Some of these definitions use the word "interpretation", and speak of sentences/formulas being true or false "under" it, while others use the word "case", and speak of sentences/formulas being true or false "in" it. Both terminological conventions appear in reliable published sources, and both are retained in this article.
  10. ^ Conventionally  , with nothing to the left of the turnstile, is used to symbolize a tautology. It may be interpreted as saying that   is a semantic consequence of the empty set of formulae, i.e.,  , but with the empty brackets omitted for simplicity;[33] which is just the same as to say that it is a tautology, i.e., that there is no interpretation under which it is false.[33]
  11. ^ Formally, rule 2 obtains formulas in Polish notation, i.e.   in this example. For convenience, we will use the common infix notation instead in this and all following examples.

References

  1. ^ a b c d e f g h i j k l m n o p q r s t u v w x y "Propositional Logic | Internet Encyclopedia of Philosophy". Retrieved 22 March 2024.
  2. ^ a b c d e f g h i j k l m n o p q r s t u v Franks, Curtis (2023), "Propositional Logic", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Fall 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  3. ^ a b Weisstein, Eric W. "Propositional Calculus". mathworld.wolfram.com. Retrieved 22 March 2024.
  4. ^ a b Bělohlávek, Radim; Dauben, Joseph Warren; Klir, George J. (2017). Fuzzy logic and mathematics: a historical perspective. New York, NY, United States of America: Oxford University Press. p. 463. ISBN 978-0-19-020001-5.
  5. ^ a b Manzano, María (2005). Extensions of first order logic. Cambridge tracts in theoretical computer science (Digitally printed first paperback version ed.). Cambridge: Cambridge University Press. p. 180. ISBN 978-0-521-35435-6.
  6. ^ a b McGrath, Matthew; Frank, Devin (2023), "Propositions", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  7. ^ "Predicate Logic". www3.cs.stonybrook.edu. Retrieved 22 March 2024.
  8. ^ "Philosophy 404: Lecture Five". www.webpages.uidaho.edu. Retrieved 22 March 2024.
  9. ^ a b c "3.1 Propositional Logic". www.teach.cs.toronto.edu. Retrieved 22 March 2024.
  10. ^ a b c d e f g h i Davis, Steven; Gillon, Brendan S., eds. (2004). Semantics: a reader. New York: Oxford University Press. ISBN 978-0-19-513697-5.
  11. ^ a b c d e Plato, Jan von (2013). Elements of logical reasoning (1. publ ed.). Cambridge: Cambridge University press. pp. 9, 32, 121. ISBN 978-1-107-03659-8.
  12. ^ a b "Propositional Logic". www.cs.miami.edu. Retrieved 22 March 2024.
  13. ^ Plato, Jan von (2013). Elements of logical reasoning (1. publ ed.). Cambridge: Cambridge University press. p. 9. ISBN 978-1-107-03659-8.
  14. ^ a b Weisstein, Eric W. "Connective". mathworld.wolfram.com. Retrieved 22 March 2024.
  15. ^ "Propositional Logic | Brilliant Math & Science Wiki". brilliant.org. Retrieved 20 August 2020.
  16. ^ Bobzien, Susanne (1 January 2016). "Ancient Logic". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University – via Stanford Encyclopedia of Philosophy.
  17. ^ "Propositional Logic | Internet Encyclopedia of Philosophy". Retrieved 20 August 2020.
  18. ^ Bobzien, Susanne (2020), "Ancient Logic", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2020 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  19. ^ Peckhaus, Volker (1 January 2014). "Leibniz's Influence on 19th Century Logic". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University – via Stanford Encyclopedia of Philosophy.
  20. ^ Hurley, Patrick (2007). A Concise Introduction to Logic 10th edition. Wadsworth Publishing. p. 392.
  21. ^ Beth, Evert W.; "Semantic entailment and formal derivability", series: Mededelingen van de Koninklijke Nederlandse Akademie van Wetenschappen, Afdeling Letterkunde, Nieuwe Reeks, vol. 18, no. 13, Noord-Hollandsche Uitg. Mij., Amsterdam, 1955, pp. 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University Press, 1969
  22. ^ a b Truth in Frege
  23. ^ a b c "Russell: the Journal of Bertrand Russell Studies".
  24. ^ Anellis, Irving H. (2012). "Peirce's Truth-functional Analysis and the Origin of the Truth Table". History and Philosophy of Logic. 33: 87–97. doi:10.1080/01445340.2011.621702. S2CID 170654885.
  25. ^ "Part2Mod1: LOGIC: Statements, Negations, Quantifiers, Truth Tables". www.math.fsu.edu. Retrieved 22 March 2024.
  26. ^ "Lecture Notes on Logical Organization and Critical Thinking". www2.hawaii.edu. Retrieved 22 March 2024.
  27. ^ "Logical Connectives". sites.millersville.edu. Retrieved 22 March 2024.
  28. ^ "Lecture1". www.cs.columbia.edu. Retrieved 22 March 2024.
  29. ^ a b c d "Introduction to Logic - Chapter 2". intrologic.stanford.edu. Retrieved 22 March 2024.
  30. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax Beall, Jeffrey C. (2010). Logic: the basics. The basics (1. publ ed.). London: Routledge. pp. 6, 8, 14–16, 19–20, 44–48, 50–53, 56. ISBN 978-0-203-85155-5.
  31. ^ "Watson". watson.latech.edu. Retrieved 22 March 2024.
  32. ^ "Introduction to Theoretical Computer Science, Chapter 1". www.cs.odu.edu. Retrieved 22 March 2024.
  33. ^ a b c d e f g h i j k l m n o p q r s t Bostock, David (1997). Intermediate logic. Oxford : New York: Clarendon Press ; Oxford University Press. pp. 4–5, 8–13, 18–19, 22, 27, 29. ISBN 978-0-19-875141-0.
  34. ^ a b c d e f Allen, Colin; Hand, Michael (2022). Logic primer (3rd ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-54364-4.
  35. ^ a b c d e f g h i j k l m n o p q r s Howson, Colin (1997). Logic with trees: an introduction to symbolic logic. London ; New York: Routledge. pp. ix, x, 5–6, 15–16, 20, 24–29, 38, 42–43, 47. ISBN 978-0-415-13342-5.
  36. ^ Stojnić, Una (2017). "One's Modus Ponens: Modality, Coherence and Logic". Philosophy and Phenomenological Research. 95 (1): 167–214. ISSN 0031-8205.
  37. ^ Dutilh Novaes, Catarina (2022), Zalta, Edward N.; Nodelman, Uri (eds.), "Argument and Argumentation", The Stanford Encyclopedia of Philosophy (Fall 2022 ed.), Metaphysics Research Lab, Stanford University, retrieved 5 April 2024
  38. ^ a b c d e "Validity and Soundness | Internet Encyclopedia of Philosophy". Retrieved 5 April 2024.
  39. ^ Pelletier, Francis Jeffry; Hazen, Allen (2024), "Natural Deduction Systems in Logic", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Spring 2024 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  40. ^ a b Restall, Greg (2018), "Substructural Logics", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2018 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  41. ^ a b c "Compactness | Internet Encyclopedia of Philosophy". Retrieved 22 March 2024.
  42. ^ a b "Lecture Topics for Discrete Math Students". math.colorado.edu. Retrieved 22 March 2024.
  43. ^ Paseau, Alexander; Pregel, Fabian (2023), "Deductivism in the Philosophy of Mathematics", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Fall 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  44. ^ "Compactness | Internet Encyclopedia of Philosophy". Retrieved 22 March 2024.
  45. ^ a b Demey, Lorenz; Kooi, Barteld; Sack, Joshua (2023), "Logic and Probability", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Fall 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
  46. ^ a b c d e f g h Kleene, Stephen Cole (2002). Mathematical logic (Dover ed.). Mineola, N.Y: Dover Publications. ISBN 978-0-486-42533-7.
  47. ^ a b c Humberstone, Lloyd (2011). The connectives. Cambridge, Mass: MIT Press. pp. 118, 702. ISBN 978-0-262-01654-4. OCLC 694679197.
  48. ^ a b c d e f g h i j Smullyan, Raymond M. (1995). First-order logic. New York: Dover. pp. 5, 11, 14. ISBN 978-0-486-68370-6.
  49. ^ Russell, Bertrand (2010). Principles of mathematics. Routledge classics. London: Routledge. p. 17. ISBN 978-0-415-48741-2.
  50. ^ a b Lande, Nelson P. (2013). Classical logic and its rabbit holes: a first course. Indianapolis, Ind: Hackett Publishing Co., Inc. p. 20. ISBN 978-1-60384-948-7.
  51. ^ Goldrei, Derek (2005). Propositional and predicate calculus: a model of argument. London: Springer. p. 69. ISBN 978-1-85233-921-0.
  52. ^ "Propositional Logic". www.cs.rochester.edu. Retrieved 22 March 2024.
  53. ^ "Propositional calculus". www.cs.cornell.edu. Retrieved 22 March 2024.
  54. ^ a b Shramko, Yaroslav; Wansing, Heinrich (2021), "Truth Values", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Winter 2021 ed.), Metaphysics Research Lab, Stanford University, retrieved 23 March 2024
  55. ^ Metcalfe, David; Powell, John (2011). "Should doctors spurn Wikipedia?". Journal of the Royal Society of Medicine. 104 (12): 488–489. doi:10.1258/jrsm.2011.110227. ISSN 0141-0768. PMC 3241521. PMID 22179287.
  56. ^ Ayers, Phoebe; Matthews, Charles; Yates, Ben (2008). How Wikipedia works: and how you can be a part of it. San Francisco: No Starch Press. p. 22. ISBN 978-1-59327-176-3. OCLC 185698411.
  57. ^ Shapiro, Stewart; Kouri Kissel, Teresa (2024), Zalta, Edward N.; Nodelman, Uri (eds.), "Classical Logic", The Stanford Encyclopedia of Philosophy (Spring 2024 ed.), Metaphysics Research Lab, Stanford University, retrieved 25 March 2024
  58. ^ a b c d Landman, Fred (1991). "Structures for Semantics". Studies in Linguistics and Philosophy. 45: 127. doi:10.1007/978-94-011-3212-1. ISBN 978-0-7923-1240-6. ISSN 0924-4662.
  59. ^ Nascimento, Marco Antonio Chaer (2015). Frontiers in quantum methods and applications in chemistry and physics: selected proceedings of QSCP-XVIII (Paraty, Brazil, December, 2013). Progress in theoretical chemistry and physics. International Workshop on Quantum Systems in Chemistry and Physics. Cham: Springer. p. 255. ISBN 978-3-319-14397-2.
  60. ^ a b c d e f g Chowdhary, K.R. (2020). "Fundamentals of Artificial Intelligence". SpringerLink: 31–34. doi:10.1007/978-81-322-3972-7. ISBN 978-81-322-3970-3.
  61. ^ a b c d e f g h i j k l m n o p q r s t Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press. ISBN 0-520-02356-0.
  62. ^ Aloni, Maria (2023), "Disjunction", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Spring 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 23 March 2024
  63. ^ Levin, Oscar. Propositional Logic.
  64. ^ a b Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Defines "expressively adequate", shortened to "adequate set of connectives" in a section heading.)
  65. ^ a b c d e f g Rogers, Robert L. (1971). Mathematical Logic and Formalized Theories. Elsevier. pp. 38–39. doi:10.1016/c2013-0-11894-6. ISBN 978-0-7204-2098-2.
  66. ^ "6. Semantics of Propositional Logic — Logic and Proof 3.18.4 documentation". leanprover.github.io. Retrieved 28 March 2024.
  67. ^ "Knowledge Representation and Reasoning: Basics of Logics". www.emse.fr. Retrieved 28 March 2024.
  68. ^ a b "1.4: Tautologies and contradictions". Mathematics LibreTexts. 9 September 2021. Retrieved 29 March 2024.
  69. ^ a b Sylvestre, Jeremy. EF Tautologies and contradictions.
  70. ^ a b DeLancey, Craig; Woodrow, Jenna (2017). Elementary Formal Logic (1 ed.). Pressbooks.
  71. ^ Dix, J.; Fisher, Michael; Novak, Peter, eds. (2010). Computational logic in multi-agent systems: 10th international workshop, CLIMA X, Hamburg, Germany, September 9-10, 2009: revised selected and invited papers. Lecture notes in computer science. Berlin ; New York: Springer. p. 49. ISBN 978-3-642-16866-6. OCLC 681481210.
  72. ^ Prakken, Henry; Bistarelli, Stefano; Santini, Francesco; Taticchi, Carlo, eds. (2020). Computational models of argument: proceedings of comma 2020. Frontiers in artificial intelligence and applications. Washington: IOS Press. p. 252. ISBN 978-1-64368-106-1.
  73. ^ Awodey, Steve; Frost-Arnold, Greg, eds. (2024). Rudolf Carnap: Studies in Semantics: The Collected Works of Rudolf Carnap, Volume 7. New York: Oxford University Press. pp. xxvii. ISBN 978-0-19-289487-8.
  74. ^ Harel, Guershon; Stylianides, Andreas J., eds. (2018). Advances in Mathematics Education Research on Proof and Proving: An International Perspective. ICME-13 Monographs (1st ed. 2018 ed.). Cham: Springer International Publishing : Imprint: Springer. p. 181. ISBN 978-3-319-70996-3.
  75. ^ DeLancey, Craig (2017). "A Concise Introduction to Logic: §4. Proofs". Milne Publishing. Retrieved 23 March 2024.
  76. ^ Ferguson, Thomas Macaulay; Priest, Graham (23 June 2016), "semantic consequence", A Dictionary of Logic, Oxford University Press, doi:10.1093/acref/9780191816802.001.0001, ISBN 978-0-19-181680-2, retrieved 23 March 2024
  77. ^ Ferguson, Thomas Macaulay; Priest, Graham (23 June 2016), "syntactic consequence", A Dictionary of Logic, Oxford University Press, doi:10.1093/acref/9780191816802.001.0001, ISBN 978-0-19-181680-2, retrieved 23 March 2024
  78. ^ a b c Cook, Roy T. (2009). A dictionary of philosophical logic. Edinburgh: Edinburgh University Press. pp. 82, 176. ISBN 978-0-7486-2559-8.
  79. ^ "Truth table | Boolean, Operators, Rules | Britannica". www.britannica.com. 14 March 2024. Retrieved 23 March 2024.
  80. ^ a b c "MathematicalLogic". www.cs.yale.edu. Retrieved 23 March 2024.
  81. ^ "Analytic Tableaux". www3.cs.stonybrook.edu. Retrieved 23 March 2024.
  82. ^ "Formal logic - Semantic Tableaux, Proofs, Rules | Britannica". www.britannica.com. Retrieved 23 March 2024.
  83. ^ "Axiomatic method | Logic, Proofs & Foundations | Britannica". www.britannica.com. Retrieved 23 March 2024.
  84. ^ "Propositional Logic". mally.stanford.edu. Retrieved 23 March 2024.
  85. ^ a b "Natural Deduction | Internet Encyclopedia of Philosophy". Retrieved 23 March 2024.
  86. ^ a b Weisstein, Eric W. "Sequent Calculus". mathworld.wolfram.com. Retrieved 23 March 2024.
  87. ^ "Interactive Tutorial of the Sequent Calculus". logitext.mit.edu. Retrieved 23 March 2024.
  88. ^ Lucas, Peter; Gaag, Linda van der (1991). Principles of expert systems (PDF). International computer science series. Wokingham, England ; Reading, Mass: Addison-Wesley. p. 26. ISBN 978-0-201-41640-4.
  89. ^ Bachmair, Leo (2009). "CSE541 Logic in Computer Science" (PDF). Stony Brook University.{{cite web}}: CS1 maint: url-status (link)
  90. ^ a b c d e f Restall, Greg (2010). Logic: an introduction. Fundamentals of philosophy. London: Routledge. pp. 5, 55–60, 69. ISBN 978-0-415-40068-8.
  91. ^ Lawson, Mark V. (2019). A first course in logic. Boca Raton: CRC Press, Taylor & Francis Group. pp. example 1.58. ISBN 978-0-8153-8664-3.
  92. ^ Dean, Neville (2003). Logic and language. Basingstoke: Palgrave Macmillan. p. 66. ISBN 978-0-333-91977-4.
  93. ^ Chiswell, Ian; Hodges, Wilfrid (2007). Mathematical logic. Oxford texts in logic. Oxford: Oxford university press. p. 3. ISBN 978-0-19-857100-1.
  94. ^ a b c d e f g h i j k l m n o Hodges, Wilfrid (2001). Logic (2 ed.). London: Penguin Books. pp. 130–131. ISBN 978-0-14-100314-6.
  95. ^ Toida, Shunichi (2 August 2009). "Proof of Implications". CS381 Discrete Structures/Discrete Mathematics Web Course Material. Department of Computer Science, Old Dominion University. Retrieved 10 March 2010.
  96. ^ Wernick, William (1942) "Complete Sets of Logical Functions," Transactions of the American Mathematical Society 51, pp. 117–132.
  97. ^ W. V. O. Quine, Mathematical Logic (1980), p. 81. Harvard University Press. ISBN 0-674-55451-5.

Further reading

  • Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
  • Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
  • Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978.
  • Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
  • Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, Cambridge, UK.
  • Mendelson, Elliott (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.
