Boolean algebra

This article is about the subarea of mathematics. For the related algebraic structures, see Boolean algebra (structure) and Boolean ring.

In mathematics and mathematical logic, Boolean algebra is the subarea of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. Instead of elementary algebra, where the values of the variables are numbers and the main operations are addition and multiplication, the main operations of Boolean algebra are the conjunction and, denoted ∧, the disjunction or, denoted ∨, and the negation not, denoted ¬.

Boolean algebra was introduced in 1854 by George Boole in his book An Investigation of the Laws of Thought. [1] According to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. [2] Boole's first book, The Mathematical Analysis of Logic, published in 1847, contained the original theory. It was proposed as a mathematical language for dealing with questions of logic, a language now needed in the design of modern digital equipment, and it survives as a core data type in all modern programming languages, usually abbreviated to bool, representing true or false. Boolean algebra has been fundamental in the development of digital electronics. It is also used in set theory and statistics. [3]

History

Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. [4] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others, until it reached the modern conception of an (abstract) mathematical structure. [4] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.

In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, so he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. [5][6][7] Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. [8]

Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. [9][10][11] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic.
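As the lead notes, the truth values survive as a core bool data type in modern programming languages. A minimal illustration (Python is chosen here only as an example language; the same idea applies elsewhere):

```python
# Boolean values and the three basic operations, written with Python's bool type.
x, y = True, False

conjunction = x and y   # x ∧ y
disjunction = x or y    # x ∨ y
negation = not x        # ¬x

print(conjunction, disjunction, negation)  # False True False
```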
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. [4] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.

Values

Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true. These values are represented with the bits (or binary digits) 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), for which 1 + 1 = 0, with + serving as the Boolean operation XOR.

Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. [12]

Operations

Basic operations

The basic operations of Boolean algebra are as follows.

And (conjunction), denoted x ∧ y (sometimes x AND y or Kxy), satisfies x ∧ y = 1 if x = y = 1 and x ∧ y = 0 otherwise.
Or (disjunction), denoted x ∨ y (sometimes x OR y or Axy), satisfies x ∨ y = 0 if x = y = 0 and x ∨ y = 1 otherwise.
Not (negation), denoted ¬x (sometimes NOT x, Nx or !x), satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0.

If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic:

x ∧ y = x × y
x ∨ y = x + y − (x × y)
¬x = 1 − x

Alternatively the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows.

x  y | x ∧ y | x ∨ y
0  0 |   0   |   0
1  0 |   0   |   1
0  1 |   0   |   1
1  1 |   1   |   1

x | ¬x
0 |  1
1 |  0

One may consider that only negation and one of the two other operations are basic, because of the following identities, which allow one to define conjunction in terms of negation and disjunction, and vice versa:

x ∧ y = ¬(¬x ∨ ¬y)
x ∨ y = ¬(¬x ∧ ¬y)

Derived operations

The three Boolean operations described above are referred to as basic, meaning that they can be taken as a basis for other Boolean operations that can be built up from them by composition, the manner in which operations are combined or compounded. Operations composed from the basic operations include the following examples:

x → y = ¬x ∨ y (material implication)
x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y) (exclusive or)
x ≡ y = ¬(x ⊕ y) (equivalence)

These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.

x  y | x → y | x ⊕ y | x ≡ y
0  0 |   1   |   0   |   1
1  0 |   0   |   1   |   0
0  1 |   1   |   1   |   0
1  1 |   1   |   0   |   1

The first operation, x → y, or Cxy, is called material implication. If x is true then the value of x → y is taken to be that of y. But if x is false then the value of y can be ignored; however the operation must return some truth value and there are only two choices, so the return value is the one that entails less, namely true. (Relevance logic addresses this by viewing an implication with a false premise as something other than either true or false.)
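The tables above can be reproduced mechanically. A short sketch (Python, used here only as an illustration language) that tabulates the basic and derived operations over 0 and 1 and checks two of the identities just stated:

```python
from itertools import product

# Basic operations on the truth values 0 and 1.
AND = lambda x, y: x & y      # x ∧ y
OR  = lambda x, y: x | y      # x ∨ y
NOT = lambda x: 1 - x         # ¬x

# Derived operations, built from the basic ones by composition.
IMPLIES = lambda x, y: OR(NOT(x), y)                   # x → y
XOR     = lambda x, y: AND(OR(x, y), NOT(AND(x, y)))   # x ⊕ y
EQUIV   = lambda x, y: NOT(XOR(x, y))                  # x ≡ y

print("x y | x∧y x∨y ¬x | x→y x⊕y x≡y")
for x, y in product((0, 1), repeat=2):
    print(f"{x} {y} |  {AND(x, y)}   {OR(x, y)}   {NOT(x)} |  "
          f"{IMPLIES(x, y)}   {XOR(x, y)}   {EQUIV(x, y)}")

# Sanity checks: ⊕ is addition mod 2, and ∧ is definable from ¬ and ∨.
assert all(XOR(x, y) == (x + y) % 2 for x, y in product((0, 1), repeat=2))
assert all(AND(x, y) == NOT(OR(NOT(x), NOT(y))) for x, y in product((0, 1), repeat=2))
```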
The second operation, x ⊕ y, or Jxy, is called exclusive or to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true. Defined in terms of arithmetic it is addition mod 2, where 1 + 1 = 0. The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y, as its complement, can be understood as x ≠ y, being true just when x and y are different. Its counterpart in arithmetic mod 2 is x + y + 1.

Given two operands, each with two possible values, there are 2^2 = 4 possible combinations of inputs. Because each output can have two possible values, there are a total of 2^4 = 16 possible binary Boolean operations.

Laws

A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y, as treated in the section on axiomatization.

Monotone laws

Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra: [13]

Associativity of ∨: x ∨ (y ∨ z) = (x ∨ y) ∨ z
Associativity of ∧: x ∧ (y ∧ z) = (x ∧ y) ∧ z
Commutativity of ∨: x ∨ y = y ∨ x
Commutativity of ∧: x ∧ y = y ∧ x
Distributivity of ∧ over ∨: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
Identity for ∨: x ∨ 0 = x
Identity for ∧: x ∧ 1 = x
Annihilator for ∧: x ∧ 0 = 0

Boolean algebra however obeys some additional laws, in particular the following: [13]

Idempotence of ∨: x ∨ x = x
Idempotence of ∧: x ∧ x = x
Absorption 1: x ∧ (x ∨ y) = x
Absorption 2: x ∨ (x ∧ y) = x
Distributivity of ∨ over ∧: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
Annihilator for ∨: x ∨ 1 = 1

A consequence of the first of these laws is 1 ∨ 1 = 1, which is false in ordinary algebra, where 1 + 1 = 2. Taking x = 2 in the second law shows that it is not an ordinary algebra law either, since 2 × 2 = 4. The remaining four laws can be falsified in ordinary algebra by taking all variables to be 1; for example, in Absorption Law 1 the left hand side is 1(1 + 1) = 2 while the right hand side is 1, and so on.

All of the laws treated so far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged or changes the output in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows. [3]

Nonmonotone laws

The complement operation is defined by the following two laws.

Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1

All properties of negation, including the laws below, follow from the above two laws alone. [3] In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, whence in both algebras it satisfies the double negation law (also called the involution law):

Double negation: ¬(¬x) = x

But whereas ordinary algebra satisfies the two laws

(−x)(−y) = xy
(−x) + (−y) = −(x + y),

Boolean algebra satisfies De Morgan's laws:

De Morgan 1: (¬x) ∧ (¬y) = ¬(x ∨ y)
De Morgan 2: (¬x) ∨ (¬y) = ¬(x ∧ y)

Completeness

The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws Complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms.
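Since every law above is an identity over the two values 0 and 1, each can be verified by exhaustive enumeration of its variables. A minimal sketch of such a check (Python, for illustration only; a representative selection of the laws is listed):

```python
from itertools import product

AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

laws = {
    "associativity of ∨":         lambda x, y, z: OR(x, OR(y, z)) == OR(OR(x, y), z),
    "distributivity of ∧ over ∨": lambda x, y, z: AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z)),
    "distributivity of ∨ over ∧": lambda x, y, z: OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z)),
    "absorption 1":               lambda x, y, z: AND(x, OR(x, y)) == x,
    "absorption 2":               lambda x, y, z: OR(x, AND(x, y)) == x,
    "complementation 1":          lambda x, y, z: AND(x, NOT(x)) == 0,
    "complementation 2":          lambda x, y, z: OR(x, NOT(x)) == 1,
    "De Morgan 1":                lambda x, y, z: AND(NOT(x), NOT(y)) == NOT(OR(x, y)),
    "De Morgan 2":                lambda x, y, z: OR(NOT(x), NOT(y)) == NOT(AND(x, y)),
}

# Each law is an identity, so it must hold for all assignments of 0 and 1.
for name, law in laws.items():
    assert all(law(x, y, z) for x, y, z in product((0, 1), repeat=3)), name
print("all listed laws hold over {0, 1}")
```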
Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in the section thereon. To clarify, writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.

This axiomatization is by no means the only one, or even necessarily the most natural, given that we did not pay attention to whether some of the axioms followed from others but simply chose to stop when we noticed we had enough laws; this is treated further in the section on axiomatizations. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent.

Boolean algebra has the interesting property that x = y can be proved from any non-tautology. This is because the substitution instance of any non-tautology obtained by instantiating its variables with constants 0 or 1 so as to witness its non-tautologyhood reduces by equational reasoning to 0 = 1. For example, the non-tautologyhood of x ∧ y = x is witnessed by x = 1 and y = 0, so taking this as an axiom would allow us to infer 1 ∧ 0 = 1 as a substitution instance of the axiom, and hence 0 = 1. We can then show x = y by the reasoning x = x ∧ 1 = x ∧ 0 = 0 = 1 = y ∨ 1 = y ∨ 0 = y.

Duality principle

There is nothing magical about the choice of symbols for the values of Boolean algebra. We could rename 0 and 1 to, say, α and β, and as long as we did so consistently throughout it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose we rename 0 and 1 to 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However it would not be identical to our original Boolean algebra, because now we find ∨ behaving the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that we've been fiddling with the notation, despite the fact that we're still using 0s and 1s.

But if in addition to interchanging the names of the values we also interchange the names of the two binary operations, now there is no trace of what we have done. The end product is completely indistinguishable from what we started with. We might notice that the columns for x ∧ y and x ∨ y in the truth tables had changed places, but that switch is immaterial.

When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, we call the members of each pair dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The Duality Principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.

One change we did not need to make as part of this interchange was to complement. We say that complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments.
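Duality can be stated operationally: the dual of an n-ary operation f is obtained by complementing its inputs and its output, and f is self-dual when this gives back f itself. A small sketch under that reading (Python, for illustration only), checking that ∧ and ∨ are each other's duals and that complement and the ternary operation above are self-dual:

```python
from itertools import product

NOT = lambda x: 1 - x
AND = lambda x, y: x & y
OR  = lambda x, y: x | y
median = lambda x, y, z: (x & y) | (y & z) | (z & x)   # (x∧y) ∨ (y∧z) ∨ (z∧x)

def dual(f):
    """De Morgan dual of f: complement every input and the output."""
    return lambda *args: NOT(f(*(NOT(a) for a in args)))

def same(f, g, arity):
    """Do f and g agree on every input over {0, 1}?"""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=arity))

print(same(dual(AND), OR, 2))          # True: the dual of ∧ is ∨
print(same(dual(OR), AND, 2))          # True: the dual of ∨ is ∧
print(same(dual(NOT), NOT, 1))         # True: complement is self-dual
print(same(dual(median), median, 3))   # True: the ternary operation is self-dual
```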
A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.

The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. [14]

Diagrammatic representations

Venn diagrams

A Venn diagram [15] is a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.

Figure 2. Venn diagrams for conjunction, disjunction, and complement

For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.

While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.

Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity would then appear as a failure of symmetry.

Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
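The Venn-diagram reasoning above can also be carried out with ordinary set operations: treat each circle as a subset of a universe, with intersection, union, and complement relative to the universe playing the roles of ∧, ∨, and ¬. A small sketch under those assumptions (Python, for illustration; the particular universe and circles are arbitrary):

```python
# Regions of a two-circle Venn diagram as subsets of a small universe U.
U = set(range(8))
X = {0, 1, 2, 3}   # the "x" circle
Y = {2, 3, 4, 5}   # the "y" circle, overlapping X

def comp(s):
    """Complement relative to U: the region outside the circle."""
    return U - s

# Absorption laws: x ∧ (x ∨ y) = x and x ∨ (x ∧ y) = x.
assert X & (X | Y) == X
assert X | (X & Y) == X

# Double negation: complementing the shading twice recovers the original region.
assert comp(comp(X)) == X

# De Morgan: the region outside both circles is the intersection of the exteriors.
assert comp(X | Y) == comp(X) & comp(Y)
print("set-theoretic checks pass")
```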
To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded the region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.

The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.

Digital logic gates

Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND gates), disjunction (OR gates), and complement (inverters) are as follows. [16]

The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called active-high logic, 0 is represented by a voltage close to zero or ground, while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.

Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.

The Duality Principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged.

More generally one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the odd-bit-out can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.
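The claim that complementing subsets of the ports of an AND gate yields exactly the eight binary operations with an odd number of 1s in their truth table can be checked by enumeration. A small sketch of that check (Python, for illustration only):

```python
from itertools import product

AND = lambda x, y: x & y

def truth_table(f):
    """Truth table of a binary operation as a 4-tuple over inputs 00, 01, 10, 11."""
    return tuple(f(x, y) for x, y in product((0, 1), repeat=2))

# Complement any subset of the three ports (two inputs, one output) of an AND gate.
gate_ops = set()
for c_x, c_y, c_out in product((0, 1), repeat=3):
    f = lambda x, y, cx=c_x, cy=c_y, co=c_out: AND(x ^ cx, y ^ cy) ^ co
    gate_ops.add(truth_table(f))

# Compare against the eight binary operations whose truth tables contain an odd number of 1s.
all_ops = set(product((0, 1), repeat=4))            # all 16 binary Boolean operations
odd_ops = {t for t in all_ops if sum(t) % 2 == 1}
print(len(gate_ops), gate_ops == odd_ops)            # 8 True
```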
Boolean algebras

Main article: Boolean algebra (structure)

The term algebra denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.

Concrete Boolean algebras

A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X. [3]

(As an aside, historically X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However this exclusion conflicts with the preferred purely equational definition of Boolean algebra, there being no way to rule out the one-element algebra using only equations; 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)

Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.

Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.

Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with finite and cofinite interchanged.

Example 4. For a less trivial example of the point made by Example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set, obtained as the union of the empty set of regions, and X, obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X, and therefore forms a concrete Boolean algebra. Again we have finitely many subsets of an infinite set forming a concrete Boolean algebra, with Example 2 arising as the case n = 0 of no curves.

Subsets as bit vectors

A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, …, 31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c}, where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b, c}, {a}, {a, c}, {a, b}, and {a, b, c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111.
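This identification of subsets with bit vectors is exactly how small sets are commonly handled in programs: a subset of {a, b, c} becomes a 3-bit integer, and union, intersection, and complement relative to X become bitwise operations. A minimal sketch (Python, illustration only; the names A, B, C are just labels for the three bit positions):

```python
# Bit positions for X = {a, b, c}, read left to right as in the text: a=100, b=010, c=001.
A, B, C = 0b100, 0b010, 0b001
FULL = 0b111                      # the set X itself

def show(s):
    """Render a 3-bit subset as a bit vector string."""
    return format(s, "03b")

ab = A | B                        # the subset {a, b} -> 110
bc = B | C                        # the subset {b, c} -> 011

print(show(ab & bc))              # intersection {a, b} ∩ {b, c} = {b} -> 010
print(show(ab | bc))              # union -> 111
print(show(FULL & ~ab))           # complement of {a, b} relative to X = {c} -> 001
```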
Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).

From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.

The prototypical Boolean algebra

Main article: two-element Boolean algebra

The set {0, 1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. We call this the prototypical Boolean algebra, justified by the following observation.

The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.

This observation is easily proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one, since it is concrete. Conversely, any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position, because there is only one empty bit vector.

The final goal of the next section can be understood as eliminating "concrete" from the above observation. We shall however reach that goal via the surprisingly stronger observation that, up to isomorphism, all Boolean algebras are concrete.

Boolean algebras: the definition

The Boolean algebras we have seen so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra.

Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.

A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws. [17]

For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra.
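For a finite carrier, the abstract definition can be tested directly: given a candidate set together with ∧, ∨, and ¬ operations, one can check a complete axiom set by brute force. A sketch under those assumptions (Python, for illustration only; the axioms checked are a Huntington-style set of commutativity, distributivity, identities, and complements, with associativity included as well), applied to the concrete algebra of 4-bit vectors:

```python
from itertools import product

def is_boolean_algebra(elems, meet, join, comp, bot, top):
    """Brute-force check of a complete axiom set on a finite candidate structure."""
    elems = list(elems)
    e2 = list(product(elems, repeat=2))
    e3 = list(product(elems, repeat=3))
    return (
        all(join(x, y) == join(y, x) and meet(x, y) == meet(y, x) for x, y in e2) and
        all(join(x, join(y, z)) == join(join(x, y), z) for x, y, z in e3) and
        all(meet(x, meet(y, z)) == meet(meet(x, y), z) for x, y, z in e3) and
        all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z)) for x, y, z in e3) and
        all(join(x, meet(y, z)) == meet(join(x, y), join(x, z)) for x, y, z in e3) and
        all(join(x, bot) == x and meet(x, top) == x for x in elems) and
        all(meet(x, comp(x)) == bot and join(x, comp(x)) == top for x in elems)
    )

# The concrete Boolean algebra of all 4-bit vectors under bitwise operations.
print(is_boolean_algebra(range(16),
                         meet=lambda x, y: x & y,
                         join=lambda x, y: x | y,
                         comp=lambda x: x ^ 0b1111,
                         bot=0, top=0b1111))       # True
```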
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition.

A Boolean algebra is a complemented distributive lattice.

The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.

Representable Boolean algebras

Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.

However, if we represent each divisor of n by the set of its prime factors, we find that this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least morally concrete via this representation, called an isomorphism. This example is an instance of the following notion.

A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra.

The obvious next question is answered positively as follows.

Every Boolean algebra is representable.

That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This quite nontrivial result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice, and is treated in more detail in the article Stone's representation theorem for Boolean algebras.

This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability.

The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.

It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
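The divisors-of-a-square-free-integer example is easy to check directly: for n = 30, the eight positive divisors under gcd, lcm, and x ↦ n/x satisfy the Boolean laws, mirroring the eight subsets of {2, 3, 5}. A small self-contained sketch (Python, illustration only; only a few representative laws are spot-checked):

```python
from itertools import product
from math import gcd

n = 30                                                  # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]   # [1, 2, 3, 5, 6, 10, 15, 30]

meet = gcd                                 # greatest common divisor plays the role of ∧
join = lambda x, y: x * y // gcd(x, y)     # least common multiple plays the role of ∨
comp = lambda x: n // x                    # ¬x = n / x

pairs = list(product(divisors, repeat=2))
assert all(meet(x, join(x, y)) == x for x, y in pairs)                          # absorption 1
assert all(join(x, meet(x, y)) == x for x, y in pairs)                          # absorption 2
assert all(meet(x, comp(x)) == 1 and join(x, comp(x)) == n for x in divisors)   # complements
assert all(comp(meet(x, y)) == join(comp(x), comp(y)) for x, y in pairs)        # De Morgan
print("all checks pass; bottom is 1, top is", n)
```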