"What is Truth?"
Formal logic is interesting because it captures both the essential qualities and the essential deficiencies of how human beings think. We have a natural tendency to draw sharp lines and make (frequently binary) distinctions, while nature tends to force us to revise the boundaries we use to draw human-navigable maps of the world. Logic is pragmatic: it makes sense, and yields results. However, like all practical expedients, logic is quite fallible, and often entails subtle complexities even in the pursuit of relatively simple goals. Some authors [3] have characterized logic as a way of thinking about thinking. This is a very interesting view. It means that if we soberly and seriously attend to what goes on in our logical constructions, we may learn something about how we think and about what our thinking can and can't tell us. This, however, means we must understand logic as a human construct with both human relevance and human imperfections.
There has been some excellent writing by some superb minds on the subject of formal logic, its relevance and its connection to informal logic, and I leave it to better experts than I to elucidate this matter [5]. Instead, because I try to write as much as possible to the level of the lay-person, I would like to give a few simple and informally constructed examples to give the reader a flavor of basic logic, as a lead-in to a somewhat surprising and unexpectedly colorful analogy between the abstractions of logic and the deceptively simple behavior of a certain class of phenomena that are both intuitively sensible and concretely physical.
Binary distinctions are everywhere in human thought: yes and no, up and down, light and dark, before and after, inside and outside, present and absent. They're a basic staple of how we see the world. It turns out they're also a very efficient way to encode and store a lot of information. Consider the familiar game "Twenty Questions", where one player thinks of something and the other players try to determine what it is the first player is thinking of by asking him or her a series of no more than twenty yes-no questions. A few years back, some enterprising folks manufactured and marketed an electronic version of this game, packaged in a unit small enough to fit in the palm of one's hand. The game, and its clones, seemed to be astonishingly skilled at guessing what human players were thinking of -- provided, of course, they did not change the thought-of object midway through the game, or choose something extremely specific or idiosyncratic. The secret to the little gadget's success, however, was no different than the strategy commonly used by human players: ask very broad questions at the start (e.g. "Is it an animal?"), and gradually narrow the scope until the set of possibilities is small enough to allow successful guessing (e.g. "Is it a cat?").
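To see just how much information a handful of yes-no answers can pin down, here is a minimal sketch in Python (the candidate list and features are invented purely for illustration) of the narrow-the-field strategy: each well-chosen question splits the remaining possibilities roughly in half, so twenty questions can, in principle, distinguish among 2^20 -- over a million -- things.

```python
# A toy "Twenty Questions" player: each yes/no answer halves the candidate pool.
# The candidates and features below are made up for illustration.

candidates = [
    {"name": "cow",    "animal": True,  "says_moo": True,  "bigger_than_a_breadbox": True},
    {"name": "cat",    "animal": True,  "says_moo": False, "bigger_than_a_breadbox": False},
    {"name": "teapot", "animal": False, "says_moo": False, "bigger_than_a_breadbox": False},
    {"name": "truck",  "animal": False, "says_moo": False, "bigger_than_a_breadbox": True},
]

def play(answer):
    """Narrow the field one feature at a time. `answer` is a function that
    returns True/False for a feature, standing in for the human's replies."""
    pool = candidates
    for feature in ["animal", "says_moo", "bigger_than_a_breadbox"]:
        reply = answer(feature)
        pool = [c for c in pool if c[feature] == reply]
        if len(pool) <= 1:
            break
    return pool[0]["name"] if pool else "I give up!"

# Thinking of a cow:
print(play(lambda f: {"animal": True, "says_moo": True, "bigger_than_a_breadbox": True}[f]))
# Twenty questions could, in principle, separate this many possibilities:
print(2 ** 20)  # 1048576
```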
It appears that a large class of familiar things (and even many unfamiliar things) can be identified by a series of yes-no questions. This is, for instance, why Twenty Questions is not too hard to win, and not even too hard, with the help of modern technology, to implement as an electronic circuit. This observation is also at the heart of classical logic. In such traditional systems of logic, every proposition is either true or false (No exceptions!) and propositions can be connected using a few simple operators to express the truth or falsity of more complex expressions. In this context, 'operator' is perhaps an over-glorified word. The operations I am referring to correspond (tellingly) to very common words that are staples of ordinary reasoning about everyday things: 'and', 'or', and 'not'. Letting '1' stand for 'true' and '0' stand for 'false', we can succinctly express 'and', 'or', and 'not' in the following tables:
AND | 0 | 1
----|---|---
0   | 0 | 0
1   | 0 | 1

OR | 0 | 1
---|---|---
0  | 0 | 1
1  | 1 | 1

NOT |
----|---
0   | 1
1   | 0
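These three tables are small enough to transcribe directly into code. The following Python sketch (my own encoding, not part of the original text) writes each connective as a function and prints back the same rows:

```python
# The three classical connectives, written directly from the truth tables above.
# As in the text, 1 stands for 'true' and 0 stands for 'false'.

def AND(p, q):
    return 1 if (p == 1 and q == 1) else 0

def OR(p, q):
    return 1 if (p == 1 or q == 1) else 0

def NOT(p):
    return 1 - p

# Reproduce the tables row by row.
for p in (0, 1):
    for q in (0, 1):
        print(f"{p} AND {q} = {AND(p, q)}   {p} OR {q} = {OR(p, q)}")
for p in (0, 1):
    print(f"NOT {p} = {NOT(p)}")
```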
These tables correspond to basic intuition: if p and q are statements about something, then p AND q is true only when p and q are both true. For instance, suppose p stands for 'eats grass' and q stands for 'says moo'. If we apply p AND q to a cow, then certainly p AND q = 1, since we've seen cows eating grass, whence p = 1, and since everybody knows the cow says "moo", whence q = 1 as well. On the other hand, if we apply p AND q to a sheep, we have p AND q = 0, since sheep eat grass but generally have other things to say.

We don't, however, have to restrict ourselves to just conjunctions, disjunctions, and negations of simple true-false statements; we can use 'and', 'or', and 'not' to connect formulas to other formulas. Recursively, if P is any (arbitrarily complex!) formula, and Q is any other formula, we can connect them using any of our operators to get a new formula whose value depends on the respective values of P and Q in a way that respects our simple truth-tables above. The basic construction of logical formulae thus uses only very simple building-blocks; the formulae themselves, however, can become huge and immensely complicated.

An example is instructive. Suppose I come to you and say "I'm a secret super-spy!" Your first inclination might be, "Well, if it looks like a spy and it acts like a spy, it's a spy." That is, you might represent your belief that I'm a spy by the formula:
let
  p = "looks like a spy"
  q = "acts like a spy"
in
  p AND q
However, after a while, you might think to yourself, "Gee, I've never met a real spy before, so I don't really know how a spy looks or acts", so you decide to refine your idea of the situation a little further. Spies keep a lot of secrets, so you decide that if I don't act secretively enough, I'm probably not a spy, or I'm not a very good spy, or I must really trust you to keep my secrets. You also decide that spies are pretty busy working for someone, and so I need to be off doing spy things as frequently as possible, and not ordinary stuff, and so your idea grows:
let
  p = "looks like a spy"
  q = "acts like a spy"
  r = "is secretive"
  s = "really trusts you"
  t = "is busy"
in
  (p AND q AND (r OR s) AND t)
After some further reflection, though, you realize that it's also possible that I'm deep undercover, and so even though I might be part of some super-secret operation, I might be going to great lengths to appear as if I'm leading an ordinary life so I don't blow my cover. That means that either I'm deep undercover or I'm not telling the truth about being a spy, and so things get even more complicated:
let
  p = "looks like a spy"
  q = "acts like a spy"
  r = "is secretive"
  s = "really trusts you"
  t = "is busy"
  u = "is deep under cover!"
  v = "is telling a tall tale ..."
in
  (p AND q AND (r OR s) AND t AND (u OR (NOT v)))
If I regale you with tales of super-spy exploits, you'll have even more information that you'll have to take into account: if I say I was part of a secret plot to blow up Professor Nightmare's death-ray on Flaming Death Island, then that means that either I was part of a super-awesome adventure and it didn't make the news and death-rays exist, or I'm telling you a tall tale, in which case maybe I'm not trustworthy and I'm making the whole thing up:
let
  p = "looks like a spy"
  q = "acts like a spy"
  r = "is secretive"
  s = "really trusts you"
  t = "is busy"
  u = "is deep under cover!"
  v = "is telling a tall tale ..."
  x = "super awesome adventure!"
  w = "none of the exciting news is ever fit to print"
in
  (p AND q AND (r OR s) AND t AND (u OR (NOT v)) AND (x OR (w AND (NOT v))))
As you can see, things may get arbitrarily complicated, and as your idea of me as the super-spy depends upon more and more variables, you find your belief pulled ever more chaotically back and forth between amazement and incredulity. However, I don't have to cook up an incredible story in order to exhibit an instance of a phenomenon with simple parts and simple rules that nonetheless behaves in strange and wildly unpredictable ways.
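To make the recursive construction concrete, here is a small Python sketch (entirely my own framing; the particular truth-judgments below are made up) that evaluates the final spy formula for one set of judgments about its variables. Try flipping any of the values and watch the verdict swing between amazement and incredulity:

```python
# Evaluate the final "super-spy" formula from the text for a chosen set of
# truth-judgments. The judgments are invented for illustration.

def spy_formula(p, q, r, s, t, u, v, x, w):
    """(p AND q AND (r OR s) AND t AND (u OR (NOT v)) AND (x OR (w AND (NOT v))))"""
    return p and q and (r or s) and t and (u or (not v)) and (x or (w and (not v)))

judgments = {
    "p": True,   # looks like a spy
    "q": True,   # acts like a spy
    "r": True,   # is secretive
    "s": False,  # really trusts you
    "t": True,   # is busy
    "u": False,  # is deep under cover
    "v": True,   # is telling a tall tale
    "x": False,  # super awesome adventure
    "w": True,   # none of the exciting news is ever fit to print
}

print(spy_formula(**judgments))  # False: the tall tale sinks the spy story
```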
Sitting on the Fence
"Chaos theory" broadly refers to a large area of research in mathematical physics that originated in the 1960s and flowered in the 1970s and 1980s as science turned its attention to physical systems that exhibit large changes in response to small variations. Like logic, chaos and non-linear dynamics are an active area of study with their own deep and fascinating literature, and so I leave it to those more accomplished to exposit their virtues and mysteries. (The interested reader might consult [4] for an informal but very readable overview of the field, and [6] for a more formal but equally readable presentation of the basic mathematics.) Instead, I would like to borrow one very simple device from the field, in the hopes that perhaps it will lead us to an interesting analogy.
A physical system is typically said to be bistable if it always tends toward one of two stable states as time passes. Such systems are interesting to non-linear dynamicists because, although they exhibit stability after enough time has elapsed, it is often very difficult to predict which state the system will ultimately end in. A classic example is a ball perched atop a very thin divider.
Common experience should be enough to convince the reader that the ball will always fall to one side of the divider, or the other. If the divider is relatively wide -- almost but not quite wide enough to allow the ball to be balanced -- the experimenter should be able to make fairly reliable predictions about which side the ball will fall to when placed. If the ball is sufficiently off-center to cause it to roll to the right or to the left, the imbalance will be visible at the outset. However, if the divider is narrow enough relative to the diameter of the ball, it will be very difficult to predict to which side the ball will fall, no matter how much care is taken in placing it. (The reader is encouraged to go play with some blocks, and so become really thoroughly convinced.) In this case, the very same physical forces are acting on the ball (e.g. gravity, the normal force exerted by the divider), but slight variations in how the ball is placed will be much harder to detect. As if that didn't make predictions hard enough, the many tiny irregularities in the ambient air currents or in the surface of the ball and the divider have a much larger proportional effect on the motion of the ball than they did when it rested upon a relatively wide divider. By making the divider very narrow relative to the diameter of the ball, a huge number of almost invisibly small variables become relevant to the final outcome, and much smaller inaccuracies in the initial placement of the ball may have a much larger impact on its motion and hence its final state, i.e. whether it comes to rest to the right or the left of the divider.
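A crude numerical sketch (my own, with made-up numbers) illustrates the point: when the divider is wide, the outcome tracks the placement, but when the divider is narrow, tiny unmodeled disturbances decide which way the ball goes.

```python
import random

# Toy model: the ball ends up "left" or "right" depending on the sign of its
# offset from dead center plus a tiny unmodeled disturbance (air currents,
# surface roughness, ...). All numbers are invented for illustration.

def drop_ball(placement_offset, disturbance_scale=0.001):
    noise = random.uniform(-disturbance_scale, disturbance_scale)
    return "left" if (placement_offset + noise) < 0 else "right"

random.seed(1)

# Wide divider: placements can be visibly off-center, so outcomes are predictable.
print([drop_ball(-0.05) for _ in range(5)])     # always 'left'

# Narrow divider: placements are forced very near center, so the noise dominates.
print([drop_ball(-0.0002) for _ in range(5)])   # outcomes now hinge on the tiny noise term
```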
One interesting thing about this example is that predictions are easy when the relevant variables are few and the forces at work are large and easy to observe, but hard when many variables must be accounted for and the forces at work are obscure. Phrased this way, it doesn't seem excessively imaginative to note that the wide-versus-narrow comparison made in the ball example is somewhat like the difference between judging the truth of "x is a cow", which requires relatively little information about relatively few features, and judging the truth of "x is a secret super-spy", which seems to require a great deal of information about features that are very hard to discern.
Getting the Ball Rolling
Suppose I wanted to construct an oracle that answers simple yes-no questions. (Think "Magic 8-Ball", not "Delphi".) The ball-and-divider gizmo described above is one very good candidate. All we need to do is let "left" stand for "yes" and "right" stand for "no"; if the ball rolls off the divider to the left, that means the oracle says "yes" to our question, whereas if the ball rolls to the right, that means "no". If we want the oracle's advice (in "yes-no" format, of course), we just utter the appropriate incantation, ask our question, then perch the ball atop the divider and see which side it rolls to. In keeping with the venerable old tradition of superstitious parlor games, we could keep the divider very thin, which would give the ball's motion an appropriately oracular irregularity. (Nobody likes an oracle that always says the same thing.) On the other hand, we could make the divider wide enough that the ball's motion would be easy to predict. (As long as we "clear our minds" sufficiently before playing, getting what we expect might make our oracle more suited to the company of the popular Ouija Board and the Magic 8-Ball, human psychology being what it is.) If our construction were precise and careful enough that we could control which way the ball rolled according to its initial placement, we would have something less like a Magic 8-Ball and more like a transistor.
The transistor is the textbook case of a bistable system. Very loosely speaking, a transistor acts as a material whose overall conductivity is "balanced atop" a semiconductor in a way that can be pushed either to conduct or to resist an electrical current. This allows the transistor to be used as a two-state switch. (A light switch is a two-state switch, in that it is generally only "up" or "down".) The importance of semiconducting technology to the development of modern electronics cannot be overstated; the invention of the transistor set in motion the explosive advance of the digital computer, which, at its most basic, is nothing more than a very complicated assemblage of two-state switches. Thus, our simple ball-and-divider oracle actually shares an interesting (and not coincidental!) kinship with a basic building block of the computer as we know it.
In its present, very simple state, the ball-and-divider oracle can be used as a machine that computes the answers to exceedingly simple yes-no questions. If we "ask" the oracle to check a statement we know to be false, we place the ball slightly to the right, so that it rolls off the divider onto the "no" side; if our statement is true, we place the ball slightly to the left. At this level, of course, the exercise seems silly: the little gizmo only does what you expect it to do. At the same time, in a world that's full of seemingly random occurrences and surprising things, it is actually extremely noteworthy when something behaves the way we expect it to behave. We can thus use our little oracle as an external model of our internal judgments.
Nobody's going to be too impressed by a ball that rolls off a wall, but suppose we gave our construction a little more refinement and complexity. Suppose we have at our disposal a team of master craftsmen, and we ask them to modify our oracle as follows:
Essentially, our oracle now has a small replica of itself built onto its left and right sides. Instead of controlling our device only by placing the ball, let's also ask our craftsmen to give us some way to modify at will the slope on top of the various dividers, so that we know which side the ball will roll to when it encounters a divider. (Perhaps each divider has a sloped piece that can be snapped on and off the top, so that the direction of motion can be reversed by turning the piece around.) Obviously, the measurements must be (excruciatingly) precise, and the device very carefully constructed, but if all goes according to plan, we can now ask our oracle more complicated questions. But how?
In the original construction, we assigned a truth-value (that is, 'true' or 'false') to each side of the divider. I chose 'left' for 'true' and 'right' for 'false', but we could just as easily have chosen the other way. In essence, the original construction corresponds to the simplest logical formula of all, namely, the formula with one variable and no connectives, e.g. just p. However, our new oracle now has two smaller copies of the original built into it. We can use this! Suppose I arrange the device so that the ball rolls to the left of the center-most (that is, highest) divider. After it rolls to the left, it will fall a short distance and (if the device is correctly constructed) encounter another divider. If this second divider can also be arranged to direct the motion of the ball, we can make it stand for a second statement whose truth depends upon the first. For instance, suppose that we have a statement '(x eats grass) and (x says moo)'. We let the first divider stand for 'x eats grass' and let the two other dividers stand for 'x says moo' (we do need to use them both). If we adopt the same semantics for our two secondary dividers as we did for the first, i.e. if the left side of each stands for 'true' and the right side for 'false', our device now computes not just p but p AND q. It is no mistake that the labels along the bottom of the device correspond to the truth table given for 'and' above. We can similarly arrange devices that behave as OR and as NOT.

Now things will really start to take off, provided of course that we have the continued support of our craftsmen. Suppose that our team is able to construct machines with as many dividers as we please -- even up to very huge numbers -- and suppose that these machines have the same stable, predictable, reproducible behavior as the simple ball-and-divider construction we began with. The importance of this stability to the function of the overall machine, and the difficulty of achieving it, cannot be overstated. Small influences cannot be overlooked, and we must ensure that every possible force is very precisely accounted for in the design and construction of the machine. If we don't, there is no way that our thoughts can follow the bouncing ball -- its motion will be chaotic and random! (This challenge is not unlike the tremendous effort that has been required -- and continues to be required -- to design and construct reliable solid-state electronics.) If, however, we can overcome the difficulties of physical construction, our little oracle will have grown into a programmable machine that, given a set of truth-judgments, can evaluate the truth of arbitrarily complex logical formulae. How might this be done?
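One way to picture the two-divider device is as a pair of left/right choices followed by a lookup in the labels written along the bottom. The sketch below is my own rendering (not taken from the original figures) and shows how choosing the slot labels reproduces the 'and' and 'or' tables with the same hardware:

```python
# The two-divider device as code: the ball's path picks one of four slots,
# and the label written under that slot is the answer. Changing only the
# labels turns the same hardware from an AND-machine into an OR-machine.

def run_device(slot_labels, p, q):
    """p decides the first (highest) divider, q the second tier.
    Left = true, right = false; slots are numbered left to right."""
    slot = (0 if p else 2) + (0 if q else 1)   # leftmost slot = both true
    return slot_labels[slot]

AND_LABELS = [1, 0, 0, 0]  # only the 'both true' slot is labelled 1
OR_LABELS  = [1, 1, 1, 0]  # only the 'both false' slot is labelled 0

for p in (True, False):
    for q in (True, False):
        print(p, q, "AND:", run_device(AND_LABELS, p, q),
                    "OR:",  run_device(OR_LABELS, p, q))
```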
Suppose we've overcome the construction challenges, and we can add as many dividers as we please. Suppose also that we have a logical formula with N many propositions (e.g. "x eats grass", "x says moo") and a collection of N many truth-judgments about our respective propositions (e.g. "it's true that x eats grass", "it's false that x says moo"). Starting with just one divider in the center of the board, we add 2^k additional dividers for each kth additional proposition. That is, we add 2 additional dividers for the first additional variable, 4 for the second, 8 for the third, and so on. We add the dividers in tiers, associating to each proposition a tier of dividers of the same height. Thus, for example, if p, q and r are propositions in our formula, we arrange our dividers in three such tiers.

Note that our first tier, which consists of only a single divider, splits the plane of the device in half. The second tier, consisting of two additional dividers, taken together with the first, splits the plane into quarters. The third tier, consisting of four additional dividers, taken together with the first and second tiers, splits the plane into eighths. Thus, for each new proposition, we take a set of dividers, all of which stand lower on the board than the last tier, and place them so that they split each remaining segment of free space in half.
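For bookkeeping, the tier sizes grow geometrically: a formula with N propositions needs 1 + 2 + 4 + ... + 2^(N-1) = 2^N - 1 dividers and leaves 2^N slots along the bottom. A few lines of Python (mine, purely illustrative) make the growth explicit:

```python
# Dividers and bottom slots for an N-proposition machine.
for n in range(1, 11):
    dividers = 2 ** n - 1   # 1 + 2 + 4 + ... + 2**(n-1)
    slots = 2 ** n          # gaps along the bottom of the board
    print(f"{n:2d} propositions: {dividers:5d} dividers, {slots:5d} slots")
```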
Now we want our machine to express some relation between the variables. That is, we want to program our machine with a chosen logical formula. This is done by labeling the slots in the board that lie between each pair of dividers. For instance, we obtained 'p and q' as well as 'p or q', both of which have two propositional variables, by changing the labels along the bottom of the board. If the ball falls into a slot labelled '0', this indicates that our formula expresses something false; if it falls into a slot labelled '1', this indicates something true. The machine is now configured in a way that models some logical formula with N propositional variables.
Our machine is now programmed and ready to go. How do we give it input, that is, how do we ask it to compute the truth of our logical formula given a set of true-false judgments about the variables? This is accomplished by tilting the dorsal surfaces of our dividers, in order to govern which way the ball should roll when it encounters a particular divider. If we stick to our true-left, false-right convention (which we might as well, to keep things simple), we slope a given divider to the left or to the right according to whether the proposition corresponding to its tier is true or false. For example, if 'p' and 'r' are true but 'q' is false, every divider in the first and third tiers is sloped to the left, and every divider in the second tier is sloped to the right.
Now we're ready to go! Once we've gathered all of our information and set up the machine, we set the ball in the middle and just let it roll; the label attached to the ball's final state corresponds to the truth or falsity of our original statement.
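A compact way to check that the contraption really computes what we want is to simulate it. The sketch below is my own construction, using the tilting and labeling conventions from the text: the slot labels are set by evaluating a chosen formula at every possible assignment, the tilts are set by the actual truth-judgments, and the ball's path is followed tier by tier. The label it lands on matches a direct evaluation of the formula.

```python
from itertools import product

# A simulated ball-and-divider logic machine.
# Convention from the text: left = true, right = false; the ball starts above
# the single top divider and meets one divider per tier (one tier per variable).

def program_labels(formula, n_vars):
    """Label the 2**n_vars slots along the bottom: each slot corresponds to
    one full left/right path from the top, i.e. one truth assignment."""
    labels = {}
    for path in product([True, False], repeat=n_vars):     # True = rolled left
        slot = sum((0 if went_left else 1) << (n_vars - 1 - k)
                   for k, went_left in enumerate(path))
        labels[slot] = 1 if formula(*path) else 0
    return labels

def run_machine(labels, judgments):
    """Tilt each tier according to the truth-judgment for its variable and roll."""
    slot = 0
    for value in judgments:                  # one tier per variable, top to bottom
        slot = (slot << 1) | (0 if value else 1)   # left keeps 0, right adds 1
    return labels[slot]

# Example: the three-variable formula (p AND q) OR (NOT r).
formula = lambda p, q, r: (p and q) or (not r)
labels = program_labels(formula, 3)

judgments = (True, False, True)              # p true, q false, r true
print(run_machine(labels, judgments))        # 0
print(1 if formula(*judgments) else 0)       # 0, the same answer
```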
(It should be noted how strongly the operation of our machine resembles that of the famous and very real "Plinko" device, which was prominently featured on the popular game show "The Price Is Right". The important difference between Plinko and our machine is that Plinko seemed to purposefully admit a fairly high degree of random behavior, as evidenced by the conspicuous bounciness of the pegs. This seemed to be an important part of both its appeal and its challenge.)
If all goes well (Does it ever?), our machine can successfully automate a potentially very elaborate arrangement of logical judgments. Assuming that the work in setting up the machine is not too difficult or time consuming (which, in our example, it almost certainly would be for all but the most trivially simple formulae), we can now model our basic truth-judgments as we understand them and apply them to very complex situations that would otherwise be humanly impossible to reason through. This is effectively what makes the modern digital computer so powerful and useful: it can apply our own basic "and/or" and "not" intuitions to arrangements that are complicated vastly beyond our own human cognitive abilities. Unlike our Plinko-like machine, however, an electronic computer is much easier to program and will execute much more quickly, though, again, this difference is relative and not at all absolute. Any computer programmer or engineer of even modest experience will most certainly agree. (Despite the computer comparisons, I cannot resist a technical note that our deterministic Plinko-machine is not a Turing Machine: it has no memory to write to or read back, and no way to loop; it simply evaluates one fixed formula and halts.) These differences aside, our imaginary machine is now much more than a silly toy: it is a very complicated system that behaves the way we expect it to. Again, given the dear scarcity of things in life that we can expect, this is very noteworthy.
In the same way that a bridge allows us to use ordinary human ambulation to cross expanses that we could not traverse on foot, our machine is a tool that allows us to apply our ordinary reasoning to systems whose complexity is far beyond human comprehension. This is all well and good, but (again inviting the reader to go play with some blocks) the bigger bridges get, the harder they are to construct, and the same law of increasing difficulty applies to the size and complexity of the ball-and-stick logic machine.
"Truth is stranger than fiction."
If Newton stood on the shoulders of giants to see as far as he did, it would seem that none of us are in a position to refuse the invitation for a boost above eye-level by someone of greater stature. So, let's push our little logic machine to its logical conclusion, using a venerable old trick: limits at infinity.
In the preceding section, we saw that we could use our little logic machine to compute logical statements in as many variables as we pleased, simply by adding enough dividers. As we add variables, a forest of dividers springs up on the plane of our device, becoming (exponentially!) more dense with each variable we add, so that a sparse arrangement of a few tiers rapidly becomes an impenetrable thicket. (Actually, with better artistic skills and more powerful drawing tools, the picture would come out something like a collapsed Sierpinski Gasket.)
As the number of dividers increases, the outline of our device rapidly converges to a pair of continuous slopes in opposite directions, both leading down from the same elevated point. A few features, both abstract and concrete, are immediately apparent.
Firstly, the functional details of our machine are now humanly unmanageable. How are we supposed to read infinitely small labels, or manipulate infinitely small parts? The parts do not even need to be infinitely small for the machine to become impractical; they just need to be very small relative to human eyesight. However, the machine is only really interesting if it can handle formulae with lots of variables, which necessarily entails exactly this kind of tortured, inhuman precision. If we wanted badly enough to use the machine, we could construct still other machines to program and use it, etching ball-slots with lasers, or watching the progress through a microscope. Be that as it may, even these methods have limits, and the density of trajectories within the machine grows explosively with each additional variable.
Secondly, the size of the ball that actually runs the machine now matters. An infinitely small ball (that is, one that has zero diameter but somehow "still exists") could be dropped on the machine, somehow falling through the infinitely dense forest of dividers to come to rest in an infinitely small notch, thus evaluating the truth or falsity of a logical statement somehow depending upon an infinite amount of information. Of course, this always mattered, but we glossed over it in the construction. The ball obviously must be small enough to fall through the space between dividers. This means, however, that computing a logical formula with more variables requires adding more dividers and hence requires a smaller ball. Not only that, the size of the ball must shrink exponentially as the number of variables grows. Basically, there is a vicious dependency between the components of our machine. This dependency moreover tightens (Worsens?) as the machine grows in power. The ball we drop will rapidly shrink to the size of a dust-speck as we increase the informational output, no matter whether it starts out the size of an orange or the size of a planet.
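To put some invented but representative numbers on how vicious the dependency is: on a one-meter board, the gap the ball must fit through after N tiers is roughly 1/2^N meters, which is down to tens of micrometres (roughly a hair's breadth) by N = 14 and comparable to the size of an atom around N = 33.

```python
# How quickly the bottom-row gaps shrink on a (hypothetical) one-meter board.
BOARD_WIDTH_M = 1.0

for n in (5, 10, 14, 20, 33, 40):
    gap = BOARD_WIDTH_M / 2 ** n    # width of each slot after n tiers
    print(f"{n:2d} variables -> gap of about {gap:.3e} m")
```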
There's more. Notice that any sufficiently large ball (really, any ball with a diameter greater than zero), when dropped into our infinitely powerful logic-engine, will appear to roll all the way to the left or all the way to the right. Moreover, because our dividers must be very (i.e. infinitely) small in order to fit onto a finite board, the starting divider which sits at the center will always be very small relative to the diameter of the ball. But that means that our infinite logic-engine, when used with a finite-diameter ball, will actually behave exactly like the very chaotic ball-and-divider construction we started with!
Suppose our board has finitely many dividers, but the space between them is sufficiently small relative to the diameter of the ball. When we try to run our logic engine, the ball will eventually reach a stable state but will fail to fall all the way to the bottom of the board, because the spaces below will be too narrow for it to fit. What this means is that, given a ball of a certain spatial extent R, there is a limit to the number of variables that a given (finite) logic-machine M can model, and hence to the number of logical formulae it can compute. Moreover, this limit shrinks as R grows: since the gaps halve with each tier, a ball of diameter R can descend only about log2(W/R) tiers on a board of width W. Essentially, a finite-variable logic-machine is approximated by an infinite-variable logic machine using a ball with non-zero diameter. What happens in such a case is that the ball simply settles into a rut somewhere between the central divider and one of the edges of the board. Everything below this height corresponds to the "random noise" that is assumed away by the modeled formula.
What does all this mean? Infinite information looks like randomness, and classically logical systems can only model systems with finitely many variables, where the difficulty of model-construction grows exponentially as information about the system is added. At the risk of pretending to an understanding I don't have, this vaguely resembles a certain theory of Chaitin's [2]. Without pretending, this simple, mechanical analogy greatly resembles familiar human reasoning, in which too many details make things fuzzy, so that hypotheses that require too much information to evaluate become indistinguishable from an uninformed assertion of "random" behavior.
In an alternate construction, we might notice that infinitely small labels along the bottom of our infinitely dense board somewhat resemble a Cantor dust or a one-dimensional Julia Set [1]: a ball might roll arbitrarily close to the "true" side of the board but come out false, or arbitrarily close to the "false" side and come out true. Thus life always manages to surprise us, and categorizations always prove intractably blurry about the edges, no matter how logical we choose to be.
Logic thus gives us a bridge beyond some of the limitations of working memory and attention span, a bridge that machinery can moreover extend. However, this extension quickly and easily collapses under its own weight when stretched too far, in exactly the way that a real bridge does. Either the ideas we consider abstractions are very much like our supposedly "concrete" perceptions of the "real world", or supposedly abstract constructions, no matter how empyrean or pure, are subject to the same noisy unpredictability of the physical world we actually inhabit.
[1] Barnsley, Michael F. Fractals Everywhere. Academic Press Inc., 1988.
[2] Chaitin, G. J. Algorithmic Information Theory. Cambridge University Press, 1987.
[3] Ernst, Zachary. Free Logic Now! Available online.
[4] Gleick, James. Chaos: The Making of a New Science. Penguin Group, 1987.
[5] Quine, W. V. Philosophy of Logic. 2nd ed. Harvard University Press, 1986.
[6] Strogatz, Steven H. Nonlinear Dynamics and Chaos. Perseus Books Publishing, 1994.