Wednesday, October 28, 2009

Editor's Notes for October 2009

I've done a great deal of writing lately but, much to my chagrin, none of it has been here. There is good reason for the digression (proposals, submissions), but I am troubled by the appearance of neglect or lifelessness in this corner, and so, to avoid this, a few passing points of interest and some encouraging news:


  1. Return-Oriented Programming: a colleague recently pointed out to me an extremely interesting finding reported at last year's Black Hat USA Briefings. The authors present an exploit that allows them to perform arbitrary computation without injecting code. If an attacker can gain control of the stack, the authors show, he can influence program control flow to link together certain segments of existing code into instruction sequences of his choosing. This is very neat because it is one of those things that seems as if it should be possible in principle. However, we so often hear in computer science about humanly impractical things that are "possible in principle"; here is someone who has taken the principle and successfully used it. Well done!
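For intuition, here is a toy model of the idea (Python; the gadget addresses, the register model, and the payload are entirely made up for illustration, and are not drawn from the paper):

```python
# A toy model of return-oriented programming. The victim "binary" is a
# set of short instruction sequences ("gadgets"), each of which ends in
# a return that pops the next address off the stack. An attacker who
# controls the stack never injects code; he only supplies a sequence of
# return addresses, composing a computation out of code that was never
# written for that purpose.

gadgets = {
    0x1000: lambda regs: regs.update(a=regs["a"] + 1),  # inc a; ret
    0x2000: lambda regs: regs.update(a=regs["a"] * 2),  # shl a; ret
    0x3000: lambda regs: regs.update(b=regs["a"]),      # mov b, a; ret
}

def run(stack, regs):
    """Pop return addresses off the attacker-controlled stack and
    execute the gadget found at each one, as a ret instruction would."""
    while stack:
        addr = stack.pop(0)
        gadgets[addr](regs)
    return regs

# The attacker's "program" is nothing but this list of addresses:
payload = [0x1000, 0x2000, 0x2000, 0x3000]  # computes b = ((0 + 1) * 2) * 2
print(run(payload, {"a": 0, "b": 0}))  # {'a': 4, 'b': 4}
```

Of course, in the real attack the gadgets are scraped out of an existing binary rather than conveniently laid out in a table; the point of the sketch is only that controlling the stack suffices to sequence them.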


  2. Iterated Function Systems and Control Languages: I was very pleasantly astonished when, in the course of combing the literature aimlessly, I discovered the recent work of Henning Fernau and Ludwig Staiger, which appears to be a link between automata theory and fractal geometry. Intuitively (and offering no justification other than that), I feel as if such correspondence makes sense, and I admit to being thrilled that skilled mathematicians have undertaken to actually investigate such a correspondence. This also suggests the surprising possibility that abstract computer science might eventually have substantive theoretical contributions outside of its own domain. Well done!
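For readers unfamiliar with iterated function systems, a minimal example (Python; the closing comment on the correspondence is my own rough gloss, not a statement of Fernau and Staiger's results):

```python
import random

# An iterated function system: three contracting affine maps of the unit
# square. Iterating randomly chosen maps (the "chaos game") produces
# points of the system's attractor -- here, the Sierpinski triangle.
# Very roughly, the bridge to automata theory is that each orbit is
# driven by a word over the alphabet {0, 1, 2}, so a formal language of
# allowed words selects a subset of the attractor.

MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def chaos_game(n, seed=0):
    random.seed(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        x, y = random.choice(MAPS)(x, y)
        points.append((x, y))
    return points

points = chaos_game(10_000)
# Every iterate stays inside the unit square containing the attractor.
print(all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in points))  # True
```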


  3. Why doesn't Wendell Berry own a computer, and why don't I own a mule?


  4. The Liar Paradox Redux: It occurred to me yesterday that the situation I constructed in the Truth Machine thought experiment was actually an elaborate variant of the famous Twin Paradox. The paradox is not really a paradox so much a puzzle, stated as follows: you meet a pair of identical twins, one of whom always tells the truth, and one of whom always lies. What single question can you ask them to determine which is which? (I would also note that this familiar paradox appears to be a favorite intellectual toy of accomplished animator Genndy Tartakovsky, as the puzzle and its solution has appeared in episodes of both The Powerpuff Girls and Samurai Jack.) That's not to say that there isn't a great deal more to the Truth Machine example than there is to this familiar puzzle, only that I hadn't realized the connection before. Interestingly, it appears that I unwittingly attempted to construct something like the solution to the Twin Paradox as one stage in the elaboration of the Truth Machine.


  5. A Second Editor's Note on the Truth Machine: Another thing I had not realized until recently is that The Truth Machine is also the title of a novel by science fiction writer James Halperin. It appears that Mr. Halperin's truth machine is not quite so general in its function, acting in the restricted (but still remarkable) role of an infallible human lie detector, with horrible dystopian consequences. Why is it always the case that catching falsehoods leads to so many unforeseen complications? This seems like a question worth answering.


  6. What happens to the story of Job if you reverse the "good" and "evil" roles, i.e. make Job an extremely wicked man who only persists in evil because it's so materially rewarding?


  7. Is there such a fallacy as "appeal to chaos"?



I'm afraid that's all the time we have for now. Until next time!

Thursday, October 8, 2009

A Concise Refutation of the So-Called Objectivist Philosophy

Ayn Rand and her followers acknowledge that certain exercises of power may infringe upon the basic libertarian rights of others, but fail to either recognize or to acknowledge that economic power may be concentrated and abused in exactly the way as brute strength or cunning deception. For all their talk about the property crimes of tyrannical governments, these would-be philosophers fail to acknowledge that private entities are capable of exactly the same kinds of abuses as public institutions.

So-called "objectivism" has all the marks of a bad abstraction. The entire body of thought fails not because of what it incorporates, but because of the important issues that it leaves out.

Monday, October 5, 2009

The Truth Machine: A Thought Experiment



A Beautiful Dream



Suppose, one day, you stray off your usual path and wander into some out-of-the-way place. After a time, you realize you're lost. Looking around for someone who can show you the way back, you come across me. You're about to ask me for directions, but then you notice that in front of me is a fantastic machine the likes of which you have never seen before.

Naturally, you ask me, "What does that machine do?"

"It's a truth machine," I say.

This sounds outrageously impossible, but you're intrigued enough that, with some coaxing from me, you put aside your objective of getting directions and decide to ask the machine a question. So you think of an easy question and we feed it into the machine. Lo and behold, it comes back true.

"Ah, well that was too easy," you say.

"Well, ask as many questions as you want," I reply.

So, we proceed to feed some harder questions into the machine. Each one seems to come back with a truthful and accurate answer. Emboldened, you start to ask the machine some really wild and speculative questions. These come back with answers that are far too big to verify right away, but you're really intrigued now, and you want to make sure this isn't a scam or a prank.

"How does it work?" you ask me.

"It's very simple," I say.

I proceed to give you a thorough and convincing explanation of the basic scientific principles at work, and the engineering methods I've applied to them in order to construct the machine. (The details of this explanation are left as an exercise for the ambitious reader.) It becomes clear that the machine is not just a high-class Mechanical Turk, or a nice bit of computational ventriloquism on my part. Moreover, there's no sorcery, no divine intervention, and nothing weird or unexplained. The workings of the machine are unambiguous consequences of straightforward and readily accessible scientific facts, and they are all perfectly clear to you.

"That's amazing!" you say.

You're overcome with a wave of excitement and decide to get back to the rest of the world so you can test out the machine's answers to your really difficult questions. I give you directions back to the beaten path, and you go on your way. Perhaps, even more interestingly, it is the Machine that gives you the directions.

I might try to tell the rest of the story but, as wild and fantastical daydreams go, the uses and consequences of the Truth Machine would far surpass even the Machine itself. Would all those beautiful utopian dreams of the Enlightenment finally come true? Or would we turn our knowledge destructively in on ourselves and extinguish the entire universe? Would truth become another commodity to be bought, sold and advertised on the market? Or would the accumulated knowledge of humanity finally be free to anyone curious enough to ask? I won't speculate, because speculation is not my purpose here.

Instead, consider a slight variant of the scenario above.


A Variation on a Theme



Suppose that everything happens as before; we meet, and I demonstrate the Truth Machine. After you've had your fill, we part ways, but you have so many more questions that you decide to come back the next day. You find me and the Machine in the same place, and can hardly contain your excitement.

"I have more questions," you say. "Can I ask the machine?"

"Well, I'd love to oblige," I reply, "but it's too hot today. The heat will interfere with the way the machine works, and it will give false output."

I go on to explain why this is, and it's immediately clear that this is not only consistent with the previous day's explanation, but a necessary consequence of the way the machine works.

You turn to leave, but you're very clever and perceptive, and it occurs to you that this kind of situation often gives rise to paradoxes or infinite regressions, or other such weirdness. Deciding to really put the machine to the test, you turn back to me and propose the following:

"Suppose I ask the machine a question today, take the answer I get, and come back on some day when the machine is working and ask 'How would the Truth Machine answer such-and-such question on a day when the temperature is such-and-such many degrees?' If you really do have a truth machine here, and if the ambient temperature does affect its function, then even though the machine answers the question wrong today, it should be able to reproduce and explain its wrong answer later, on some day when it is working."

"That's true," I say. "Why don't you ask your question, write down the answer, and come back later when the machine is working again."

And so, you ask your question and write down the answer. A few days later, you come back, and together we ask the machine how it would respond to your earlier question on a hot day. Much to our surprise, nothing weird or paradoxical happens: the machine correctly reproduces its wrong answer, and gives an accounting for its failure. We're both so thoroughly amazed that you become totally convinced that the Truth Machine works as promised, and I suffer a minor injury from patting myself on the back for my accomplishment.


Some Casual Dream Analysis



There are several ways to read this variation of the story. The story's description of the machine replicating and explaining its own mistakes is something like what happens when one scientific theory subsumes another: a new scientific theory must always explain why its predecessor seemed to be true. This is one interesting feature.

Another interesting feature is brought out if we continue to vary the story by adding additional constraints on the correct function of the machine. For instance, suppose that the machine is sensitive not only to temperature but also, say, electrical fields, so that a passing storm cloud might disrupt its function. We can go further and suppose that the machine is sensitive to temperature, electrical force, and perhaps noise. Maybe certain levels of background radiation have an effect. Perhaps very slight changes in gravitational force at the point on the Earth's surface where we are running the machine, due to elevation or the moon's orbit, may also disrupt its function. This doesn't seem unreasonable; after all, the Truth Machine must be extremely delicate. It would seem that we could go on adding necessary constraints indefinitely (if you say that we couldn't, then it seems you know more about truth engineering than I do) until running the truth machine and getting a valid response requires carefully planned and controlled conditions. Of course, the machine still gives us correct answers; it's just that we must be very careful that exactly the right external conditions are in place to ensure that all the necessary processes within the machine work as expected.

What do we get if we add enough constraints? At some point, the Truth Machine will start to look less like a stunning technological oracle and more like an ordinary experimental apparatus, in the spirit of Boyle's air pump or the Large Hadron Collider. Of course, our fictional construction here differs from typical experimental apparatuses in that it seems we can pose it any question we please and, given the right circumstances, get a correct reply. By contrast, we may be able to ply the LHC for answers about elementary particles or black holes, but it is not immediately apparent how it could give us direct answers about how animal cells specialize into organs, whether the Navier-Stokes equations are a faithful model of fluid dynamics, or how exactly a pair of sociopathic teenagers burning a cat to death in an Internet video could give rise to the NEDM phenomenon. This is an important distinction. Most experimental machines are constructed to answer one specific question, or a specific class of questions.

(On the other hand, if it's perfectly clear to you how the details of elementary particle theory relate to Internet memes, you should really be talking to the Nobel Committee.)

What I've sketched out here is something like the outline of traditional scientific knowledge. Truth comes from relating reproducible phenomena that we do understand to other phenomena that we know less about. In this case, we leveraged what we knew about the workings of the machine into new information that we didn't know. I've purposefully resisted the strong temptation to involve exotic, modern abstractions like algorithmic information theory (Would our machine have to contain all the information in the history of the Universe?), or classic tropes like a Turing-style non-termination scenario (What happens if we ask the machine to predict the future of its own execution?), but not because I think such considerations are irrelevant. Mathematical logic has produced some truly strange and remarkable artifacts in the last century, and as deserving of recognition as these are, it is not immediately clear how they relate to the physical world as we know it. If I brought such abstract machinery into play here, I feel that I would be admitting unfounded back-door assumptions about how the physical world in general, and Truth Machines in particular, must work.

However, I'm not done just yet. Supposing that we do obtain a context-sensitive Truth Machine as described above, is the only task left for the thinkers and discoverers of the world to work out a calculus ratiocinator and start dreaming Leibniz's dream?


Dream On, Leibniz



We meet, I demonstrate the Truth Machine. I explain that it doesn't work on hot days (or stormy days, or at high altitudes, etc. etc.), and you're satisfied that this is indeed a necessary consequence of the appropriate Laws of Nature. As before, you ask the Machine a question on a hot day, in order to really put the Machine to the test by checking the consistency of its answers. Realizing that the choice of question may be important, you think for a while, and then come up with something.

"Let's ask these two questions," you say. "(1) Is A true? [where A is any meaningful proposition] (2) How will a truth machine answer the question 'Is A true?' under conditions suitable for a correct answer?"

"Well that's a weird question," I say, "but alright. Let's feed it into the machine."

We ask the Machine your question. The Truth Machine gives the following answer: "A is false [without loss of generality, since we can transform any true proposition into a false one by negation], and under conditions suitable for a correct answer, a truth machine will find that A is false." I suggest that you write down the answer and come back tomorrow when the machine will be working again, but you're not satisfied just yet. You go on to ask the machine another question:

"What are necessary conditions for a truth machine, as constructed here, to give correct answers, and why are these conditions necessary for a correct answer?"

The machine responds by describing exactly the physical conditions under which it is operating at the moment. It goes on to give a response that explains, at great length and with considerable detail, why these conditions are essential to producing a correct answer. You and I both seem to understand the response, and after looking over it carefully, neither of us is able to find any internal inconsistency to it. Moreover, the truth or falsity of A is not immediately verifiable without the Machine's help.

"Well," I say, "the machine isn't working right anyway. Of course it would say that. Come back tomorrow, and we'll see what it says."

The next day, you return and we repeat your questions to the machine. According to my earlier claims, conditions are favorable to truth machines that day, and the machine gives the following response: "A is true, and under conditions suitable for a correct answer, a truth machine will find that A is true."

"There you have it," I say. "I guess A is true after all."

But you've become a little more skeptical now. You decide to repeat your other question as well, and you ask the Machine a second time:

"What are necessary conditions for a truth machine, as constructed here, to give correct answers, and why are these conditions necessary for a correct answer?"

As before, the machine responds by describing exactly the physical conditions under which it is operating at the moment. It goes on to give a response that explains, at great length and with considerable detail, why these conditions are essential to producing a correct answer.

"And that's what I said before," I volunteer. "It's all settled."

But you're really troubled by all this now. A shadow's fallen over the enterprise, and although you're not prepared to dismiss me or the Machine out of hand, something seems unaccounted-for.

"How do we know that it wasn't working correctly yesterday?" you say. "Maybe you're wrong about the external conditions, and the machine is actually malfunctioning today."

"But who do we trust? You? Me? The Machine?"


"Brother, let us reason."



Whose claims should we trust? Presumably, we're all interested in nothing more than getting to the truth of the matter. We've all sworn "Calculemus!", true to the spirit of ol' Leibniz, and we're willing to trust the Machine so long as we can verify that it's working as expected. Still, there is a clear problem of which source of assertions to trust. The Machine throws things out of balance because, presumably, it is a new and very large source of information that has come into conflict with a body of information that, as far as we know, is well-founded and correct.

This conflict is distantly reminiscent of Poincare's skepticism toward Frege's project of formalizing all knowledge and placing it on a clear and unambiguous axiomatic foundation[1]. Frege seemed to believe in the possibility of realizing a genuine calculus ratiocinator, whereas Poincare argued that the soundness of such a system could never be empirically proven, since it would require the verification of infinitely many theorems. One could always argue for a sort of "generative proof" of the sort one sees in formal logic: show that the axioms of the system do not conflict with one another, then show that it is impossible for any application of the proof rules to produce a contradiction, whence any theorem proved from the axioms cannot be a contradiction. This is fine for an abstraction but, Poincare points out, our abstraction is of interest only to the extent that it reflects some material reality, and no proof of internal soundness can guarantee the faithfulness of a model. Phrased another way, we don't know if or when an empirical contradiction might jump out and surprise us.

How can we decipher the conflicting responses of our troublesome Machine? The problem arises because the Machine seems to have distinct and incommensurate frames of reference: it answers the same question one way on one day, and the other way on a different day. One way we might get to the bottom of all this would be to keep plying the Machine with questions, both on "on"-days and on "off"-days, to see if we could catch it in a contradiction. If the machine gave a reply that was plainly false to us, the observers, or that contradicted an earlier response, we would know that the external conditions under which the machine was operating could not be the right circumstances for truth machines to be truthful. However, there seems to be no way to tell how many questions we might have to ask the machine before we caught it in a contradiction; we might question the Machine until the end of time and never see anything but self-consistency.

As long as the machine continues to explain its own operation and to do so in a way that reveals no contradictions that aren't directly tied to changes in operating conditions, we have no reason to believe that its answers are anything other than the necessary consequences of essential Laws of Nature.

Worse still, there's no way we could ask the machine for help in the matter, since we've already cast doubt on its judgments.

Who do we trust? You? Me? The Machine? And how should we understand the Machine's explanations of its own contradictions?

Though I won't pretend that these questions are impossible or intractable, I also don't see an obvious answer. There is, however, one last variant that may say something important, though it may not resolve this particular dilemma.


Ground Zero



Recall that in the paragraphs above, we took a completely idealized and perfect Truth Machine and replaced it with one that was affected by its physical operating conditions. We observed that we could elaborate a succession of factors that might all contribute to the machine's correct operation, and that doing so led to something reminiscent of an ordinary experimental apparatus. In schematic, we have a succession like so:

(1) Given input A, the machine produces output B and B is true.

(2) Given input A, the machine produces B and B is true if the machine doesn't get too hot.

(3) Given input A, the machine produces B and B is true if the machine doesn't get too hot and if the machine is insulated from strong electric forces.

...

And so on, until we've taken into account all of the factors that might affect the Machine's operation. This is really a sequence of refinements to our idea of the variables upon which the machine's output depends. That is, we could look at our succession as:

(1) The machine's output depends on its input.

(2) The machine's output depends on its input and the ambient temperature.

(3) The machine's output depends on its input and the ambient temperature and net electric forces acting on its mechanisms.

(4) The machine's output depends on its input and the ambient temperature and net electric forces acting on its mechanisms and ...

And so on. Looked at this way, it seems that there's one variable that we've forgotten in our rush to account for necessary operational constraints. Consider what we get if we take a step backward from (1):

(0) There is output.

Even the ideal machine we started with had one factor that affected its operation, namely, the question that its user asked. We began to doubt the Machine once its operating conditions allowed it to give different answers and also to account for them. However, we would in general expect our machine to give different answers to different questions. (Unless, of course, it just "holds up a finger."[2]) If we hold that (1) humans are physical beings, and (2) their actions comprise physical phenomena, and (3) asking a question of the Truth Machine is a physical phenomenon, then since (4) the Machine's output depends upon the question it is asked, it must be that a question is an external physical condition affecting the operation of the Machine. This leads us to one last question:

Is it function or malfunction for the Truth Machine to give different answers to different questions?



[1] Poincare, Henri, Francis Maitland (Trans.). Science and Method, Barnes & Noble Publishing, 2004 (original 1908).

[2] See case 19 of The Blue Cliff Record.

Thursday, October 1, 2009

Outputs: When There is Madness to the Method

Considering that algorithms are human artifacts, it is very interesting that there even is such a thing as a bad algorithm. (It's also very interesting that apparently it is possible to make a joke in the form of an algorithm.) The phenomenon would seem to tell us something about the "essence" of an algorithm since even a severely deformed sorting algorithm is still recognized as a sorting algorithm. Consider a classic example:


START: given an array A[1..n]

STEP 1: IF (A is in sorted order) THEN [HALT] ELSE [GOTO STEP 2]

STEP 2: DO [randomly shuffle A[1..n]]

END


This procedure is more commonly known as "bogo-sort" and can be traced back at least to 1984[1]. There are some amusing quantitative analyses of such algorithms[2], but what stands out to me is that this can be called a sorting algorithm at all. I'm not disputing that bogo-sort is indeed a well-defined procedure, or that it eventually sorts its input array. What seems noteworthy is that we have here something that really is a sorting algorithm, but only just barely.

The only sense in which bogo-sort is a sorting algorithm is that, if and when it stops, the input array will be sorted. If we were to change the condition in STEP 1 to (almost) anything other than "Is A sorted?", we would still have some kind of algorithm, but it would certainly not be a sorting algorithm. Bogo-sort is nothing other than a slightly constrained version of a very generic and very disorderly procedure (i.e. moving things around at random); its "sorting" is really nothing other than a discrimination applied to an otherwise indiscriminate process.
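To make the point concrete, here is the pseudocode above rendered as a runnable sketch (Python, purely for illustration), written so that the halting predicate is the only thing distinguishing "sorting" from any other discrimination:

```python
import random

def shuffle_until(a, predicate):
    # The generic, disorderly core of bogo-sort: move things around at
    # random until the caller's condition happens to hold.
    while not predicate(a):
        random.shuffle(a)
    return a

def is_sorted(a):
    # STEP 1's condition from the pseudocode above.
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def first_is_largest(a):
    # Any other condition yields some algorithm -- just not a sort.
    return a[0] == max(a)

print(shuffle_until([3, 1, 2], is_sorted))         # [1, 2, 3]
print(shuffle_until([3, 1, 2], first_is_largest))  # e.g. [3, 2, 1]
```

With `is_sorted` as the predicate this is exactly bogo-sort; it terminates with probability 1, and for n distinct elements the expected number of shuffles is about n!, since a uniform random permutation is sorted with probability 1/n!.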

Perhaps another appropriate (but less snappy) name for this procedure would be "tautological sort". The array is sorted when the program says it's sorted, or rather, when the programmer says the program should stop. By way of analogy, this would be something like "straightening up" a messy room by going in and hurling objects about at random, and stopping whenever it was that you declared the room to be clean. ("Mission accomplished!") In this case, the output makes the program, and the notions of the programmer make the output. Maybe this is not especially surprising in and of itself, but it makes for a stretch of thin and blurry line separating the orderly and the chaotic, or the desirable and the undesirable.

The authors who have already commented on this curiosity raise a class of interesting questions: for any given objective, what is the worst possible means of attainment? ("The Pyrrhic Problem") This is worth asking if for no other reason than to point out that there are exceptionally bad ways of attaining almost any objective. Although it seems little more than an academic curiosity when applied to algorithms, the question becomes strangely illuminating when applied to any number of real-world enterprises. Consider, for instance, how useful toilet paper is, but consider also that huge swaths of forest are cut down to produce it. It seems not only that ends are sometimes used to justify means, but that utter chaos circumscribed by an end can become a means.




[1] Broder, Andrei, Stolfi, Jorge. "Pessimal Algorithms and Simplexity Analysis", ACM SIGACT News, 16:49-53, 1984.

[2] Gruber, Hermann, Markus Holzer, Oliver Ruepp. "Sorting the Slow Way: An Analysis of Perversely Awful Randomized Sorting Algorithms", FUN 2007, LNCS 4475, p183-197, 2007.