... and if you're too intelligent
they'll cut you down to size;
they'll praise you till you're happy,
then they'll fill you full of lies ...
"Cradle to the Grave", album of the same name, Subhumans (1983)
A friend of mine came up to me unexpectedly today, with some obvious concern and agitation. "Can I ask you about something? Could you explain something to me?" he said. I said okay, and he took a piece of paper out of his pocket and began folding it awkwardly back and forth, trying to hide most of the contents while showing me just one small part. Finally, he handed it to me and asked very soberly, "What does this mean?" I looked at the paper. It was a score from an IQ test. The score was not good.
It was at that point that I thought about what it must feel like to receive a piece of paper that tells you that you are officially stupid.
I've known this friend for a while. He had a difficult past. He's attending a local community college. He wants to go on to some position where he can counsel troubled kids -- not kids who are troubled about the usual things that trouble kids, but kids who are troubled in a way that's bigger and deeper and harder to articulate, who have been to jail, or will likely end up there soon. He's an honest person. He's a curious person. He works hard. And I could tell that he understood what the paper said, at least in literal terms. I could tell he was hoping, perhaps, that there was some subtle detail of the report that would nullify or at least mitigate the coldly obvious meaning. It was not a question about scales or confidence intervals, even though I explained these ideas at some length so as not to seem flippant or condescending.
There was no way I could just hand the paper back and say, "It means you have a low IQ."
What happened is, I sat back in my chair and said this:
"Look, I don't know much about this sort of thing or about IQ tests or what they're really good for or exactly what they mean. They've been around for a while, and a lot of people have criticized them for a lot of reasons, and they still give them out anyway, but none of that is really the point. The point is that it's all bullshit. The point is that a test score just tells you how well you scored on a test, and nothing more than that. For some reason, we've come to live in a fucked up world where people give us test after test after test to evaluate what use we, as human beings, have to them, as figures of power and authority. It's a bad measure. What's really good and interesting about human beings is how adaptable we are. If we can't do things one way, we can find another. We're never incapable; the only thing we ever lack is persistence or inventiveness. This score doesn't matter. It doesn't tell you what you can or can't do. It doesn't tell you whether or not you'll succeed in life. It doesn't tell you who you are. Those are all things that you determine for yourself. So the meaning is nothing; it's just a test, don't worry about it. Life isn't a test; it's a challenge. It's hard, but there's always another way. There's always a way to live."
Then I handed the paper back, and left.
I sincerely wonder why we insist on so many abstract metrics of people. They may serve an organizational purpose, but they serve no individual purpose, and in some cases they even represent an individual harm. Perhaps it's a bit worn and trite to criticize tests and tell people that they can do whatever they want. I acknowledge that not everyone has the same abilities. I acknowledge that some people will try things and succeed, while other people will try things and fail. I acknowledge that we are all stuck playing with the lot we're dealt. What I take exception to is the proposition, implicit in every quantitative metric of a person, that there are only a certain number of clearly circumscribed roles a person may play in life, and that all of them can be characterized by a certain number of simple, measurable quantities.
I don't think anyone would dispute the simple proposition that capability is only really proven when the thing is done. Can you write? Write something. Can you fight? Fight someone. Can you think? Come up with an idea. That is all fine and good, but what is often not recognized is that narrow metrics such as standardized questionnaires and puzzles measure only a person's competence in one particular strategy of doing, not that person's overall capacity for thinking, learning, or doing. There is an enormous breadth and variety to the genres of writing, styles of fighting, and certainly to ideas. In the end, the one goal that all of us hold as ultimate is simply to live. As far as I can see, the measure of that is something that we all figure out for ourselves.
I wouldn't be the first person to criticize the IQ test; as far back as Vygotsky and Luria, people were well aware of the influence of an industrialized education on the direction of concept use and formation. The modern practice of measuring by standardized test even resembles, to some substantial degree, the operation of very primitive feature-weighting recognizers (e.g. Rosenblatt's perceptron), which Minsky and Papert famously showed could not even detect the presence of such basic relationships as connectedness. There is a remarkable synthesis in the way human cognition sees form as function and function as form; one sees it everywhere, in our tools, in our art, in our basic ways of thinking. Batteries of questions aimed at abstracting some particular feature of an individual, however, separate the form of the person from the function being sought. The issue is not one of which kinds of intelligence we should measure, or which quantities are really important in determining health, fitness, or success. The issue is that the measurement of finite quantities is a procedure fundamentally insufficient to the task of determining what we should do with ourselves, or how.
If that sounds like a trope or a triviality, stop and really imagine for yourself how exactly it would feel to receive a scientific report documenting how stupid you are.
One of the really interesting things about being human is that we make purposes and meanings for ourselves. This is not an inspirational appeal meant to carry us away from the compelling argument that simple biological impulses underlie our lives and activities. It is an assertion of the brilliant complexity with which those impulses manifest when placed in a brilliantly complex world. What's amazing is that we all end up acting as differently as we do even though we all start with the same small handful of biological goals and directives. A clever strategy can turn a weakness into a strength. Sometimes, the meek really do inherit the Earth, and nobody sees it coming.
Sometimes, however, is not the same as always. Sometimes the test speaks the truth. Sometimes might is right. Possibility, however, is an essential organizing principle of how we think about ourselves and the directions of our lives. To borrow a notion from the cognitive scientists, our identity is fundamentally tied to a self-ideal, that is, to a persistent thought not just of who we are but of who we want to be. Without the ability to imagine possible selves and possible futures, the whole sense of self collapses. If tomorrow's outcomes were all completely and fully known today, there would be no human sense in bothering to live them out. (This assertion is pregnant with all kinds of epistemological interpretations.) Of course we need to know, in plain, unsparing terms, how the world is. Of course we need to know, frankly and directly, what our weaknesses are. But each of us, if we are to go on living at all, also has to be able to imagine better.
I have an above-average IQ. I know this because in fourth grade I went to a quiet, out-of-the-way room in my school and took a strange-looking test, after which they sent me to a so-called "gifted program" once a week. There, we got to do things like tinker with fractal-generating computer programs and assemble-your-own-robot kits. I liked the things we learned, and I liked the absence of the overbearing regimentation that pervades ordinary public schooling, but to tell the truth, I never liked any of the other children there. They all knew that they had taken a test, and that adults approved. They knew that they had been declared officially intelligent. As such, they were all filled with insufferable smugness and self-satisfaction. Knowing that they could do things to please adults, they competed viciously with one another for praise and attention. Based on my later contact with intelligent, talented, and highly educated persons in my adult life, these are features that, I am almost certain, many if not most such children retain for the rest of their lives. Knowing that you have been blessed with "the gift", it is difficult to resist fascination with your own wonderful ability. Having this fascination, it becomes difficult to arouse much interest in what your work means for the other humans you share the world with, what shortcomings or limitations you might have despite your talents, or what people without "the gift" might think or have to say. Having a belief in your own excellent function, you lose the form of yourself as a human being, with all the frailties and blights that entails.
Blame or praise only really help when they suggest new directions. There is no "good function" or "bad function", only "good for this" or "bad for that". The only real function is to live out our lives, and that is something we do in whatever way we choose. So what if you have a low IQ? Human chess players are helpless against powerful chess-playing algorithms, but people still play chess. The joy of a game is just in the playing; in just the same way, the joy of living is not in solving any single problem, or meeting any one goal. So what if you have a high IQ? Your shit still stinks, and somebody still has to clean it up. The world is unimaginably huge and complicated, but everyone has a place in it. Contrary to some opinions, there is no placement test to tell you where.
Thursday, February 18, 2010
Saturday, February 6, 2010
Implications of Universality for Computer Security
"In the United States (U.S.), we have items available for all our needs. Many of these items are cheap and easy to replace when damaged. Our easy-come, easy-go, easy-to-replace culture makes it unnecessary for us to improvise. This inexperience in 'making do' can be an enemy in a survival situation. Learn to improvise. Take a tool designed for a specific purpose and see how many other uses you can make of it." (U.S. Army Field Manual FM 3-05.70, "Survival", May 2002)
One of the most noteworthy features of computer security is its overwhelmingly defensive stance. While there are certain programming practices that mitigate threats (e.g. sanitize your inputs, check your array bounds), it is practically impossible to preclude all possible attempts to corrupt or co-opt a system. Granted, there are always a few stubborn hold-outs willing to step forward and claim that the system they have constructed is built in that elusive "right way" that everyone else has so far failed to achieve. It's possible that one of these people may be right, but the evidence seems to weigh against such claims.
All the way back in 1993, a landmark report from the Naval Research Laboratory [3] found that almost half of 50 serious security breaches involved code that correctly implemented its specification. Of course, that was well over 15 years ago, and so one might protest that perhaps some better model of security has come forward since that time. Even so, one has to take notice when large teams of talented programmers correctly implement a meticulously specified system, and the system nonetheless succumbs to attack. Cases such as this led the illustrious security researcher William Wulf, as late as last year, to call for a revolutionized approach to security that abandons the old paradigm of "perimeter defense" [7]. Wulf proposes a model of security inspired by the success of the Internet, wherein a small foundation with minimal but very general functionality allows systems to adapt to localized and quickly evolving conditions as needed and desired. This seems a very pragmatic and promising approach, and the success of the Internet is nothing to sneeze at. Even so, one cannot help but wonder what exactly it is about the traditional way of doing things that leads it to fail.
Let's consider a very intuitive idea from the traditional theory of security. Noninterference is a rather early and very successful idea due originally to J. A. Goguen and J. Meseguer [2] that can be informally phrased as follows: if two users (or processes) share a system, then the actions of one should have no effect on the other. In some sense, this is what we would expect in many kinds of situations where security is important: if you're on a time-sharing system, you don't want your files, or what you do to them, to be visible to other people; if you're buying something online, you don't want the credit card number you send to the vendor to somehow end up on the computer of some third party not part of the transaction. Of course, such assurances are very hard to make about large systems that a lot of people use; it would be pure whimsy to suggest that one could ever make such an assurance about the Internet (especially considering that large-scale data mining by merchants and advertisers arguably already counts as an unwanted side-effect), and it would seem like a bit of a stretch to make absolute promises about any large, practical system that a lot of different people had to use to readily exchange information. However, suppose that someone were to successfully construct a shared system conforming to Goguen-Meseguer noninterference; if we were really that confident in its conception and construction, we could be assured that no one would be meddling in anyone else's data, nor would the kernel be unwittingly divulging any secrets to attackers bent on trying to compromise it.
Here's the catch: even a perfectly conceived and constructed system has to interact with the rest of the world in order to be used by anyone. This of course introduces the vector of social engineering, whereby the human element becomes part of the system. Even so, we don't have to rely on operator fallibility to show how problems immediately and essentially arise once the theoretical construct is out of its own solitary universe. One such very simple and very clever example is due to Daryl McCullough [4], and works essentially as follows:
Imagine a system whose components are divided into a red part and a blue part, and suppose we don't want any information flowing from the red parts of the system to the blue parts, although blue-to-red flow is okay. (In McCullough's diagram, this is indicated by the directions of the arrows.) Considered separately, the blue and red parts of the system both satisfy noninterference, since the red portion consists only of a single process, while the blue portion is defined in such a way that its two processes never interact. Does the entire system obey noninterference once its parts are put together? It turns out that it does not.
The reason for this comes from a subtle but important detail of machines, namely, their finiteness. Suppose that processes A and B fill up their respective input buffers -- which they certainly can do in any kind of realistic machine. Should they start throwing away messages, or should they wait? We don't want to lose any messages, so we insist that the processes wait once their respective buffers fill up. However, in order to know to wait, the buffer or some mechanism attached to it must be able to send signals back to its process, telling it either to go ahead or to wait. That is, we want our processes to perform blocking reads and writes on the buffers, wherein their execution is suspended until the necessary read or write becomes possible. Now suppose that our red process has a one-bit input from S that it uses to decide which of buffers A or B to read from, i.e. red multiplexes A and B according to the input it receives from S, and that the red process waits until an acknowledgment arrives in buffer C before reading the next input from S. Suppose processes A and B immediately fill their respective buffers; once our red process reads from one of them, that buffer will have space for a new message, and will signal back to its respective process that it can go ahead with its execution. This is all good and fine and perfectly reasonable. What's the problem? The problem is this: suppose also that A writes some innocuous notice to the output on the left, say '0', each time it sends to buffer A, and that B writes '1' to the same output each time it sends to buffer B. Why is this a problem? Because, given this setup, the stream of bits being written to the (blue) output on the left now exactly matches the stream of inputs being read from the (red) input on the right, which is precisely what we did not want to happen.
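To make the covert channel concrete, here is a minimal sketch of the construction as a Python thread simulation. The buffer sizes, the acknowledgment discipline, and all of the names are my own illustrative choices rather than McCullough's; the point is only that blocking writes on bounded buffers are enough to copy the red input onto the blue output.

    import queue
    import threading

    # The red input stream S; the values are invented for illustration.
    SECRET = [1, 0, 0, 1, 1, 0, 1, 0]

    buf_a = queue.Queue(maxsize=1)   # blue process A -> red (bounded, so writes can block)
    buf_b = queue.Queue(maxsize=1)   # blue process B -> red
    buf_c = queue.Queue()            # acknowledgments flowing back toward the blue side
    public_log = []                  # the innocuous "blue" output on the left

    def blue_producer(buf, bit, n_sends):
        """A blue process: each time a (blocking) send succeeds, it writes its
        identifying bit to the public output and posts an acknowledgment."""
        for _ in range(n_sends):
            buf.put("msg")               # blocks while the buffer is full
            public_log.append(bit)
            buf_c.put("ack")

    def red_consumer():
        """The red process: multiplexes reads from A and B according to SECRET,
        waiting for an acknowledgment before consuming the next secret bit."""
        buf_c.get(); buf_c.get()            # discard the acks from the initial buffer fills
        for s in SECRET:
            (buf_b if s else buf_a).get()   # freeing a slot unblocks that producer
            buf_c.get()                     # the producer logs its bit before acking

    threads = [
        threading.Thread(target=blue_producer, args=(buf_a, 0, SECRET.count(0) + 1)),
        threading.Thread(target=blue_producer, args=(buf_b, 1, SECRET.count(1) + 1)),
        threading.Thread(target=red_consumer),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # After the two entries from the initial fills, the public blue output
    # reproduces the secret red input exactly -- the covert channel.
    print("secret     :", SECRET)
    print("public log :", public_log[2:])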
One immediate conclusion that one can draw from this result is that some security properties simply are not composable, that is, even if two different systems exhibit the property, there is no guarantee that putting them together will result in a system that exhibits the same property. However, I think that there is another informative way of looking at what's happening in McCullough's example: connecting the blue to the red system gives the blue system an opportunity to emulate red.
Emulation is an idea that goes all the way back to Alan Turing's universal computer. In essence, one machine emulates another whenever it performs computations equivalent in rule and structure to those of the other. This is what gives Turing's machine its universality; it has the capability to emulate any of a very large class of other machines, and hence to perform any computation of which any machine in that class is capable. Chip developers use emulators to test against design errors. Nostalgic gamers use emulators to play titles from long-defunct platforms on modern personal computers. Interpreted programming languages (e.g. Java) use an emulated abstract computer to achieve some measure of portability. However, the basic principle that one machine (or tool) can serve in the stead of any of a large class of others seems to me much broader and deeper than any of the niche purposes to which it has so far been put, especially as regards security.
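As a concrete picture of what emulation means (a sketch under my own simplifying assumptions, with an invented toy machine rather than anything from the literature), a few lines of Python can act as a host that runs any single-tape Turing machine handed to it as a transition table:

    from collections import defaultdict

    def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
        """Emulate a single-tape Turing machine. `rules` maps (state, symbol) to
        (new_state, new_symbol, head_move), with head_move in {-1, 0, +1};
        the machine stops when it enters the state 'halt'."""
        cells = defaultdict(lambda: blank, enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            state, cells[head], move = rules[(state, cells[head])]
            head += move
        return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

    # An invented toy machine for illustration: append one '1' to a unary numeral.
    increment = {
        ("start", "1"): ("start", "1", +1),   # scan right over the existing 1s
        ("start", "_"): ("halt",  "1",  0),   # write a trailing 1 and stop
    }

    print(run_tm(increment, "111"))           # prints '1111'

The host knows nothing about unary arithmetic; it simply carries out whatever rules it is handed, which is the whole point of emulation.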
Most prisons in the United States place rigid constraints on what sorts of items prisoners are allowed to keep in their cells. The reason for this is that a small piece of metal or a hard, sturdy object can easily be improvised into an excavating tool or a weapon, given the proper motivation -- of which prison inmates typically have an abundance. Ingenuity in making and using tools is one of the distinguishing and noteworthy characteristics of humanity, and the dramatic success of the tool-wielding primate is a testament to the power of a usable tool, however crude. This is not a new or surprising observation. However, it is truly astonishing to observe the huge disparity between the smallness of the means necessary to cause a disruption and the explosive magnitude of the disruption itself. Who would have ever imagined that using a bludgeon instead of fists and teeth would have catapulted humans to the position of dominant predator, or that millennia later those same humans would be improvising artifacts for personal hygiene into deadly melee weapons?
What we have here is a very broad principle. How does it apply?
A number of attempts were made during the 1960s and 1970s to construct a universal Turing machine with the smallest possible number of states; Marvin Minsky constructed one with only 7 states and 4 symbols in 1962 [5], and successive attempts collected and published by Yuri Rogozhin in 1998 produced at least one smaller machine. As early as 1985, Stephen Wolfram conjectured that his famous Rule 110 one-dimensional cellular automaton is computationally universal; a proof (due to Matthew Cook) is sketched in his widely read and controversial 2002 opus [6], alongside a universal Turing machine with only two states and five symbols that works by emulating Rule 110. Though such constructions always give off some whiff of obsession with arcana, they also give very concrete evidence of a genuinely startling conclusion: it takes very, very little to make almost anything possible.
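For a sense of just how little machinery is involved, the entire "program" of Rule 110 is an eight-entry lookup table. The sketch below uses my own choice of ring size and seed, and a finite ring rather than the infinite, patterned background the universality proof actually requires; it just evolves the rule for a few generations.

    # The Rule 110 update: a cell's next value is a fixed function of its
    # three-cell neighborhood -- the whole "instruction set" is this table.
    RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
               (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        n = len(cells)
        return [RULE110[(cells[i - 1], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # Evolve a single live cell on a small ring for a few generations.
    row = [0] * 63 + [1]
    for _ in range(24):
        print("".join(".#"[c] for c in row))
        row = step(row)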
This should be a very potent lesson for the discipline of computer security. Systems can be very tightly walled off from the rest of the world, but it takes only a very narrow gap for an attacker to worm his way in. A first reflex is to try to wall the system off even more tightly, but this ultimately turns out to be a failing solution: one can only constrain the system's use so much before it becomes completely unusable, whereas the attacker needs only the slightest concession in order to devise an exploit. This seems to confirm Wulf's argument that a pitched battle over the integrity of a rigid boundary is doomed to defeat. However, it also suggests a potentially very useful language in which to phrase the problems of security. Consider the following informal conjecture:
If a system S allows its users functionality F, then S is vulnerable to an exploit E if and only if F is computationally expressive enough to emulate E.
This is plainly visible in McCullough's construction above; the blocking signal from a full buffer gives the blue portion of the system exactly enough information to act as if it were reading off of the secret input buffer. Erik Buchanan, Ryan Roemer, and Stefan Savage presented a method of constructing exploits without the use of code injection (which I also cited here several months ago) at Black Hat 2008 [1], which seems to confirm this very same intuition. The authors showed that subverting normal control flow was sufficient to coerce "trusted" code into arbitrary computations, thus producing "bad" code without the need to introduce any new code.
Explaining security breaches in terms of "emulating your inferiors" also provides a graded metric of susceptibility. (This may be useful if it turns out that Mr. Wolfram is indeed right and the world is rife with computational universality.) If one system is capable of emulating another, the emulated computations will always run more slowly than native, non-emulated computations by some overhead factor. However, a given system may be better suited to emulating some systems than others, and the program required to set up the emulation may be substantially more or less complicated. A rock might serve as either a hammer or a crude knife, but it makes a much better hammer than knife; an actual knife, by contrast, can be improvised for a wide variety of very practical purposes. The same is undeniably true of computational systems; although the Rule 110 automaton can emulate a universal Turing machine, and that universal Turing machine can emulate the operation of the latest multi-core processor running your favorite operating system, the time performance of your system would decrease by an enormous factor, and the emulation would require quite a bit of memory besides. In the same vein, dynamic database access through a webpage makes it relatively easy to steal privileged information from the database, because (if the website is badly designed) the user has the chance to pass arbitrary SQL queries to the server; but that same access would be much more difficult to leverage, by itself, into an attempt to seize control of the operating system kernel.
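The SQL case can be made concrete with a hedged sketch (the table, the data, and the queries are invented for illustration): splicing user input into the query text hands the user the whole query language, while a parameterized call confines the same input to the role of inert data.

    import sqlite3

    # An invented toy schema, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", "alice-secret"), ("bob", "bob-secret")])

    user_input = "nobody' OR '1'='1"   # an attacker-controlled string

    # Vulnerable: splicing the input into the query text lets it rewrite the query.
    leaky = conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % user_input).fetchall()
    print("string-built query returns:", leaky)      # every secret comes back

    # Safer: a parameterized query treats the input strictly as a value, not as SQL.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized query returns:", safe)      # no rows match the literal

In the language of the conjecture above, the string-built query lets the offered functionality (looking up one row) be driven into emulating a different computation (dumping every secret in the table).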
Malware, then, is just software coerced into emulating an unwanted computation; attack vectors are essentially abstract buses over which the victim machine receives instructions from its attacker; and exploits are essentially programs running on a maliciously repurposed abstract machine. This is, of course, all highly speculative, but it seems to give an expressive language in which to formulate many of the problems of security, and to tie together many well-developed branches of computer science.
[1] Buchanan, Erik, Ryan Roemer, and Stefan Savage. "Return-Oriented Programming: Exploits Without Code Injection". Presented at Black Hat 2008, slides available here
[2] Goguen, J. A. and J. Meseguer. "Security Policies and Security Models". IEEE Symposium on Security and Privacy, 1982.
[3] Landwehr, Carl E., Alan R. Bull, John P. McDermott and William S. Choi. "A Taxonomy of Computer Program Security Flaws, with Examples". ACM Computing Surveys, 26:3, September 1994.
[4] McCullough, Daryl. "Noninterference and the Composability of Security Properties". IEEE Symposium on Security and Privacy, 1988.
[5] Minsky, Marvin. "Size and Structure of Universal Turing Machines". Recursive Function Theory, Proceedings of the Symposium in Pure Mathematics, 5, American Mathematical Society 1962.
[6] Wolfram, Stephen. "A New Kind of Science". Wolfram Media, 2002.
[7] Wulf, William A. and Anita K. Jones. "Reflections on Cybersecurity". Science, vol. 326, 13 November 2009.