Wednesday, August 26, 2009

A Bridge to a Bridge to Nowhere: A Short Critique of Transhumanism

One thing that I must credit to the Transhumanist movement is its advancement of ideas that are extremely bold and extremely forward-looking. Not everyone may see this boldness as a credit, but any sensible observer has to acknowledge it. One of the central and most radical ideas of the movement might be termed the "Objection to Death" or "Suffering as an Engineering Problem." (Though I mean for these terms to be neutrally descriptive, they are not, to the best of my knowledge, used within Transhumanist circles.) Concisely, the Transhumanist view is that many, or perhaps all, kinds of human suffering can be ameliorated or even eradicated through the proper application of technology. It should be pointed out that, as uncomfortable as the aims of Transhumanism make many people feel, the express Transhumanist bias toward technological applications whose motives are unambiguously benevolent is at least admirable. This view is especially noteworthy when contrasted with more conventional attitudes toward technology, which treat it simply as a means to power or profit, whether for good or for ill. Even more noteworthy, however, is the potential of the Transhumanist program to upset centuries of social and cultural organization built around certain basic constants of biological human existence.

The Transhumanist initiative is quite earnest and already under way. By his own account, the accomplished scientist, entrepreneur, and prominent Transhumanist Ray Kurzweil takes several hundred pills a day ("The Singularity Is Near", Kurzweil, 2005) in a bid to extend his natural life long enough to benefit from even more radical life-extension technologies whose development he anticipates in the near future. Mr. Kurzweil terms this the "Bridge to a Bridge" strategy (ibid.), an allusion to his expectation that the hastening pace of scientific paradigm shifts and engineering know-how will allow human lives to be extended through a succession of different technologies. Immortality of any degree has long been a human fixation, and so one can hardly doubt Mr. Kurzweil's sincerity. What is remarkable is the coherence and proactive organization of the initiative: Mr. Kurzweil's program of radical technological life extension, organized in collaboration with the medical doctor Terry Grossman, is apparently so well developed that it is being marketed to the general public. Although I would argue that Transhumanism is a distinctly apocalyptic cultural movement, I cannot help but give Mr. Kurzweil credit for taking immortality into his own hands. However, rather than debate the relative virtues or vices of technological life extension, I would like to examine the Transhumanist response to death when death cannot be avoided.

I agree with the thesis that radical life extension may become possible in the near future. I also agree with the thesis that technological advancement has the potential to greatly enhance the quality and the extent of human life, perhaps even to the degree that the human lifespan becomes indefinitely long. However, I fail to see how either of these theses in any way excludes catastrophe. Suppose I am in excellent health, benefiting from all of the latest advancements in health-, youth-, and vitality-technology, but while crossing the street on my way to an appointment with my life-extension consultant, I am hit by a speeding bus and instantly killed. You may object that, if I had been really serious in my desire to stay alive indefinitely, I would have been more scrupulous in looking both ways before crossing the street, or perhaps would have avoided busy streets altogether. Suppose I do just that, and live to an age contemporary with the development of full-body prostheses. But suppose then that, in my hurry to adopt the latest advancements, I come to inhabit a robotic body with a serious design flaw, as a result of which my life-sustaining functions suddenly and unexpectedly fail and I expire before repairs can be made. You may object that such dangerous technology would never be rushed to market, and that we should expect a thoroughly developed infrastructure for the care and maintenance of such bodies, to ensure that such episodes do not happen. That's not unthinkable. So suppose then that I live into a ripe old advanced technological age, and all information constituting my identity is uploaded into a global computer network where it can reproduce arbitrarily, instantiating itself in as many and as widely varied forms as it sees fit, whether corporeal or otherwise. This is certainly consistent with visions that Mr. Kurzweil himself has put forward. But suppose that in spite of all this, war breaks out and an electromagnetic pulse weapon fries all the computer hardware on which my various infomorph copies live; or suppose that a runaway horde of pathologically replicating nanobots devours our civilization, clones and cyborgs and machine-ghosts and all; or suppose that no source of energy sufficiently cost- or labor-efficient to power our advanced technological civilization ever materializes, and all our gadgets, including our high-tech selves, simply expire. It's easy to see that this list could go on indefinitely. I think it is equally reasonable to say that each unlikely, but possible, doomsday scenario could easily be met with some no-less-likely, and no-less-possible, solution. Sure, something could go terribly wrong, but on the other hand, we could find a way to set it right. Sure, things might greatly improve, but that doesn't mean nothing bad will ever happen again.

This problem-solution dialectic highlights an essentially reactive feature of technological solutions: new technologies always arise in response to real or perceived problems. We may develop technology in anticipation of a problem -- for example, techniques for diverting an asteroid on a collision course with Earth. It may also happen that unexpected benefits accrue in addition to those expected from the development of something new -- perhaps you could use a computer printer to build living organs (see Boland, Damon, and Cui, "Applications of Inkjet Printing to Tissue Engineering", Biotechnology Journal, 2006). Nonetheless, it seems unreasonable to expect that we can anticipate all problems before they arise, or that unforeseen secondary applications of existing technologies could account for all contingencies. The Transhumanist position is very proactive, in that it emphasizes a sort of "eternal hope" in the form of ever more ingenious solutions to ever deeper and ever more insidious problems. Personally, I find this attitude commendable. An underlying assumption of the Transhumanist position seems to be that the problem-solution dialectic could go on as long as humankind sees fit. I see no reason to challenge this assumption. At the same time, an inexhaustible supply of technological solutions in no way implies an exhaustible supply of human problems, technological or otherwise.

I would tend to side with Mr. Kurzweil that claiming certain problems to be unsolvable, without empirical proof or rigorous deduction to back up the claim, is staking out a regressive position. I'd like to take special care here to distinguish unqualified impossibility from the more precise notion of relative impossibility. For example, it can be rigorously shown that no machine functionally equivalent to Turing's famous abstraction is capable of determining, for every given Diophantine equation, whether it has an integer solution. However, this is quite different from claiming that there is no possible way to determine whether the very same equation has the sought-after solution. (For a less airy example of what I mean, consider that you can't get blood from a turnip, but that doesn't mean it's impossible to get blood from anywhere.) To argue that A is impossible by method B is a sensible argument that can be proven or disproven, and is at least exact enough to debate in a reasonable way. To claim that A is unconditionally impossible is to advance an incomplete argument, one that oftentimes implicitly presumes, but doesn't state, a certain method or collection of methods for accomplishing A. As such, any argument to the effect that indefinite human life extension is simply impossible would need to show that no conceivable technology could ever accomplish this end, and thus entails the daunting task of attempting to characterize 'all conceivable technologies'. At this point, many Transhumanist critics resort to the argument that there is something fundamentally misguided or evil about human augmentation or artificial life extension, but it is beyond my present scope to discuss this class of moralistic objections. Instead, let's presume that radical technological life extension is entirely possible, and examine the consequences. What would be the consequences of Mr. Kurzweil and his fellows living for as many centuries as they pleased, aided by a succession of dazzling scientific and technological advancements?
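To make this asymmetry concrete, here is a minimal sketch of why the Diophantine example works the way it does (my own illustration in Python, not drawn from any of the works cited here). Searching for a solution is a perfectly effective method: the brute-force procedure below halts and reports a solution whenever one exists. What the impossibility result rules out is a single Turing-style procedure that also correctly reports "no solution" for every equation that lacks one; it says nothing about whether some particular equation might be settled by other means.

    # A semi-decision procedure: halts with a solution if one exists, but may
    # run forever otherwise. No Turing-equivalent algorithm can decide the
    # "no solution" case for arbitrary Diophantine equations.
    from itertools import count, product

    def find_integer_solution(p, num_vars):
        """Enumerate integer tuples in expanding cubes; return the first
        tuple at which the polynomial p evaluates to zero."""
        for bound in count(0):
            for candidate in product(range(-bound, bound + 1), repeat=num_vars):
                # Test only tuples on the surface of the current cube, so
                # that each candidate is examined exactly once.
                if max(abs(c) for c in candidate) == bound and p(*candidate) == 0:
                    return candidate

    # Example: x^2 + y^2 - 25 = 0 has integer solutions, e.g. (-5, 0).
    print(find_integer_solution(lambda x, y: x**2 + y**2 - 25, 2))

In other words, the impossibility here is relative to a class of methods, which is exactly the distinction drawn above.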

I contend that there are no meaningful consequences to humans "transcending biology" (Kurzweil, 2005). This may seem a very strange and surprising twist of the argument, so please let me explain. In the preceding paragraph, we constructed an argument that effectively voids any objection that the Transhumanist program is impossible in principle. (Note, however, that impossibility in practice is an important matter in its own right.) Essentially, this argument says, "You can't foresee technological solutions that haven't been invented yet, so you can't claim that they won't be able to solve problems that we're already aware of." However, this argument has a natural and equally valid dual, obtained by simply swapping the roles of 'human problem' and 'technological solution'. The same line of argument that allows for a continuous train of successive technological solutions also implies that you cannot foresee the full extent of problems that haven't yet arisen. In some sense, our material problems are invented artifacts, in exactly the same way that our technological solutions are invented artifacts. It's true that you can't dismiss the effectiveness of a solution that hasn't been conceived and implemented yet, but it's equally true that you can't dismiss the reality or seriousness of a problem that hasn't yet arisen and confronted you. There are always problems. That's just life. Radical life extension and cyborgs and strong AI or not, human civilization is going to keep on doing what it has always done.

This is a basic underpinning of Mr. Kurzweil's own argument in Singularity, since his case for the coming technological revolution is at least partially historical and inductive. The distinction I make between my own views and those of the Transhumanist community is that there seem to be deeply personal motivations behind much of the enthusiasm for radical life extension. This is not surprising; zest for life and fear of death are both very natural, and it is not strange that people should dearly want to keep on living and to keep from dying. What I would be interested to hear addressed in greater detail is what these zealous exponents of the future have to say about the specters of disaster that have always hovered at the periphery of human life. It is a fine thing to wish to live for centuries and to do it by means of advanced technology. But how will you cope with the unavoidable possibility that this might not happen, or that your aspirations might be unexpectedly cut short? I feel that the arguments I have laid out here make a convincing case that, even if humans as a civilization do conquer aging and death the way we have conquered illness and scarcity, substantial problems will always remain for us. Take note that, in spite of dramatic improvements in overall quality of life for humanity in aggregate, people still get sick and people still go hungry, and a great many still get sick enough or go hungry enough to die from it. Transhumanism as a coherent movement is cultural, but not spiritual. This distinction may be deliberate on the part of Transhumanists, but it is very important to note. There are strong undercurrents of a sort of spirituality in much of the Transhumanist literature, and it would be very interesting to see them brought forward and made explicit in the writings of some prominent exponent of the movement. What bearing do these more esoteric, less empirical views have on the inescapable shadows of catastrophe and failure?

If there is any danger in the Transhumanist movement, it is not that its success will somehow rip away everything that we hold dear in our culture, but that it will change the face of our whole way of life at great expense, leaving us with something essentially the same as what we had before. It would be interesting to apply similar arguments to the technological revolutions of the past. What about the Industrial Revolution? What about the Agricultural Revolution? Doing so would certainly bring more of the relevant issues into sharp focus. I am not arguing here that we should fear or resist change. Rather, I am arguing for a measured and perhaps more reasoned attitude toward change, one that makes a clear distinction between enthusiasm driven by personal motives and arguments about the future development of the vast social, cultural, and technological edifice that is our civilization. Romantic critics should not worry themselves too much. Transhumanists are not going to solve all our problems. We can always invent more.

In the end, this reduces to the fundamental philosophical question of technology: What exactly is it that we're trying to accomplish?
