“Can an object say to its maker, ‘Why did you make me like this?’” – Paul, c. 50-55 c.e.
“[The writer,] while he is writing on good subjects, is by the very act of writing introduced in a certain measure into the knowledge of the mysteries and greatly illuminated in his innermost soul; for those things which we write we more firmly impress upon the mind…While he is ruminating on the Scriptures he is frequently inflamed by them. […] He who ceases from zeal for writing because of printing is no true lover of the Scriptures.” – Johannes Trithemius, c. 1494 c.e.
I’m ambivalent about some of this week’s readings in my digital humanities course, largely because I read so much science fiction as an adolescent. While I agree, for example, with another student’s comment regarding what I’ll refer to (for the sake of convenience) as the overall coolness of Vannevar Bush’s idea for what is essentially a desktop computer (in 1945! In the Atlantic Monthly!), I’m a little leery of attributing predictive power even to someone so obviously in touch with the progress of technology during his time.
My big question is about causality. Did the chicken come from the egg, or did the chicken think, “hey, you know what would be a cool way to reproduce and make a delicious if high-cholesterol breakfast?” Well, maybe not in those terms, and no, I’m not suggesting teleological evolution – I’ve read just enough Gould for that. Nor am I suggesting the more prima facie reading of my own terms: that someone with the sheer determining power that can come with Bush’s status could cause changes in computing development by writing an article like that (though that’s closer to my sense). Rather, I’m suggesting that the way we discuss causality in technological developments may be backward.
I hear you already: “But Dan, a.) Bush was a brilliant scientist, b.) nearly all of the technologies he suggested have come to pass (voice recognition and transcription, desktop… or desk-in computing, as the case may be, photography that doesn’t require a development process and can be fitted to the head of someone who isn’t afraid of looking rather goofy, etc.), and c.) he repeatedly acknowledged his own event horizon.”
To which I respond, “Yes. But he’s not here, and I am. Because I love you. So don’t interrupt.”
What I mean, briefly, is that I think there is a tendency to consider technology an object that we create, one which may have the potential to change our way of life but which essentially originates in human development. Think of the Terminator movies. Ignoring that the whole premise was bastardized by the continuity error of the second (explain it away all you want; a big part of what made it cool in the first one was that only organic material could travel through time, unlike the liquid metal guy in the second, and thus weapons couldn’t come through), the premise of the films revolved around the invention of Skynet* as the result of a single technological breakthrough that had to be destroyed.
Here’s the problem with that scenario: that’s not, generally, how breakthroughs occur. We discuss digital humanities as a collaborative effort and computers as promoting connectivity as a whole, but our language and thought processes still revolve around the idea of Cartesian subjects connecting to others through various mediating influences. As Alex [Reid] asked, how could we deal with the idea of humanities papers coauthored by ten, or twenty, or ten thousand people? (The Bureau of Labor Statistics puts the number of postsecondary teachers of English and literature in the U.S. at around 75,000.) To extend that: how do we separate the thoughts of ten thousand people whose work comes to the same conclusion, from the standpoint of ideas as discrete entities?
Or, to take it one step further in that direction, how do we account for the parallel evolution of technologies? Bush dwells on the labor intensity required to produce machines, noting (for example) that Babbage, even with a great deal of support, “could not produce his great arithmetical machine,” and, further, “[h]ad a Pharaoh been given detailed and explicit designs of an automobile, and had he understood them completely, it would have taxed the resources of his kingdom to have fashioned the thousands of parts for a single car, and that car would have broken down on the first trip to Giza” (38). (Ed. aside: take THAT, Hank Morgan!) From a different angle: Elisha Gray and Alexander Graham Bell filed for patents on the telephone (independently, it’s believed) on the same day. Galileo was one of four people to “discover” sunspots during the same year. Some people talk of “Moore’s Law” – strictly, an observation about transistor counts, though it’s often invoked as a rule governing technological development at a level that virtually ignores the human actors. In a reductive way, humanity (viewed as a whole, rather than as a collection of ego-I’s) develops things in ways that could be mathematically modeled and predicted.
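To make that modeling claim concrete – a sketch of my own, not anything Bush or Moore actually wrote down – Moore’s observation is usually rendered as simple exponential growth in transistor counts:

\[
N(t) = N_0 \cdot 2^{\,t/T}
\]

where \(N_0\) is the count at some starting point and \(T\) is the doubling period, conventionally around two years. The symbols here are mine, but the tidiness is exactly the point: not a single human actor appears anywhere in the formula.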
Regardless of what one thinks of this law (I’m pretty skeptical), what I’m driving at in these examples is the idea that even J.C.R. Licklider didn’t go quite far enough in his “Man-Computer Symbiosis” (1960): his division between machines as, more or less, prostheses versus machines as symbiotic entities was pretty much outmoded when humans began to live in England during the winter months. Humans are not the only species to use tools, it’s true: some primates and dolphins have been known to use objects as tools. But humans, at least in our age and at our numbers, are dependent on these technologies in ways that other species generally are not on those they create. Moreover, following Sartre’s idea of labor in the Critique of Dialectical Reason (CDR), I would argue that humans are shaped by their tools in ways more immediate than animals are. In Sartre, the concept is referred to as the “practico-inert” (the best post I’ve ever seen about the concept refers to it as “antipraxis,” a term I don’t find in Sartre but which may be a more useful way of conceptualizing it). Basically, when we work on something, we infuse it with our labor, with some element of ourselves, and this worked matter gains a life of its own. On the one hand, our labor-investment in the object allows it to repeat that same labor over and over, extending in incredible ways the potential labor available for new projects: it may take a long time to create all of the things that allow us to build a robot that makes cars, but those robots will keep making cars, with a certain amount of maintenance, thus replicating that labor without an apparent diminishing return on the labor infused.
The second aspect of this, however, is that some element of this praxis (see why that other term is handy?) with which the machine (or whatever) is infused gives it a life of its own. In Sartre’s terms, we create finalities in these machines, which then become finalities of their own and form counter-finalities that act against our activities (generally) or independently of them (my thought – Sartre is still, after all, fairly anthropocentric). Sartre’s rather negative example is the years of farming and clearcutting in China that paved the way for lethal mudslides, but I think we could also envision it as roads determining destinations (Deleuze), washing machines determining hygiene (Bryant), etc.
Bryant makes the (very insightful, it seems to me) observation that the goals developed from new technology are posited for us, rather than being enunciated or set by us (really, you should read the entry). What I would suggest, in line with what I’ve been trying to get at in general, is that this is no less true of how we consider ourselves (in any sense). If we investigate the use of the “I” in “I didn’t set this goal; I was given this goal by the emergence of technological webs,” what I think we find is a disconcerting effect on what we could mean. Do we mean, for example, “I” as a body-and-mind duality, “I” as Wittgenstein’s subject of the predicate, or “I” as a transcendental soul? If we consider it in the psychophysical sense of Husserl – a point of observation embedded or inscribed in a body, extending to those things through which we can experience sensations – then there are going to be rather profound implications for how we consider the surplus of the body, like the kidney, that we generally can’t feel. And what qualia are we implicitly postulating in differentiating the user seeing words on a screen from someone seeing an “object” in life?
Moreover, the idea that “symbiosis” is contingent on dependence, as seems to be the suggestion, raises a similar concern. I volunteered for a while on an ambulance in a predominantly agricultural area, and in brief: yes, people still consider themselves an “I” and can function with some parts removed, yet they do not seem (though I admittedly did not think to ask) to regard those parts, even once gone, as having been extraneous technologies.
Which brings me back to the Pharaoh and his car. The problem is not that his kingdom couldn’t have made the metals or equipment (though that is, perhaps, true). The problem, I think, is that we assume a linear direction of technology, time, and production. We envision people thinking up new things, perhaps as enabled by the technology of their times, and moving forward, contributing in a way that implies some teleology to the technological landscape – when the distributive function of intelligence and creation might suggest, instead, that what is at stake are the terms by which we define finitude and the possibility of its existence in a meaningful way into the future.
*a real name, evidently, not only for a UK military communications satellite system but also for a Belgian multimedia company whose marketing firm, I submit, should rethink things