I feel the need to re-read Douglas Engelbart’s “Augmenting Human Intellect” to more fully digest both what he is saying and its implications. Pressed for time, I fear I read it too much with what Nicholas Carr refers to in his book, The Shallows, as “The Juggler’s Brain”–skimming rather than deeply engaging. However, I’ll do my best here to employ the inadequacies of the “symbol structure” of the English language to consider some aspects of his text.
Specifically, I’d like to think a little bit about this passage from part III:
“Conceptually speaking, however, an argument is not a serial affair. It is sequential, I grant you, because some statements have to follow others, but this doesn’t imply that its nature is necessarily serial. We usually string Statement B after Statement A, with Statements C, D, E, F, and so on following in that order–this is a serial structuring of our symbols. Perhaps each statement logically followed from all those which preceded it on the serial list, and if so, then the conceptual structuring would also be serial in nature, and it would be nicely matched for us by the symbol structuring.
“But a more typical case might find A to be an independent statement, B dependent upon A, C and D independent, E depending upon D and B, E dependent upon C, and F dependent upon A, D, and E. See, sequential but not serial? A conceptual network but not a conceptual chain. The old paper and pencil methods of manipulating symbols just weren’t very adaptable to making and using symbol structures to match the ways we make and use conceptual structures. With the new symbol-manipulating methods here, we have terrific flexibility for matching the two, and boy, it really pays off in the way you can tie into your work” (103).
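Engelbart's example is, in modern terms, a small directed acyclic graph of statements. As an aside for readers who think in code, the network he describes can be sketched in a few lines of Python (the representation and library choice are mine, not his), and a topological sort then produces one valid "serial" ordering, which is exactly his point: the single chain we write down is just one flattening of a richer network.

```python
# A minimal sketch of the statement dependencies from the quoted passage,
# modeled as a directed graph. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# Each statement maps to the set of statements it depends upon,
# following Engelbart's example: B depends on A; E on B, C, and D;
# F on A, D, and E; A, C, and D are independent.
deps = {
    "A": set(),
    "B": {"A"},
    "C": set(),
    "D": set(),
    "E": {"B", "C", "D"},
    "F": {"A", "D", "E"},
}

# static_order() yields the statements in an order where every statement
# comes after all of its dependencies -- one serial path through the network.
order = list(TopologicalSorter(deps).static_order())
print(order)  # one of several equally valid serial orderings
```

Several different orderings satisfy the same network (C could precede or follow B, for instance), which is what makes the argument "sequential but not serial": the sequence is a choice imposed by the medium, not dictated by the conceptual structure.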
As one who has for years taught his students how to read and to write arguments in a sequential and serial fashion by using the symbol structure of language to construct an essay, I find this passage and Engelbart’s subsequent discussion of it both interesting and troubling.
In some respects, the serial way of organizing an argument is indeed limiting. In the linear mode of writing to which we are accustomed, various links, antecedents, and overlapping chains of thought can only be alluded to, cast as asides, or left out entirely; otherwise we risk losing our readers. To capture a subject in all of its complexity, the advantages of a “conceptual network” of overlapping linkages and chains quickly become apparent, as Engelbart notes.
So then what happens if we augment our intellect by using a computer’s considerable power and memory to track these links and call up quick associations and chains of thought so that we can, theoretically, see the conceptual structures more clearly than we could through the older (written, linear, symbol-structured) mode of argument?
The old concern arises: that the complexity of mapping and tracking these structures in this manner, and the ease of accessing those structures on our computers, cause us to cede the act of thinking to our machines. We risk losing track of the threads. Perhaps we begin to make (more serious versions of) mistakes like one made by one of my very bright students a few weeks ago, when he looked up material online about E.M. Forster and then noted that he’d gotten the information from Forster’s biographer, Howards End.
In brief, we might risk inverting Engelbart’s construction, so that the “problem solver” isn’t the human being but is instead the computer, and we, not the computer, are the “clerks.” And at what point might we, like Herman Melville’s famous clerk, Bartleby, when faced with the need to think for ourselves, simply say: “I would prefer not to”?