The way we choose to organize our world dictates our own place within it—in Gothic times the cathedral, for example, stood at the center of town, inherently helping us perceive what was important, where we stood in relation to it, and how we should and could interact with the rest of the space surrounding it.
The first generation of interface designers had to decide, then, how to organize the computer space. They had, essentially, an entire world at their fingertips, which they could mold and design and organize in any way possible—the space could look like anything. It was important, however, especially given the limitations of the technology of the time, that the space was easy to represent.
In this week’s reading of Interface Culture, Johnson takes us through the creation and evolution of the desktop from its early stages to the interface we know today. Throughout his discussion in this chapter, he consistently emphasizes the idea of the “desktop metaphor.” Similar to the metaphor we discussed last class, it encompasses the way in which reality is represented and even simulated on the desktop interface, and how those representations help us understand the way we use and navigate it.
The desktop metaphor was born in 1972, at a Xerox research center in Palo Alto (PARC). Working off of Engelbart’s ideas about mice, bitmapping, and windows, a researcher named Alan Kay stumbled upon the first implementation of such a metaphor in his hesitation over Engelbart’s windows. He said that the windows were difficult to use because they lay side by side and the screen could get crowded easily. Kay suggested that they “regard the screen as a desk, and each project, or piece of a project, as paper on the desk” (Johnson, 47). He decided that the windows should overlap, just as pieces of paper in real life would. A fitting analogy for a paper company, no? Windows gave the computer space while Kay’s overlapping of them gave the computer depth. And so, the original desktop (metaphor) was born.
The original metaphor was weak, but as the Xerox PARC team continued to develop the interface, they began to tighten it up. They realized that if the computer could look like anything, and since the computer was on its way to replacing the world of filing cabinets and stacks of paper, it may as well imitate that world. This expansion of the metaphor to digital files, folders and trash cans ensured that a user’s navigation of the computer was made that much easier, and that much more familiar.
Why a “desktop” though? If the space could look like anything, why didn’t it look like a park, or a house? I mean, there was already the window metaphor, so why not hallways and doors? The desktop most likely seemed like the most obvious and relatable way to represent the interface because it reflected what the computer was used for. In the ’70s the computer was mostly being used in place of paper, and a desktop simply reflected that. As we will soon see, something like a “house metaphor” doesn’t really work as well.
Xerox PARC completed the interface and packaged it as Smalltalk, an experimental operating system. Xerox never did anything with it, but a few years later a man named Steve Jobs got his hands on it and created the first successfully marketable personal computer in 1984, the Macintosh—“the computer for the rest of us.” The computer, with the use of the Smalltalk technology, became a medium. It was no longer a flat vehicle. Now, it was creative and had character, complete with folders, trash cans, and icons. The creation of the Macintosh was the first time that a computer interface was genuinely user-friendly, and it marked a revolutionary shift from a concentration on hardware to a fascination with software. Here is Apple’s one-time Super Bowl ad that illustrates this countercultural tone.
And another to show how the Macintosh desktop was marketed thereafter.
Bill Gates’s Windows system, slightly different in design but still built on the same original metaphor, eventually outdid Apple and became the dominant system in the marketplace. The triumph of Microsoft Windows confirmed the effectiveness of the desktop and its ability to translate well to the average user. Still, many were initially critical of the new interface, writing it off as an unnecessary toy. It was deemed too silly a design for a serious corporate environment, which was happy with simple drop-down menus rather than icons.
Johnson goes on to talk about the importance of subtlety when implementing the metaphor. He describes an interface called Bob, released by Microsoft in 1995, which took the use of metaphor too literally, simulating a 3D environment modeled after a living room. The interface wasn’t just a representation of real-life objects but a complete simulation of them. A calendar hung on the wall, a mailbox with envelopes sat on the coffee table, and to enter the interface you had to knock on the door. Needless to say, the system was a failure, despite its intent to make the user interface more relatable and user-friendly.
Johnson claims that Microsoft Bob wound up preventing novice users from exploring beyond the simple interface. Users would rely on the comfortable look and feel of a home and never really explore the computer’s capabilities and move beyond the novice level of computer use. It might push the user further from the technology. The desktop metaphor works because it is simply that: a metaphor. Here is a tour of the Bob interface.
Johnson’s insights only take us as far as 1997, the year in which Interface Culture was written, but here is another, more recent graphic interface, BumpTop, that turns the desktop metaphor into something of a desktop simulation. Does Anand Agarawala take the metaphor too literally?
Johnson sums up by wondering what the future of interfaces might hold in a world of public life on the Internet. Well, we know just what happened with the introduction of online interfaces like MySpace and Facebook and even WordPress. This notion of “interface culture” is a real one, now even more than in Johnson’s time.
The next reading, which I will discuss briefly, is Lev Manovich’s discussion of “teleaction.” Teleaction literally means “acting at a distance.” When we talk about teleaction, we are talking about our ability to be telepresent (present at a distance) and at the same time use controls to manipulate and affect the environment in which we are telepresent.
We can be telepresent through the use of a webcam. We can see, in real time (a very important concept here), what is happening in a remote location anywhere in the world, or essentially the universe. We are not actually present in these remote locations, but it is as though we are. Teleaction, then, is enabled through certain image-instruments that allow us to act in that distant location, such as an operator controlling a vehicle underwater to explore the bottom of the ocean (the opening scene of Titanic) or pushing a button in a small room to launch a missile from one remote location and aim it at another. To teleact is to manipulate reality through representations.
These ideas of telepresence and teleaction are not restricted to the real world, however. We can be telepresent in a computer-generated world as well, a world commonly known as virtual reality. I would like to regard the desktop as a virtual reality, especially as its interface becomes more and more three-dimensional and interactive. As I said at the very beginning of this post, interface designers had an entire world to create from scratch, which is essentially what they did in the simplest way. When we use the desktop, it is as though we are telepresent in this digital workspace. The desktop interface is a representation—we are not actually inside of this virtual computer world. Yet by using other interfaces such as the mouse we are able to control and manipulate it. We are essentially teleacting.
And thus concludes our discussion of interface culture.
For your amusement.