Part E - Interaction


Describe the evolution of interface design through three paradigms


Interactions with computers cannot be described in terms of a single conceptual model.  The study of HCI does, however, fall within several paradigms. 

A paradigm is a pattern that has acquired some common usage and provides some cohesion for the evolution of a discipline.  In HCI, the term, if it applies at all, refers to the waves of influence that have gained ground over time.  One paradigm need not disprove another, but can provide an alternative way of viewing the discipline. 

Harrison et al. (2007) have identified three paradigms in HCI:

  1. the man-machine coupling paradigm
  2. the mind-computer paradigm
  3. the phenomenological paradigm

Each paradigm has included:

  • a common understanding of the salient properties of interaction
  • questions that appear interesting and answerable about those properties
  • procedures that could be used to provide answers to those questions
  • a common understanding of how to interpret the results of those procedures

In this chapter, we review these three paradigms. 

Man-Machine Coupling Paradigm

The man-machine coupling paradigm was inspired by industrial engineering and ergonomics.  This paradigm focused on optimizing the fit between humans and machines.  The questions that arose centered on this coupling.  The proposed solutions were for the most part pragmatic. 

The field of Human Factors and Ergonomics matured under this paradigm.  The division between man and machine was clear, with the machine being the dominant unit.  Computers were batch processors that occupied large rooms and processed data.  Adjustments were made to facilitate interaction with these machines. 

Mind-Computer Paradigm

The mind-computer paradigm moved the computer from its dominant role and placed the user firmly at the center. 

This paradigm was inspired by cognitive psychology.  It identified the human mind and the computer as symmetric information processors and focused on optimizing the accuracy and efficiency of information transfer in both directions.  The questions that arose centered on how information was transferred, what transformations occurred, how communication could be made more efficient, and how we could accurately model what users actually do.  The mind-computer paradigm matured through time-sharing and personal computing.

Time Sharing

The paradigm surfaced with time-sharing.  During the 1960s, advances in computing power made time-sharing possible.  Several individuals could now share a single computer, each interacting with it through telephone lines and a teletypewriter. 

The teletypewriter gave way to the video display unit (VDU).  Ivan Sutherland invented Sketchpad, a program that allowed the user to draw on the screen and have the image stored in memory.  This formed the basis of the windowing systems that we have today and established VDUs as the main display technology. 

Douglas Engelbart proposed using the computer not just for data processing, but to augment human capabilities.  He created systems that are now seen as the forerunners of modern word processors.  He advocated the creation of software toolkits for building ever more complex software. 

Personal Computing

The development of personal computing started in the 1970s.  The LOGO system allowed children to draw simple graphics using the tail of a turtle.  Alan Kay at Xerox PARC proposed the Dynabook, a notebook-sized personal computer whose vision was in many ways ahead of its time.  The tablet PC is the closest thing yet built to a Dynabook.

The success of personal computing created a need for richer interaction.  Kay and his colleagues developed the WIMP (windows, icons, menus, pointers) interface at Xerox PARC, which led to the production of the Xerox Star in 1981.  Apple hired one of the programmers from the Star project, and the Mac was born.  The WIMP interface quickly came to dominate personal computing. 

Metaphors became a common way of introducing users to new technologies.  A metaphor allows users to understand something new in terms of something they already understand.  LOGO used the metaphor of a turtle dragging its tail on the ground.  Many systems were based on the metaphor of the office desktop. 

Although a metaphor is useful in introducing users to a system, each metaphor has its limits.  The word processor looks, and seems to act, like a typewriter.  However, few typists knew that they should format paragraphs rather than insert blank lines between them.  Few typists realized that a space is a character, and not just empty space as on a typewriter. 
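The point that a space is a character, not mere absence of ink, can be seen directly in how text is represented.  A minimal Python sketch:

```python
# In a document model, a space is a character like any other,
# with its own character code; it is not just "nothing", as the
# typewriter metaphor suggests.
text = "a b"
print(len(text))   # 3 characters: 'a', ' ', 'b'
print(ord(" "))    # the space has its own character code: 32
```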

The most extreme example of the metaphor is virtual reality.  Virtual reality places the user inside the metaphor, creating an alternative world.  The metaphor interprets the user's normal actions as instructions to manipulate virtual objects in the virtual world. 

Direct manipulation introduced direct feedback on every action performed by the user.  This made it appear that the user was actually manipulating the object in real time.  This is a very satisfying experience that mirrors what happens in the real world. 

What-you-see-is-what-you-get (WYSIWYG) interfaces show in real time what the end result will look like.  These interfaces are ubiquitous in word processing.  The trouble with them is that they require a significant amount of processing power.  Sometimes it is also less convenient to include diagrams this way than with other techniques. 

While direct manipulation is good for many tasks, it can be inefficient when applied to repetitive ones.  Languages are better suited to repetition: the user describes the action to be performed once, and the system interprets the description and performs the action as many times as desired.  This is one reason why command languages are still used for certain types of tasks. 
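The describe-once, apply-many idea can be shown in a few lines.  A minimal Python sketch, in which the file names and the renaming rule are hypothetical illustrations:

```python
# With direct manipulation, the user would rename each file by
# hand.  A language lets the user describe the action once and
# have the system apply it to as many items as desired.
filenames = ["notes.txt", "draft.txt", "todo.txt"]

def archive_name(name):
    # The action is described once...
    return "archive-" + name

# ...and applied to every item.
renamed = [archive_name(n) for n in filenames]
print(renamed)
```

The same trade-off explains why shell commands and scripting remain popular alongside graphical interfaces.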

Phenomenological Paradigm

The phenomenological paradigm, the present one, has been inspired by work in ubiquitous computing, social interaction, learning environments, non-task-oriented computing, and emotion.  This paradigm focuses on situated action in the world.  The questions that arise include which activities are important, how we appropriate technologies, how we support these technologies, how we support interaction without constraining it to what the computer can do, and how we support politics and values at the site of interaction. 

This third paradigm is based on the view that the way that we come to understand the world, ourselves, and interaction derives crucially from our location in a physical and social world as embodied actors.  This paradigm

  • focuses attention on collaboration and communication through physically shared objects
  • introduces risk
  • adopts multiple theories and considers them non-exclusively

This paradigm has been maturing as the Internet matures and through the proliferation of applications. 

The Internet

Vannevar Bush proposed the idea of interlinked text in 1945.  We now use interlinking to link text in help systems and to link documents across the Internet. 

A common problem with hypertext is that users tend to get lost: they are unable to navigate back to a specific point, or they forget where they are. 
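One common answer to getting lost is a navigation history, the mechanism behind a browser's Back button.  A minimal Python sketch, with hypothetical page names:

```python
# A history stack: each visited page is pushed on; going back
# pops the current page and returns to the previous one.
history = []

def visit(page):
    history.append(page)

def back():
    # Return to the previous page, if there is one.
    if len(history) > 1:
        history.pop()
    return history[-1]

visit("index.html")
visit("chapter1.html")
visit("chapter2.html")
print(back())  # returns to chapter1.html
```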

The introduction of the web in 1989 is one of the most important events in the development of this third paradigm.  The web moved the Internet out of the research labs and made it available to all.  It unified computing and communications.  It made the public at large aware of computers. 

Cooperative work is on the rise.  Computer-Supported Cooperative Work (CSCW) includes users working remotely on shared documents.  This includes:

  • email
  • chat systems
  • repositories
  • shared whiteboards
  • systems for delivery of lectures remotely

This area is also called groupware.

We used to communicate with computers using two modes: typing (our haptic sense) and reading (our visual sense).  Multi-modality adds our auditory sense, allowing more information to be conveyed at once. 


Ubiquitous or pervasive computing is moving computing out of beige and black boxes into our environment.  Some refrigerators can surf the net for menus.  Some appliances can be programmed from a PC anywhere on the Internet.  Radio-Frequency Identification (RFID) tags report their location to a central computer. 

Computers can now gather information from the environment.  For instance, lights can detect a person entering a room.  A computer can have sensors that let it find out about the real world and respond appropriately. 

Once a computer knows the state of the environment, it can make decisions as to what is appropriate for that environment.  Such computers can use artificial intelligence to take actions that are appropriate to what the user is currently doing.
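Such context-aware behaviour amounts to mapping the sensed state of the environment to an appropriate action.  A minimal Python sketch, in which the sensor values and actions are hypothetical illustrations rather than a real device API:

```python
# Map the sensed state of a room to an appropriate action,
# e.g. for the light-control example above.
def choose_action(occupied, light_level):
    # light_level is a hypothetical reading between 0.0 and 1.0.
    if not occupied:
        return "turn lights off"
    if light_level < 0.3:
        return "turn lights on"
    return "do nothing"

print(choose_action(occupied=True, light_level=0.1))   # turn lights on
print(choose_action(occupied=False, light_level=0.8))  # turn lights off
```

Real systems add uncertainty handling and learning on top of this basic sense-decide-act loop.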

There are now software applications that act as agents on your behalf.  Email filters identify and flag junk mail.  Help agents monitor your actions and make suggestions or provide help.  Some agents search the Internet to find specific items.  Other agents can be dispatched to sites on the Internet to perform actions on your behalf.