Frameworks for HCI

Applied Framework Illustration – Barnard (1991) Bridging between Basic Theories and the Artifacts of Human-Computer Interaction

 

Bridging between Basic Theories and the Artifacts of Human-Computer Interaction

Phil Barnard

In: Carroll, J.M. (Ed.). Designing Interaction: psychology at the human-computer interface.

New York: Cambridge University Press, Chapter 7, 103-127. This is not an exact copy of the paper as it appeared, but a DTP lookalike with very slight differences in pagination.

Psychological ideas on a particular set of topics go through something very much

like a product life cycle. An idea or vision is initiated, developed, and

communicated. It may then be exploited, to a greater or lesser extent, within the

research community. During the process of exploitation, the ideas are likely to

be the subject of critical evaluation, modification, or extension. With

developments in basic psychology, the success or penetration of the scientific

product can be evaluated academically by the twin criteria of citation counts and

endurance. As the process of exploitation matures, the idea or vision stimulates

little new research either because its resources are effectively exhausted or

because other ideas or visions that incorporate little from earlier conceptual

frameworks have taken over. At the end of their life cycle, most ideas are

destined to become fossilized under the pressure of successive layers of journals

opened only out of the behavioral equivalent of paleontological interest.

In applied domains, research ideas are initiated, developed, communicated,

and exploited in a similar manner within the research community. Yet, by the

very nature of the enterprise, citation counts and endurance are of largely

academic interest unless ideas or knowledge can effectively be transferred from

research to development communities and then have a very real practical impact

on the final attributes of a successful product.

Comment 1

The transfer of research to development communities here constitutes the very idea of Applied Psychology.

If we take the past 20-odd years as representing the first life cycle of research

in human-computer interaction, the field started out with few empirical facts and

virtually no applicable theory. During this period a substantial body of work

was motivated by the vision of an applied science based upon firm theoretical

foundations.

Comment 2

The primary applied science in the case of HCI is Psychology, although this does not exclude others, for example, Sociology, Ethnomethodology, etc.

As the area was developed, there can be little doubt, on the twin

academic criteria of endurance and citation, that some theoretical concepts have

been successfully exploited within the research community. GOMS, of course,

is the most notable example (Card, Moran, & Newell, 1983; Olson & Olson,

1990; Polson, 1987).

Comment 3

These examples contain lower-level descriptions of applied frameworks for HCI, some with and some without overlaps.

Yet, as Carroll (e.g., 1989a,b) and others have pointed

out, there are very few examples where substantive theory per se has had a major

and direct impact on design. On this last practical criterion, cognitive science can

more readily provide examples of impact through the application of empirical

methodologies and the data they provide and through the direct application of

psychological reasoning in the invention and demonstration of design concepts

(e.g., see Anderson & Skwarecki, 1986; Card & Henderson, 1987; Carroll,

1989a,b; Hammond & Allinson, 1988; Landauer, 1987).

Comment 4

The application of empirical methodologies and the data of Applied Psychology have also contributed to the development of HCI research and practice.

As this research life cycle in HCI matures, fundamental questions are being

asked about whether or not simple deductions based on theory have any value at

all in design (e.g. Carroll, this volume), or whether behavior in human-computer

interactions is simply too complex for basic theory to have anything other than a

minor practical impact (e.g., see Landauer, this volume). As the next cycle of

research develops, the vision of a strong theoretical input to design runs the risk

of becoming increasingly marginalized or of becoming another fossilized

laboratory curiosity. Making use of a framework for understanding different

research paradigms in HCI, this chapter will discuss how theory-based research

might usefully evolve to enhance its prospects for both adequacy and impact.

Bridging Representations

In its full multidisciplinary context, work on HCI is not a unitary enterprise.

Rather, it consists of many different sorts of design, development, and research

activities. Long (1989) provides an analytic structure through which we can

characterize these activities in terms of the nature of their underlying concepts

and how different types of concept are manipulated and interrelated. Such a

framework is potentially valuable because it facilitates specification of,

comparison between, and evaluation of the many different paradigms and

practices operating within the broader field of HCI.

[Figure 7.1. A simplified characterization of an applied science paradigm for bridging between the real world and the science base (after Long, 1989).]

With respect to the relationship between basic science and its application,

Comment 5

This relationship is the nub of the Applied Framework Illustration presented in this paper.

Long makes three points that are fundamental to the arguments to be pursued in

this and subsequent sections. First, he emphasizes that the kind of

understanding embodied in our science base is a representation of the way in

which the real world behaves. Second, any representation in the science base

can only be mapped to and from the real world by what he called “intermediary”

representations. Third, the representations and mappings needed to realize this

kind of two-way conceptual traffic are dependent upon the nature of the activities

they are required to support. So the representations called upon for the purposes

of software engineering will differ from the representations called upon for the

purposes of developing an applicable cognitive theory.

Comment 6

Applicable Cognitive Theory here is the basic science and Software Engineering the object of its application. See also Comments 2 and 4.

Long’s framework is itself a developing one (1987, 1989; Long & Dowell,

1989). Here, there is no need to pursue the details; it is sufficient to emphasize

that the full characterization of paradigms operating directly with artifact design

differs from those characterizing types of engineering support research, which,

in turn, differ from more basic research paradigms.

Comment 7

Basic (Psychology) research and artifact (interactive system) design, and their relationship, are of primary concern here.

This chapter will primarily

be concerned with what might need to be done to facilitate the applicability and

impact of basic cognitive theory.

 

Comment 8

The need to facilitate the applicability and impact of basic cognitive theory on artifact design suggests its current applicability is unacceptable.

In doing so it will be argued that a key role

needs to be played by explicit “bridging” representations. This term will be used

to avoid any possible conflict with the precise properties of Long’s particular

conceptualization.

Following Long (1989), Figure 7.1 shows a simplified characterization of an

applied science paradigm for bridging from the real world of behavior to the

science base and from these representations back to the real world.

Comment 9

Long’s framework is itself an applied framework.

The blocks are intended to characterize different sorts of representation and the arrows stand

for mappings between them (Long’s terminology is not always used here). The

real world of the use of interactive software is characterized by organisational,

group, and physical settings; by artifacts such as computers, software, and

manuals; by the real tasks of work; by characteristics of the user population; and

so on. In both applied and basic research, we construct our science not from the

real world itself but via a bridging representation whose purpose is to support

and elaborate the process of scientific discovery.

Comment 10

Lower-level descriptions of the Applied Framework are to be found in the different representations referenced here.

Obviously, the different disciplines that contribute to HCI each have their

own forms of discovery representation that reflect their paradigmatic

perspectives, the existing contents of their science base, and the target form of

their theory. In all cases the discovery representation incorporates a whole range

of explicit, and more frequently implicit, assumptions about the real world and

methodologies that might best support the mechanics of scientific abstraction. In

the case of standard paradigms of basic psychology, the initial process of

analysis leading to the formation of a discovery representation may be a simple

observation of behavior on some task. For example, it may be noted that

ordinary people have difficulty with particular forms of syllogistic reasoning. In

more applied research, the initial process of analysis may involve much more

elaborate taxonomization of tasks (e.g., Brooks, this volume) or of errors

observed in the actual use of interactive software (e.g., Hammond, Long, Clark,

Barnard, & Morton, 1980).

Conventionally, a discovery representation drastically simplifies the real

world. For the purposes of gathering data about the potential phenomena, a

limited number of contrastive concepts may need to be defined, appropriate

materials generated, tasks selected, observational or experimental designs

determined, populations and metrics selected, and so on. The real world of

preparing a range of memos, letters, and reports for colleagues to consider

before a meeting may thus be represented for the purposes of initial discovery by

an observational paradigm with a small population of novices carrying out a

limited range of tasks with a particular word processor (e.g., Mack, Lewis, &

Carroll, 1983). In an experimental paradigm, it might be represented

noninteractively by a paired associate learning task in which the mappings

between names and operations need to be learned to some criterion and

subsequently recalled (e.g., Scapin, 1981). Alternatively, it might be

represented by a simple proverb-editing task carried out on two alternative

versions of a cut-down interactive text editor with ten commands. After some

form of instructional familiarization appropriate to a population of computer-naive

members of a Cambridge volunteer subject panel, these commands may be

used an equal number of times with performance assessed by time on task,

errors, and help usage (e.g., Barnard, Hammond, MacLean, & Morton, 1982).

Each of the decisions made contributes to the operational discovery

representation.

Comment 11

Note that the operationalisation of basic Psychology theory for the purposes of design is not the same as the operational discovery representation of that theory.

The resulting characterizations of empirical phenomena are potential

regularities of behavior that become, through a process of assimilation,

incorporated into the science base where they can be operated on, or argued

about, in terms of the more abstract, interpretive constructs. The discovery

representations constrain the scope of what is assimilated to the science base and

all subsequent mappings from it.

Comment 12

Note that the scope of the science base and the scope of the applied base can be the same, although the purposes (and so the knowledge) differ.

The conventional view of applied science also implies an inverse process

involving some form of application bridge whose function is to support the

transfer of knowledge in the science base into some domain of application.

Classic ergonomics-human factors relied on the handbook of guidelines. The

relevant processes involve contextualizing phenomena and scientific principles

for some applications domain – such as computer interfaces, telecommunications

apparatus, military hardware, and so on. Once explicitly formulated, say in

terms of design principles, examples and pointers to relevant data, it is left up to

the developers to operate on the representation to synthesize that information

with any other considerations they may have in the course of taking design

decisions. The dominant vision of the first life cycle of HCI research was that

this bridging could effectively be achieved in a harder form through engineering

approximations derived from theory (Card et al., 1983). This vision essentially

conforms to the full structure of Figure 7.1.

The Chasm to Be Bridged

The difficulties of generating a science base for HCI that will support effective

bridging to artifact design are undeniably real. Many of the strategic problems

theoretical approaches must overcome have now been thoroughly aired. The life

cycle of theoretical enquiry and synthesis typically postdates the life cycle of

products with which it seeks to deal; the theories are too low level; they are of

restricted scope; as abstractions from behavior they fail to deal with the real

context of work and they fail to accommodate fine details of implementations and

interactions that may crucially influence the use of a system (see, e.g.,

discussions by Carroll & Campbell, 1986; Newell & Card, 1985; Whiteside &

Wixon, 1987). Similarly, although theory may predict significant effects and

receive empirical support, those effects may be of marginal practical consequence

in the context of a broader interaction or less important than effects not

specifically addressed (e.g., Landauer, 1987).

Our current ability to construct effective bridges across the chasm that

separates our scientific understanding and the real world of user behavior and

artifact design clearly falls well short of requirements. In its relatively short

history, the scope of HCI research on interfaces has been extended from early

concerns with the usability of hardware, through cognitive consequences of

software interfaces, to encompass organizational issues (e.g., Grudin, 1990).

Against this background, what is required is something that might carry a

volume of traffic equivalent to an eight-lane cognitive highway. What is on offer

is more akin to a unidirectional walkway constructed from a few strands of rope

and some planks.

In taking artifacts seriously, Carroll (1989a) and Carroll, Kellogg, and Rosson (this volume) mount an impressive case against the conventional view

of the deductive application of science in the invention, design, and development

of practical artifacts. They point to the inadequacies of current information-processing psychology, to the absence of real historical justification for deductive bridging in artifact development, and to the paradigm of craft skill in

which knowledge and understanding are directly embodied in artifacts.

Likewise, Landauer (this volume) foresees an equally dismal future for theory-based

design.

Whereas Landauer stresses the potential advances that may be achieved through empirical modeling and formative evaluation, Carroll and his colleagues

have sought a more substantial adjustment to conventional scientific strategy

(Carroll, 1989a,b, 1990; Carroll & Campbell, 1989; Carroll & Kellogg, 1989;

Carroll et al., this volume). On the one hand they argue that true “deductive”

bridging from theory to application is not only rare, but when it does occur, it

tends to be underdetermined, dubious, and vague. On the other hand they argue

that the form of hermeneutics offered as an alternative by, for example,

Whiteside and Wixon (1987) cannot be systematized for lasting value. From

Carroll’s viewpoint, HCI is best seen as a design science in which theory and

artifact are in some sense merged. By embodying a set of interrelated

psychological claims concerning a product like HyperCard or the Training

Wheels interface (e.g., see Carroll & Kellogg, 1989), the artifacts themselves

take on a theorylike role in which successive cycles of task analysis,

interpretation, and artifact development enable design-oriented assumptions

about usability to be tested and extended.

This viewpoint has a number of inviting features. It offers the potential of

directly addressing the problem of complexity and integration because it is

intended to enable multiple theoretical claims to be dealt with as a system

bounded by the full artifact. Within the cycle of task analysis and artifact

development, the analyses, interpretations, and theoretical claims are intimately

bound to design problems and to the world of “real” behavior. In this context,

knowledge from HCI research no longer needs to be transferred from research

into design in quite the same sense as before and the life cycle of theories should

also be synchronized with the products they need to impact. Within this

framework, the operational discovery representation is effectively the rationale

governing the design of an artifact, whereas the application representation is a

series of user-interaction scenarios (Carroll, 1990).

The kind of information flow around the task – artifact cycle nevertheless

leaves somewhat unclear the precise relationships that might hold between the

explicit theories of the science base and the kind of implicit theories embodied in

artifacts. Early on in the development of these ideas, Carroll (1989a) points out

that such implicit theories may be a provisional medium for HCI, to be put aside

when explicit theory catches up. In a stronger version of the analysis, artifacts

are in principle irreducible to a standard scientific medium such as explicit

theories. Later it is noted that “it may be simplistic to imagine deductive relations

between science and design, but it would be bizarre if there were no relation at

all” (Carroll & Kellogg, 1989). Most recently, Carroll (1990) explicitly

identifies the psychology of tasks as the relevant science base for the form of

analysis that occurs within the task-artifact cycle (e.g. see Greif, this volume;

Norman this volume). The task-artifact cycle is presumed not only to draw upon

and contextualize knowledge in that science base, but also to provide new

knowledge to assimilate to it. In this latter respect, the current view of the task

artifact cycle appears broadly to conform with Figure 7.1. In doing so it makes

use of task-oriented theoretical apparatus rather than standard cognitive theory

and novel bridging representations for the purposes of understanding extant

interfaces (design rationale) and for the purposes of engineering new ones

(interaction scenarios).

In actual practice, whether the pertinent theory and methodology is grounded

in tasks, human information-processing psychology or artificial intelligence,

those disciplines that make up the relevant science bases for HCI are all

underdeveloped. Many of the basic theoretical claims are really provisional

claims; they may retain a verbal character (to be put aside when a more explicit

theory arrives), and even if fully explicit, the claims rarely generalize far beyond

the specific empirical settings that gave rise to them. In this respect, the wider

problem of how we go about bridging to and from a relevant science base

remains a long-term issue that is hard to leave unaddressed. Equally, any

research viewpoint that seeks to maintain a productive role for the science base in

artifact design needs to be accompanied by a serious reexamination of the

bridging representations used in theory development and in their application.

Science and design are very different activities. Given Figure 7.1, theory-based design can never be direct; the full bridge must involve a transformation of

information in the science base to yield an applications representation, and

information in this structure must be synthesized into the design problem. In

much the same way that the application representation is constructed to support

design, our science base, and any mappings from it, could be better constructed

to support the development of effective application bridging. The model for

relating science to design is indirect, involving theoretical support for


engineering representations (both discovery and applications) rather than one

involving direct theoretical support in design.

The Science Base and Its Application

In spite of the difficulties, the fundamental case for the application of cognitive

theory to the design of technology remains very much what it was 20 years ago,

and indeed what it was 30 years ago (e.g., Broadbent, 1958). Knowledge

assimilated to the science base and synthesized into models or theories should

reduce our reliance on purely empirical evaluations. It offers the prospect of

supporting a deeper understanding of design issues and how to resolve them.

Comment 13

Note that the (Psychology) science base in the form of Cognitive Theory seeks both understanding of human-computer interaction design issues (presumably as explanation of known phenomena and the prediction of unknown phenomena) and the resolution of design problems.

Indeed, Carroll and Kellogg’s (1989) theory nexus has developed out of a

cognitive paradigm rather than a behaviorist one. Although theory development

lags behind the design of artifacts, it may well be that the science base has more

to gain than the artifacts. The interaction of science and design nevertheless

should be a two-way process of added value.

Comment 14

Hence the requirement for both a science and an applied framework for HCI. The former seeks to understand the phenomena, associated with humans interacting with computers, while the latter seeks to support the design of human-computer interactions.

Much basic theoretical work involves the application of only partially explicit

and incomplete apparatus to specific laboratory tasks. It is not unreasonable to

argue that our basic cognitive theory tends only to be successful for modeling a

particular application. That application is itself behavior in laboratory tasks. The

scope of the application is delimited by the empirical paradigms and the artifacts

it requires – more often than not these days, computers and software for

presentation of information and response capture. Indeed, Carroll’s task-artifact

and interpretation cycles could very well be used to provide a neat description of

the research activities involved in the iterative design and development of basic

theory. The trouble is that the paradigms of basic psychological research, and

the bridging representations used to develop and validate theory, typically

involve unusually simple and often highly repetitive behavioral requirements

atypical of those faced outside the laboratory.

Comment 15

Note that the validation of basic psychological theory does not of itself guarantee its successful resolution of design problems. See also Comment 14.

Although it is clear that there are many cases of invention and craft where the

kinds of scientific understanding established in the laboratory play little or no

role in artifact development (Carroll, 1989b), this is only one side of the story.

Comment 16

Hence the need here for separate frameworks for both Innovation (as invention) and Craft. See the relevant framework sections.

The other side is that we should only expect to find effective bridging when what

is in the science base is an adequate representation of some aspect of the real

world that is relevant to the specific artifact under development. In this context it

is worth considering a couple of examples not usually called into play in the HCI

domain.

Psychoacoustic models of human hearing are well developed. Auditory

warning systems on older generations of aircraft are notoriously loud and

unreliable. Pilots don’t believe them and turn them off. Using standard

techniques, it is possible to measure the noise characteristics of the environment

on the flight deck of a particular aircraft and to design a candidate set of warnings

based on a model of the characteristics of human hearing. This determines

whether or not pilots can be expected to “hear” and identify those warnings over

the pattern of background noise without being positively deafened and distracted

(e.g., Patterson, 1983). Of course, the attention-getting and discriminative

properties of members of the full set of warnings still have to be crafted. Once

established, the extension of the basic techniques to warning systems in hospital

intensive-care units (Patterson, Edworthy, Shailer, Lower, & Wheeler, 1986)

and trains (Patterson, Cosgrove, Milroy, & Lower, 1989) is a relatively routine

matter.
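Since the design logic here is essentially computational, a minimal sketch may help. It assumes a crude band-by-band comparison against the measured noise spectrum rather than Patterson's actual psychoacoustic procedure, and every frequency, level, and margin below is invented for illustration.

```python
# Sketch: keep only the candidate warning components that clear the measured
# background noise by a safety margin. The noise level is used as a crude
# stand-in for the masked threshold a real auditory model would compute.

# Hypothetical flight-deck noise levels (dB) per frequency band (Hz).
NOISE_DB = {250: 68, 500: 72, 1000: 70, 2000: 65, 4000: 60}

def audible_components(warning_db, margin_db=15.0):
    """Return the components (freq -> level) judged audible over the noise."""
    return {f: lvl for f, lvl in warning_db.items()
            if lvl >= NOISE_DB.get(f, 0.0) + margin_db}

candidate = {500: 90, 1000: 88, 2000: 78}
print(audible_components(candidate))  # -> {500: 90, 1000: 88}; 2000 Hz is masked
```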

Developed further and automated, the same kind of psychoacoustic model

can play a direct role in invention. As the front end to a connectionist speech

recognizer, it offers the prospect of a theoretically motivated coding structure that

could well prove to outperform existing technologies (e.g., see ACTS, 1989).

As used in invention, what is being embodied in the recognition artifact is an

integrated theory about the human auditory system rather than a simple heuristic

combination of current signal-processing technologies.

Comment 17

See Comment 15.

Another case arises out of short-term memory research. Happily, this one

does not concern limited capacity! When the research technology for short-term

memory studies evolved into a computerized form, it was observed that word

lists presented at objectively regular time intervals (onset to onset times for the

sound envelopes) actually sounded irregular. In order to be perceived as regular

the onset to onset times need to be adjusted so that the “perceptual centers” of the

words occur at equal intervals (Morton, Marcus, & Frankish, 1976). This

science base representation, and algorithms derived from it, can find direct use in

telecommunications technology or speech interfaces where there is a requirement

for the automatic generation of natural sounding number or option sequences.

Comment 18

See also Comments 15 and 17.
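The scheduling idea transfers directly into code. The sketch below assumes each word's perceptual-center ("p-center") offset from its acoustic onset is already known; the offsets used are made-up values, since deriving real ones is exactly what the Morton, Marcus, and Frankish (1976) work and algorithms derived from it provide.

```python
# Sketch: place word onsets so that the p-centers, rather than the acoustic
# onsets, fall at equal intervals.

def schedule_onsets(p_center_offsets_ms, interval_ms=500):
    """Return onset times putting successive p-centers interval_ms apart."""
    onsets = [i * interval_ms - off for i, off in enumerate(p_center_offsets_ms)]
    shift = -min(onsets)  # shift so the earliest onset is at time zero
    return [t + shift for t in onsets]

# Three spoken digits with hypothetical p-center offsets of 40, 120, and 90 ms:
print(schedule_onsets([40, 120, 90]))  # -> [0, 420, 950]; p-centers at 40, 540, 1040
```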

Of course, both of these examples are admittedly relatively “low level.” For

many higher level aspects of cognition, what is in the science base are

representations of laboratory phenomena of restricted scope and accounts of

them. What would be needed in the science base to provide conditions for

bridging are representations of phenomena much closer to those that occur in the

real world. So, for example, the theoretical representations should be topicalized

on phenomena that really matter in applied contexts (Landauer, 1987). They

should be theoretical representations dealing with extended sequences of

cognitive behavior rather than discrete acts. They should be representations of

information-rich environments rather than information-impoverished ones. They

should relate to circumstances where cognition is not a pattern of short repeating

(experimental) cycles but where any cycles that might exist have meaning in

relation to broader task goals and so on.

Comment 19

The behaviours required to undertake and to complete such tasks as desired would need to be included at lower levels of any applied framework.

It is not hard to pursue points about what the science base might incorporate

in a more ideal world. Nevertheless, it does contain a good deal of useful

knowledge (cf. Norman, 1986), and indeed the first life cycle of HCI research

has contributed to it. Many of the major problems with the appropriateness,

scope, integration, and applicability of its content have been identified. Because

major theoretical perestroika will not be achieved overnight, the more productive

questions concern the limitations on the bridging representations of that first

cycle of research and how discovery representations and applications

representations might be more effectively developed in subsequent cycles.

An Analogy with Interface Design Practice

Not surprisingly, those involved in the first life cycle of HCI research relied very

heavily in the formation of their discovery representations on the methodologies

of the parent discipline. Likewise, in bridging from theory to application, those

involved relied heavily on the standard historical products used in the verification

of basic theory, that is, prediction of patterns of time and/or errors.

Comment 20

See also Comments 15, 17 and 18.

There are relatively few examples where other attributes of behavior are modeled, such as

choice among action sequences (but see Young & MacLean, 1988). A simple

bridge, predictive of times of errors, provides information about the user of an

interactive system. The user of that information is the designer, or more usually

the design team. Frameworks are generally presented for how that information

might be used to support design choice either directly (e.g., Card et al., 1983) or

through trade-off analyses (e.g., Norman, 1983). However, these forms of

application bridge are underdeveloped to meet the real needs of designers.

Given the general dictum of human factors research, “Know the user”

(Hanson, 1971), it is remarkable how few explicitly empirical studies of design

decision making are reported in the literature. In many respects, it would not be

entirely unfair to argue that bridging representations between theory and design

have remained problematic for the same kinds of reasons that early interactive

interfaces were problematic. Like glass teletypes, basic psychological

technologies were underdeveloped and, like the early design of command

languages, the interfaces (application representations) were heuristically

constructed by applied theorists around what they could provide rather than by

analysis of requirements or extensive study of their target users or the actual

context of design (see also Bannon & Bødker, this volume; Henderson, this

volume).

Equally, in addressing questions associated with the relationship between

theory and design, the analogy can be pursued one stage further by arguing for

the iterative design of more effective bridging structures. Within the first life

cycle of HCI research a goodly number of lessons have been learned that could

be used to advantage in a second life cycle. So, to take a very simple example,

certain forms of modeling assume that users naturally choose the fastest method

for achieving their goal. However there is now some evidence that this is not

always the case (e.g., MacLean, Barnard, & Wilson, 1985). Any role for the

knowledge and theory embodied in the science base must accommodate, and

adapt to, those lessons. For many of the reasons that Carroll and others have

elaborated, simple deductive bridging is problematic. To achieve impact,

behavioral engineering research must itself directly support the design,

development, and invention of artifacts. On any reasonable time scale there is a

need for discovery and application representations that cannot be fully justified

through science-base principles or data. Nonetheless, such a requirement simply

restates the case for some form of cognitive engineering paradigm. It does not in

and of itself undermine the case for the longer-term development of applicable

theory.

Comment 21

A cognitive engineering paradigm would not have the same discipline-general problem as a cognitive scientific paradigm – the former's would be design and the latter's understanding. In addition, each would have its own different knowledge in the form of models and methods and practices (the former the diagnosis of design problems of humans interacting with computers and the prescription of their associated design solutions, the latter the explanation and prediction of phenomena, associated with humans interacting with computers).

Just as impact on design has most readily been achieved through the

application of psychological reasoning in the invention and demonstration of

artifacts, so a meaningful impact of theory might best be achieved through the

invention and demonstration of novel forms of applications representations. The

development of representations to bridge from theory to application cannot be

taken in isolation. It needs to be considered in conjunction with the contents of

the science base itself and the appropriateness of the discovery representations

that give rise to them.

Without attempting to be exhaustive, the remainder of this chapter will

exemplify how discovery representations might be modified in the second life

cycle of HCI research; and illustrate how theory might drive, and itself benefit

from, the invention and demonstration of novel forms of applications bridging.

Enhancing Discovery Representations

Although disciplines like psychology have a formidable array of methodological

techniques, those techniques are primarily oriented toward hypothesis testing.

Here, greatest effort is expended in using factorial experimental designs to

confirm or disconfirm a specific theoretical claim. Often wider characteristics of

phenomena are only charted as and when properties become a target of specific

theoretical interest. Early psycholinguistic research did not start off by studying

what might be the most important factors in the process of understanding and

using textual information. It arose out of a concern with transformational

grammars (Chomsky, 1957). In spite of much relevant research in earlier

paradigms (e.g., Bartlett, 1932), psycholinguistics itself only arrived at this

consideration after progressing through the syntax, semantics, and pragmatics of

single-sentence comprehension.

As Landauer (1987) has noted, basic psychology has not been particularly

productive at evolving exploratory research paradigms. One of the major

contributions of the first life cycle of HCI research has undoubtedly been a

greater emphasis on demonstrating how such empirical paradigms can provide

information to support design (again, see Landauer, 1987). Techniques for

analyzing complex tasks, in terms of both action decomposition and knowledge

requirements, have also progressed substantially over the past 20 years (e.g.,

Wilson, Barnard, Green, & MacLean, 1988).

Comment 22

Any applied framework must ultimately include levels of description which capture the behaviours performed in complex tasks and which are associated with the action decomposition and knowledge requirements referenced here.

A significant number of these developments are being directly assimilated

into application representations for supporting artifact development. Some can

also be assimilated into the science base, such as Lewis’s (1988) work on

abduction. Here observational evidence in the domain of HCI (Mack et al.,

1983) leads directly to theoretical abstractions concerning the nature of human

reasoning. Similarly, Carroll (1985) has used evidence from observational and

experimental studies in HCI to extend the relevant science base on naming and

reference. However, not a lot has changed concerning the way in which

discovery representations are used for the purposes of assimilating knowledge to

the science base and developing theory.

In their own assessment of progress during the first life cycle of HCI

research, Newell and Card (1985) advocate continued reliance on the hardening

of HCI as a science. This implicitly reinforces classic forms of discovery

representations based upon the tools and techniques of parent disciplines. Heavy

reliance on the time-honored methods of experimental hypothesis testing in

experimental paradigms does not appear to offer a ready solution to the two

problems dealing with theoretical scope and the speed of theoretical advance.

Likewise, given that these parent disciplines are relatively weak on exploratory

paradigms, such an approach does not appear to offer a ready solution to the

other problems of enhancing the science base for appropriate content or for

directing its efforts toward the theoretical capture of effects that really matter in

applied contexts.

The second life cycle of research in HCI might profit substantially by

spawning more effective discovery representations, not only for assimilation to

applications representations for cognitive engineering, but also to support

assimilation of knowledge to the science base and the development of theory.

Two examples will be reviewed here. The first concerns the use of evidence

embodied in HCI scenarios (Young & Barnard, 1987; Young, Barnard, Simon,

& Whittington, 1989). The second concerns the use of protocol techniques to

systematically sample what users know and to establish relationships between

verbalizable knowledge and actual interactive performance.

Test-driving Theories

Young and Barnard (1987) have proposed that more rapid theoretical advance

might be facilitated by “test driving” theories in the context of a systematically

sampled set of behavioral scenarios. The research literature frequently makes

reference to instances of problematic or otherwise interesting user-system

exchanges. Scenario material derived from that literature is selected to represent

some potentially robust phenomenon of the type that might well be pursued in

more extensive experimental research. Individual scenarios should be regarded

as representative of the kinds of things that really matter in applied settings. So

for example, one scenario deals with a phenomenon often associated with

unselected windows. In a multiwindowing environment a persistent error,

frequently committed even by experienced users, is to attempt some action in an inactive window. The action might be an attempt at a menu selection. However,

pointing and clicking over a menu item does not cause the intended result; it

simply leads to the window being activated. Very much like linguistic test

sentences, these behavioral scenarios are essentially idealized descriptions of

such instances of human-computer interactions.

If we are to develop cognitive theories of significant scope they must in

principle be able to cope with a wide range of such scenarios. Accordingly, a

manageable set of scenario material can be generated that taps behaviors that

encompass different facets of cognition. So, a set of scenarios might include

instances dealing with locating information in a directory entry, selecting

alternative methods for achieving a goal, lexical errors in command entry, the

unselected windows phenomenon, and so on (see Young, Barnard, Simon, &

Whittington, 1989). A set of contrasting theoretical approaches can likewise be

selected and the theories and scenarios organized into a matrix. The activity

involves taking each theoretical approach and attempting to formulate an account

of each behavioral scenario. The accuracy of the account is not at stake. Rather,

the purpose of the exercise is to see whether a particular piece of theoretical

apparatus is even capable of giving rise to a plausible account. The scenario

material is effectively being used as a set of sufficiency filters and it is possible to

weed out theories of overly narrow scope. If an approach is capable of

formulating a passable account, interest focuses on the properties of the account

offered. In this way, it is also possible to evaluate and capitalize on the

properties of theoretical apparatus that do provide appropriate sorts of analytic

leverage over the range of scenarios examined.

Comment 23

The notion of ‘sufficiency filter’ is an interesting one. However, ultimately it needs to be integrated with other concepts, supporting the notion of validation – conceptualisation; operationalisation; test; and generalisation with respect to understanding or design (or both). See also Comment 15.
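To make the matrix exercise concrete, here is a minimal sketch of the bookkeeping it implies. The theory labels, scenario labels, and pass/fail judgments below are all invented for illustration; the real exercise consists of constructing and judging actual theoretical accounts (Young, Barnard, Simon, & Whittington, 1989).

```python
# Toy theories-by-scenarios matrix: each cell records whether an approach can
# offer even a plausible account of a scenario (sufficiency, not accuracy).
# All entries are illustrative placeholders, not Barnard's assessments.

accounts = {
    "goms-style":    {"unselected-window": False, "lexical-error": True,  "method-choice": True},
    "grammar-based": {"unselected-window": False, "lexical-error": True,  "method-choice": False},
    "ics-style":     {"unselected-window": True,  "lexical-error": True,  "method-choice": True},
}

def sufficient_theories(matrix):
    """Use the scenario set as a sufficiency filter: keep only the approaches
    that can formulate some account of every scenario, weeding out those of
    overly narrow scope."""
    return [theory for theory, cells in matrix.items() if all(cells.values())]

print(sufficient_theories(accounts))  # -> ['ics-style']
```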

Traditionally, theory development places primary emphasis on predictive

accuracy and only secondary emphasis on scope.

Comment 24

 

Prediction and scope, at the end of the day, cannot be separated. Prediction cannot be tested in the absence of a stated scope.

 

This particular form of discovery representation goes some way toward redressing that

balance. It offers the prospect of getting appropriate and relevant theoretical apparatus in

place on a relatively short time cycle. As an exploratory methodology, it at least

addresses some of the more profound difficulties of interrelating theory and

application. The scenario material makes use of known instances of human-computer

interaction. Because these scenarios are by definition instances of

interactions, any theoretical accounts built around them must of necessity be

appropriate to the domain.

Comment 25

To be known as appropriate to the domain, the latter needs to be explicitly included in the theory.

Because scenarios are intended to capture significant

aspects of user behavior, such as persistent errors, they are oriented toward what

matters in the applied context.

Comment 26

What matters in an applied context is, more generally, how well a task is performed. That may, or may not, be reflected by errors, persistent or not.

As a quick and dirty methodology, it can make

effective use of the accumulated knowledge acquired in the first life cycle of HCI

research, while avoiding some of the worst “tar pits” (Norman, 1983) of

traditional experimental methods.

Comment 27

‘Quick and dirty methodology’ and ‘tar pit’ avoidance need to be integrated with notions of validation, such as – conceptualisation; operationalisation; test; and generalisation. See also Comments 15 and 23.

As a form of discovery bridge between application and theory, the real world

is represented, for some purpose, not by a local observation or example, but by a

sampled set of material. If the purpose is to develop a form of cognitive

architecture, then it may be most productive to select a set of scenarios that

encompass different components of the cognitive system (perception, memory,

decision making, control of action). Once an applications representation has

been formed, its properties might be further explored and tested by analyzing

scenario material sampled over a range of different tasks, or applications

domains (see Young & Barnard, 1987). At the point where an applications

representation is developed, the support it offers may also be explored by

systematically sampling a range of design scenarios and examining what

information can be offered concerning alternative interface options (AMODEUS,

1989). By contrast with more usual discovery representations, the scenario

methodology is not primarily directed at classic forms of hypothesis testing and

validation. Rather, its purpose is to support the generation of more readily

applicable theoretical ideas.

Comment 28

Barnard’s point is well taken here. However, the ‘more applicable’ theoretical ideas have still to be validated with respect to design. See also Comments 15, 17 and 27.

Verbal Protocols and Performance

One of the most productive exploratory methodologies utilized in HCI research

has involved monitoring user action while collecting concurrent verbal protocols

to help understand what is actually going on. Taken together these have often

given rise to the best kinds of problem-defining evidence, including the kind of

scenario material already outlined. Many of the problems with this form of

evidence are well known. Concurrent verbalization may distort performance and

significant changes in performance may not necessarily be accompanied by

changes in articulatable knowledge. Because it is labor intensive, the

observations are often confined to a very small number of subjects and tasks. In

consequence, the representativeness of isolated observations is hard to assess.

Furthermore, getting real scientific value from protocol analysis is crucially

dependent on the insights and craft skill of the researcher concerned (Barnard,

Wilson, & MacLean, 1986; Ericsson & Simon, 1980).

Techniques of verbal protocol analysis can nevertheless be modified and

utilized as a part of a more elaborate discovery representation to explore and

establish systematic relationships between articulatable knowledge and

performance. The basic assumption underlying much theory is that a

characterization of the ideal knowledge a user should possess to successfully

perform a task can be used to derive predictions about performance. However,

protocol studies clearly suggest that users really get into difficulty when they

have erroneous or otherwise nonideal knowledge. In terms of the precise

relationships they have with performance, ideal and nonideal knowledge are

seldom considered together.

In an early attempt to establish systematic and potentially generalizable

relationships between the contents of verbal protocols and interactive

performance, Barnard et al. (1986) employed a sample of picture probes to elicit

users’ knowledge of tasks, states, and procedures for a particular office product

at two stages of learning. The protocols were codified, quantified, and

compared. In the verbal protocols, the number of true claims about the system

increased with system experience, but surprisingly, the number of false claims

remained stable. Individual users who articulated a lot of correct claims

generally performed well, but the amount of inaccurate knowledge did not appear

related to their overall level of performance. There was, however, some

indication that the amount of inaccurate knowledge expressed in the protocols

was related to the frequency of errors made in particular system contexts.

A subsequent study (Barnard, Ellis, & MacLean, 1989) used a variant of the

technique to examine knowledge of two different interfaces to the same

application functionality. High levels of inaccurate knowledge expressed in the

protocols were directly associated with the dialogue components on which

problematic performance was observed. As with the earlier study, the amount of

accurate knowledge expressed in any given verbal protocol was associated with

good performance, whereas the amount of inaccurate knowledge expressed bore

little relationship to an individual’s overall level of performance. Both studies

reinforced the speculation that it is specific interface characteristics that give rise

to the development of inaccurate or incomplete knowledge from which false

inferences and poor performance may follow.

Just as the systematic sampling and use of behavioral scenarios may facilitate

the development of theories of broader scope, so discovery representations

designed to systematically sample the actual knowledge possessed by users

should facilitate the incorporation into the science base of behavioral regularities

and theoretical claims that are more likely to reflect the actual basis of user

performance rather than a simple idealization of it.

Enhancing Application Representations

The application representations of the first life cycle of HCI research relied very

much on the standard theoretical products of their parent disciplines.

Grammatical techniques originating in linguistics were utilized to characterize the

complexity of interactive dialogues; artificial intelligence (AI)-oriented models

were used to represent and simulate the knowledge requirements of learning;

and, of course, derivatives of human information-processing models were used

to calculate how long it would take users to do things. Although these

approaches all relied upon some form of task analysis, their apparatus was

directed toward some specific function. They were all of limited scope and made

numerous trade-offs between what was modeled and the form of prediction made

(Simon, 1988).

Some of the models were primarily directed at capturing knowledge

requirements for dialogues for the purposes of representing complexity, such as

BNF grammars (Reisner, 1982) and Task Action Grammars (Payne & Green,

1986). Others focused on interrelationships between task specifications and

knowledge requirements, such as GOMS analyses and cognitive-complexity

theory (Card et al. 1983; Kieras & Polson, 1985). Yet other apparatus, such as

the model human information processor and the keystroke level model of Card et al.

(1983), were primarily aimed at time prediction for the execution of error-free

routine cognitive skill. Most of these modeling efforts idealized either the

knowledge that users needed to possess or their actual behavior. Few models

incorporated apparatus for integrating over the requirements of knowledge

acquisition or use and human information-processing constraints (e.g., see

Barnard, 1987). As application representations, the models of the first life cycle

had little to say about errors or the actual dynamics of user-system interaction as

influenced by task constraints and information or knowledge about the domain of

application itself.
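For readers unfamiliar with the keystroke-level model mentioned above, its predictive logic fits in a few lines. A minimal sketch follows, using the commonly cited Card, Moran, and Newell (1983) operator times; the two methods being compared are hypothetical examples, not drawn from the chapter.

```python
# Keystroke-level-model arithmetic: predicted time is the sum of operator
# times. K = keystroke (average typist), P = point with mouse, H = home hands
# on a device, M = mental preparation. Values are the standard textbook ones.

KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def predict_time(operator_sequence):
    """Predicted execution time (s) for error-free, routine performance."""
    return sum(KLM_SECONDS[op] for op in operator_sequence)

# Hypothetical comparison: deleting a word via a pull-down menu versus a
# two-keystroke shortcut, each preceded by mental preparation.
print(round(predict_time("MHPK"), 2))  # menu route -> 3.13
print(round(predict_time("MKK"), 2))   # shortcut   -> 1.91
```

As the surrounding text notes, such models predict times for idealized routine skill; they say nothing about errors or the dynamics of the interaction.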

Two modeling approaches will be used to illustrate how applications

representations might usefully be enhanced. They are programmable user

models (Young, Green, & Simon, 1989) and modeling based on Interacting

Cognitive Subsystems (Barnard, 1985). Although these approaches have

different origins, both share a number of characteristics. They are both aimed at

modeling more qualitative aspects of cognition in user-system interaction; both

are aimed at understanding how task, knowledge, and processing constraint

intersect to determine performance; both are aimed at exploring novel means of

incorporating explicit theoretical claims into application representations; and both

require the implementation of interactive systems for supporting decision making

in a design context. Although they do so in different ways, both approaches

attempt to preserve a coherent role for explicit cognitive theory. Cognitive theory

is embodied, not in the artifacts that emerge from the development process, but in demonstrator artifacts that might support design. This is almost directly

analogous to achieving an impact in the marketplace through the application of

psychological reasoning in the invention of artifacts. Except in this case, the

target user populations for the envisaged artifacts are those involved in the design

and development of products.

Programmable User Models (PUMs)

The core ideas underlying the notion of a programmable user model have their

origins in the concepts and techniques of AI. Within AI, cognitive architectures

are essentially sets of constraints on the representation and processing of

knowledge. In order to achieve a working simulation, knowledge appropriate to

the domain and task must be represented within those constraints. In the normal

simulation methodology, the complete system is provided with some data and,

depending on its adequacy, it behaves with more or less humanlike properties.

Using a simulation methodology to provide the designer with an artificial

user would be one conceivable tactic. Extending the forms of prediction offered

by such simulations (cf. cognitive complexity theory; Polson, 1987) to

encompass qualitative aspects of cognition is more problematic. Simply

simulating behavior is of relatively little value. Given the requirements of

knowledge-based programming, it could, in many circumstances, be much more

straightforward to provide a proper sample of real users. There needs to be

some mechanism whereby the properties of the simulation provide information

of value in design. Programmable user models provide a novel perspective on

this latter problem. The idea is that the designer is provided with two things, an

“empty” cognitive architecture and an instruction language for providing it with all

the knowledge it needs to carry out some task. By programming it, the designer

has to get the architecture to perform that task under conditions that match those

of the interactive system design (i.e., a device model). So, for example, given a

particular dialog design, the designer might have to program the architecture to

select an object displayed in a particular way on a VDU and drag it across that

display to a target position.
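The programming exercise itself can be caricatured in a few lines. In the sketch below, task knowledge is written as condition-action rules run by a fixed interpreter standing in for the human-like architecture; the rule format, the drag scenario, and the interpreter are all inventions for illustration (as noted shortly, the actual project builds on SOAR, not on anything this simple).

```python
# Toy "programmable user model": the designer supplies only the rules; the
# interpreter (standing in for the architecture) is fixed. Difficulty in
# writing rules that reach the goal is itself the informative datum.

RULES = [
    ("select-object",
     lambda s: s["goal"] == "drag" and not s["selected"],
     lambda s: {**s, "selected": True}),
    ("move-to-target",
     lambda s: s["selected"] and s["pos"] != s["target"],
     lambda s: {**s, "pos": s["target"]}),
]

def run(state, rules, max_cycles=10):
    """Fire one applicable rule per cycle until none applies."""
    for _ in range(max_cycles):
        applicable = [act for name, cond, act in rules if cond(state)]
        if not applicable:
            break
        state = applicable[0](state)
    return state

print(run({"goal": "drag", "selected": False, "pos": "start", "target": "bin"}, RULES))
# -> {'goal': 'drag', 'selected': True, 'pos': 'bin', 'target': 'bin'}
```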

The key, of course, is that the constraints that make up the architecture being

programmed are humanlike. Thus, if the designer finds it hard to get the

architecture to perform the task, then the implication is that a human user would

also find the task hard to accomplish. To concretize this, the designer may find

that the easiest form of knowledge-based program tends to select and drag the

wrong object under particular conditions. Furthermore, it takes a lot of thought

and effort to figure out how to get round this problem within the specific

architectural constraints of the model. Now suppose the designer were to adjust

the envisaged user-system dialog in the device model and then found that

reprogramming the architecture to carry out the same task under these new

conditions was straightforward and the problem of selecting the wrong object no

longer arose. Young and his colleagues would then argue that this constitutes

direct evidence that the second version of the dialog design tried by the designer

is likely to prove more usable than the first.

The actual project to realize a working PUM remains at an early stage of

development. The cognitive architecture being used is SOAR (Laird, Newell, &

Rosenbloom, 1987). There are many detailed issues to be addressed concerning

the design of an appropriate instruction language. Likewise, real issues are

raised about how a model that has its roots in architectures for problem solving

(Newell & Simon, 1972) deals with the more peripheral aspects of human

information processing, such as sensation, perception, and motor control.

Nevertheless as an architecture, it has scope in the sense that a broad range of

tasks and applications can be modeled within it. Indeed, part of the motivation

of SOAR is to provide a unified general theory of cognition (Newell, 1989).

In spite of its immaturity, additional properties of the PUM concept as an

application bridging structure are relatively clear (see Young et al., 1989). First,

programmable user models embody explicit cognitive theory in the form of the

to-be-programmed architecture. Second, there is an interesting allocation of

function between the model and the designer. Although the modeling process

requires extensive operationalization of knowledge in symbolic form, the PUM

provides only the constraints and the instruction language, whereas the designer

provides the knowledge of the application and its associated tasks. Third,

knowledge in the science base is transmitted implicitly into the design domain via

an inherently exploratory activity. Designers are not told about the underlying

cognitive science; they are supposed to discover it. By doing what they know

how to do well – that is, programming – the relevant aspects of cognitive

constraints and their interactions with the application should emerge directly in

the design context.

Fourth, programmable user models support a form of qualitative predictive

evaluation that can be carried out relatively early in the design cycle. What that

evaluation provides is not a classic predictive product of laboratory theory, rather

it should be an understanding of why it is better to have the artifact constructed

one way rather than another. Finally, although the technique capitalizes on the

designer’s programming skills, it clearly requires a high degree of commitment

and expense. The instruction language has to be learned and doing the

programming would require the development team to devote considerable

resources to this form of predictive evaluation.

Approximate Models of Cognitive Activity

Interacting Cognitive Subsystems (Barnard, 1985) also specifies a form of

cognitive architecture. Rather than being an AI constraint-based architecture,

ICS has its roots in classic human information-processing theory. It specifies

the processing and memory resources underlying cognition, the organization of

these resources, and principles governing their operation. Structurally, the

complete human information-processing system is viewed as a distributed

architecture with functionally distinct subsystems each specializing in, and

supporting, different types of sensory, representational, and effector processing

activity. Unlike many earlier generations of human information-processing

models, there are no general purpose resources such as a central executive or

limited capacity working memory. Rather the model attempts to define and

characterize processes in terms of the mental representations they take as input

and the representations they output. By focusing on the mappings between

different mental representations, this model seeks to integrate a characterization

of knowledge-based processing activity with classic structural constraints on the

flow of information within the wider cognitive system.

A graphic representation of this architecture is shown in the right-hand panel

of Figure 7.2, which instantiates Figure 7.1 for the use of the ICS framework in

an HCI context. The architecture itself is part of the science base. Its initial

development was supported by using empirical evidence from laboratory studies

of short-term memory phenomena (Barnard, 1985). However, by concentrating

on the different types of mental representation and process that transform them,

rather than task and paradigm specific concepts, the model can be applied across

a broad range of settings (e.g., see Barnard & Teasdale, 1991). Furthermore,

for the purposes of constructing a representation to bridge between theory and

application it is possible to develop explicit, yet approximate, characterizations of

cognitive activity.

In broad terms, the way in which the overall architecture will behave is

dependent upon four classes of factor. First, for any given task it will depend on

the precise configuration of cognitive activity. Different subsets of processes

and memory records will be required by different tasks. Second, behavior will

be constrained by the specific procedural knowledge embodied in each mental

process that actually transforms one type of mental representation to another.

Third, behavior will be constrained by the form, content, and accessibility of any

memory records that are needed in that phase of activity. Fourth, it will depend on

the overall way in which the complete configuration is coordinated and

controlled.
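
As a purely illustrative aside, these four classes of factor can be written down as the fields of a simple record, one such record per phase of cognitive activity. The field names and values below are inventions of this sketch, not notation drawn from the ICS literature.

```python
# Sketch only: recording the four classes of factor for one phase of
# cognitive activity. Names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class PhaseDescription:
    configuration: list         # subset of processes and memory records in use
    procedural_knowledge: dict  # process -> degree of proceduralization
    record_contents: dict       # memory record -> form, content, accessibility
    dynamic_control: str        # how the configuration is coordinated

establish_goal = PhaseDescription(
    configuration=["propositional process", "implicational process"],
    procedural_knowledge={"propositional process": "incompletely proceduralized"},
    record_contents={"task record": "high order uncertainty"},
    dynamic_control="complex",
)
print(establish_goal)
```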

Because the resources are relatively well defined and constrained in terms of

their attributes and properties, interdependencies between them can be motivated

on the basis of known patterns of experimental evidence and rendered explicit.

So, for example, a complexity attribute of the coordination and control of

cognitive activity can be directly related to the number of incompletely

proceduralized processes within a specified configuration. Likewise, a strategic

attribute of the coordination and control of cognitive activity may be dependent

upon the overall amount of order uncertainty associated with the mental

representation of a task stored in a memory record. For present purposes the

precise details of these interdependencies do not matter, nor does the particularly

opaque terminology shown in the rightmost panel of Figure 7.2 (for more

details, see Barnard, 1987). The important point is that theoretical claims can be

specified within this framework at a high level of abstraction and that these

abstractions belong in the science base.

Although these theoretical abstractions could easily have come from classic

studies of human memory and performance, they were in fact motivated by

experimental studies of command naming in text editing (Grudin & Barnard,

1984) and performance on an electronic mailing task (Barnard, MacLean, &

Hammond, 1984). The full theoretical analyses are described in Barnard (1987)

and extended in Barnard, Grudin, and MacLean (1989). In both cases the tasks

were interactive, involved extended sequences of cognitive behavior, involved

information-rich environments, and the repeating patterns of data collection were

meaningful in relation to broader task goals, characteristics not atypical of interactive tasks in the

real world. In relation to the arguments presented earlier in this chapter, the

information being assimilated to the science base should be more appropriate and

relevant to HCI than that derived from more abstract laboratory paradigms. It

will nonetheless be subject to interpretive restrictions inherent in the particular

form of discovery representation utilized in the design of these particular

experiments.

Armed with such theoretical abstractions, and accepting their potential

limitations, it is possible to generate a theoretically motivated bridge to

application. The idea is to build approximate models that describe the nature of

cognitive activity underlying the performance of complex tasks. The process is

actually carried out by an expert system that embodies the theoretical knowledge

required to build such models. The system “knows” what kinds of

configurations are associated with particular phases of cognitive activity; it

“knows” something about the conditions under which knowledge becomes

proceduralized, and the properties of memory records that might support recall

and inference in complex task environments. It also “knows” something about

the theoretical interdependencies between these factors in determining the overall

patterning, complexity, and qualities of the coordination and dynamic control of

cognitive activity. Abstract descriptions of cognitive activity are constructed in

terms of a four-component model specifying attributes of configurations,

procedural knowledge, record contents, and dynamic control. Finally, in order

to produce an output, the system “knows” something about the relationships

between these abstract models of cognitive activity and the attributes of user

behavior.

Figure 7.2. The applied science paradigm instantiated for the use of interacting cognitive subsystems as a theoretical basis for the development of an expert system design aid.

Obviously, no single model of this type can capture everything that goes on

in a complex task sequence. Nor can a single model capture different stages of

user development or other individual differences within the user population. It is

therefore necessary to build a set of interrelated models representing different

phases of cognitive activity, different levels and forms of user expertise, and so on.

The basic modeling unit uses the four-component description to characterize

cognitive activity for a particular phase, such as establishing a goal, determining

the action sequence, and executing it. Each of these models approximates over

the very short-term dynamics of cognition. Transitions between phases

approximate over the short-term dynamics of tasks, whereas transitions between

levels of expertise approximate over different stages of learning. In Figure 7.2,

the envisaged application representation thus consists of a family of interrelated

models depicted graphically as a stack of cards.

Like the concept of programmable user models, the concept of approximate

descriptive modeling is in the course of development. A running demonstrator

system exists that effectively replicates the reasoning underlying the explanation

of a limited range of empirical phenomena in HCI research (see Barnard,

Wilson, & MacLean, 1987, 1988). What actually happens is that the expert

system elicits, in a context-sensitive manner, descriptions of the envisaged

interface, its users, and the tasks that interface is intended to support. It then

effectively “reasons about” cognitive activity, its properties, and attributes in that

applications setting for one or more phases of activity and one or more stages of

learning. Once the models have stabilized, it then outputs a characterization of

the probable properties of user behavior. In order to achieve this, the expert

system has to have three classes of rules: those that map from descriptions of

tasks, users, and systems to entities and properties in the model representation;

rules that operate on those properties; and rules that map from the model

representation to characterizations of behavior. Even in its somewhat primitive

current state, the demonstrator system has interesting generalizing properties.

For example, theoretical principles derived from research on rather antiquated

command languages support limited generalization to direct manipulation and

iconic interfaces.
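
The shape of these three classes of rules is easy to illustrate in miniature. The rule contents below are trivial inventions intended only to show the mapping-in, reasoning, and mapping-out stages; they are not the rules of the actual demonstrator system.

```python
# Sketch only: the three classes of rules, with invented contents.

def map_in(description):
    """Class 1: map task/user/system descriptions to model properties."""
    model = {}
    if description.get("command_names") == "unfamiliar":
        model["proceduralization"] = "incomplete"
    return model

def reason(model):
    """Class 2: operate on the model's properties."""
    if model.get("proceduralization") == "incomplete":
        model["control_complexity"] = "high"
    return model

def map_out(model):
    """Class 3: map model properties to characterizations of behavior."""
    if model.get("control_complexity") == "high":
        return "pauses and errors likely while action sequences are determined"
    return "relatively fluent performance likely"

print(map_out(reason(map_in({"command_names": "unfamiliar"}))))
```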

As an applications representation, the expert system concept is very different

from programmable user models. Like PUMs, the actual tool embodies explicit

theory drawn from the science base. Likewise, the underlying architectural

concept enables a relatively broad range of issues to be addressed. Unlike

PUMs, it more directly addresses a fuller range of resources across perceptual,

cognitive, and effector concerns. It also applies a different trade-off in when and

by whom the modeling knowledge is specified. At the point of creation, the

expert system must contain a complete set of rules for mapping between the

world and the model. In this respect, the means of accomplishing and

expressing the characterizations of cognition and behavior must be fully and

comprehensively encoded. This does not mean that the expert system must

necessarily “know” each and every detail. Rather, within some defined scope,

the complete chain of assumptions from artifact to theory and from theory to

behavior must be made explicit at an appropriate level of approximation.

Comment 29

This cycle is the basis and justification for the inclusion of Barnard’s paper as an applied framework.

Equally, the input and output rules must obviously be grounded in the language

of interface description and user-system interaction. Although some of the

assumptions may be heuristic, and many of them may need crafting, both

theoretical and craft components are there. The how-to-do-it modeling

knowledge is laid out for inspection.

Comment 30

See Comments 19 and 22, concerning the framework requirement for low-level descriptions of humans interacting with computers, as involved in their design.

However, at the point of use, the expert system requires considerably less

precision than PUMs in the specification and operationalization of the knowledge

required to use the application being considered. The expert system can build a

family of models very quickly and without its user necessarily acquiring any

great level of expertise in the underlying cognitive theory. In this way, it is

possible for that user to explore models for alternative system designs over the

course of something like one afternoon. Because the system is modular, and the

models are specified in abstract terms, it is possible in principle to tailor the

system's input and output rules without modifying the core theoretical reasoning.

The development of the tool could then respond to requirements that might

emerge from empirical studies of the real needs of design teams or of particular

application domains.

In a more fully developed form, it might be possible to address the issue of

which type of tool might prove more effective in what types of applications

context. However, strictly speaking, they are not direct competitors; they are

alternative types of application representation that make different forms of trade-off

about the characteristics of the complete chain of bridging from theory to

application. By contrast with the kinds of theory-based techniques relied on in

the first life cycle of HCI research, both PUMs and the expert-system concept

represent more elaborate bridging structures. Although underdeveloped, both

approaches are intended ultimately to deliver richer and more integrated

information about properties of human cognition into the design environment in

forms in which it can be digested and used. Both PUMs and the expert system

represent ways in which theoretical support might be usefully embodied in future

generations of tools for supporting design. In both cases the aim is to deliver

within the lifetime of the next cycle of research a qualitative understanding of

what might be going on in a user’s head rather than a purely quantitative estimate

of how long the average head is going to be busy (see also Lewis, this volume).

Summary

The general theme that has been pursued in this chapter is that the relationship

between the real world and theoretical representations of it is always mediated by

bridging representations that subserve specific purposes. In the first life cycle of

research on HCI, the bridging representations were not only simple, they were

only a single step away from those used in the parent disciplines for the

development of basic theory and its validation. If cognitive theory is to find any

kind of coherent and effective role in forthcoming life cycles of HCI research, it

must seriously reexamine the nature and function of these bridging

representations as well as the content of the science base itself.

This chapter has considered bridging between specifically cognitive theory

and behavior in human-computer interaction. This form of bridging is but one

among many that need to be pursued. For example, there is a need to develop

bridging representations that will enable us to interrelate models of user cognition

with the formal models being developed to support design by software

engineers (e.g., Dix, Harrison, Runciman, & Thimbleby, 1987; Harrison,

Roast, & Wright, 1989; Thimbleby, 1985). Similarly there is a need to bridge

between cognitive models and aspects of the application and the situation of use

(e.g., Suchman, 1987). Truly interdisciplinary research formed a large part of

the promise, but little of the reality of early HCI research. Like the issue of

tackling nonideal user behavior, interdisciplinary bridging is now very much on

the agenda for the next phase of research (e.g., see Barnard & Harrison, 1989).

The ultimate impact of basic theory on design can only be indirect – through

an explicit application representation. Alternative forms of such representation

that go well beyond what has been achieved to date have to be invented,

developed, and evaluated. The views of Carroll and his colleagues form one

concrete proposal for enhancing our application representations. The design

rationale concept being developed by MacLean, Young, and Moran (1989)

constitutes another potential vehicle for expressing application representations.

Yet other proposals seek to capture qualitative aspects of human cognition while

retaining a strong theoretical character (Barnard et al., 1987; 1988; Young,

Green, & Simon, 1989).

On the view advocated here, the direct theory-based product of an applied

science paradigm operating in HCI is not an interface design. It is an application

representation capable of providing principled support for reasoning about

designs. There may indeed be very few examples of theoretically inspired

software products in the current commercial marketplace. However, the first life

cycle of HCI research has produced a far more mature view of what is entailed in

the development of bridging representations that might effectively support design

reasoning. In subsequent cycles, we may well be able to look forward to a

significant shift in the balance of added value within the interaction between

applied science and design. Although future progress will in all probability

remain less than rapid, theoretically grounded concepts may yet deliver rather

more in the way of principled support for design than has been achieved to date.

Acknowledgments

The participants at the Kittle Inn workshop contributed greatly to my

understanding of the issues raised here. I am particularly indebted to Jack

Carroll, Wendy Kellogg, and John Long, who commented extensively on an

earlier draft. Much of the thinking also benefited substantially from my

involvement with the multidisciplinary AMODEUS project, ESPRIT Basic

Research Action 3066.

References

ACTS (1989). Connectionist techniques for speech (ESPRIT Basic Research

Action 3207), Technical Annex. Brussels: CEC.


AMODEUS (1989). Assimilating models of designers, users and systems (ESPRIT

Basic Research Action 3066), Technical Annex. Brussels: CEC.

Anderson, J. R., & Skwarecki, E. (1986). The automated tutoring of

introductory computer programming. Communications of the ACM, 29,

842-849.

Barnard, P. J. (1985). Interacting cognitive subsystems: A psycholinguistic

approach to short term memory. In A. Ellis (Ed.), Progress in the

psychology of language (Vol. 2, chapter 6, pp. 197-258). London:

Lawrence Erlbaum Associates.

Barnard, P. J. (1987). Cognitive resources and the learning of human-computer

dialogs. In J.M. Carroll (Ed.), Interfacing thought: Cognitive aspects of

human-computer interaction (pp. 112-158). Cambridge MA: MIT Press.

Barnard, P. J., & Harrison, M. D. (1989). Integrating cognitive and system

models in human-computer interaction. In A. Sutcliffe & L. Macaulay

(Eds.), People and computers V (pp. 87-103). Cambridge: Cambridge

University Press.

Barnard, P. J., Ellis, J., & MacLean, A. (1989). Relating ideal and non-ideal

verbalised knowledge to performance. In A. Sutcliffe & L. Macaulay

(Eds.), People and computers V (pp. 461-473). Cambridge: Cambridge

University Press.

Barnard, P. J., Grudin, J., & MacLean, A. (1989). Developing a science base

for the naming of computer commands. In J. B. Long & A. Whitefield

(Eds.), Cognitive ergonomics and human-computer interaction (pp. 95-

133). Cambridge: Cambridge University Press.

Barnard, P. J., Hammond, N., MacLean, A., & Morton, J. (1982). Learning

and remembering interactive commands in a text-editing task. Behaviour

and Information Technology, 1, 347-358.

Barnard, P. J., MacLean, A., & Hammond, N. V. (1984). User representations

of ordered sequences of command operations. In B. Shackel (Ed.),

Proceedings of Interact ’84: First IFIP Conference on Human-Computer

Interaction (Vol. 1, pp. 434-438). London: IEE.

Barnard, P. J., & Teasdale, J. (1991). Interacting cognitive subsystems: A

systematic approach to cognitive-affective interaction and change.

Cognition and Emotion, 5, 1-39.

Barnard, P. J., Wilson, M., & MacLean, A. (1986). The elicitation of system

knowledge by picture probes. In M. Mantei & P. Orbeton (Eds.),

Proceedings of CHI ’86: Human Factors in Computing Systems (pp.

235-240). New York: ACM.

Barnard, P. J., Wilson, M., & MacLean, A. (1987). Approximate modelling of

cognitive activity: Towards an expert system design aid. In J. M. Carroll

& P. P. Tanner (Eds.), Proceedings of CHI + GI ’87: Human Factors in

Computing Systems and Graphics Interface (pp. 21-26). New York:

ACM.

Barnard, P. J., Wilson, M., & MacLean, A. (1988). Approximate modelling of

cognitive activity with an expert system: A theory-based strategy for

developing an interactive design tool. The Computer Journal, 31, 445-

456.

Bartlett, F. C. (1932). Remembering: A study in experimental and social

psychology. Cambridge: Cambridge University Press.

Broadbent, D. E. (1958). Perception and communication. London: Pergamon

Press.

Card, S. K., & Henderson, D. A. (1987). A multiple virtual-workspace

interface to support user task-switching. In J. M. Carroll & P. P. Tanner

(Eds.), Proceedings of CHI + GI ’87: Human Factors in Computing

Systems and Graphics Interface (pp. 53-59). New York: ACM.

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-

computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Carroll, J. M. (1985). What’s in a name? New York: Freeman.

Carroll, J. M. (1989a). Taking artifacts seriously. In S. Maass & H. Oberquelle

(Eds.), Software-Ergonomie ’89 (pp. 36-50). Stuttgart: Teubner.

Carroll, J. M. (1989b). Evaluation, description and invention: Paradigms for

human-computer interaction. In M. C. Yovits (Ed.), Advances in

computers (Vol. 29, pp. 44-77). London: Academic Press.

Carroll, J. M. (1990). Infinite detail and emulation in an ontologically

minimized HCI. In J. Chew & J. Whiteside (Eds.), Proceedings of CHI

’90: Human Factors in Computing Systems (pp. 321-327). New York:

ACM.

Carroll, J. M., & Campbell, R. L. (1986). Softening up hard science: Reply to

Newell and Card. Human-Computer Interaction, 2, 227-249.

Carroll, J. M., & Campbell, R. L. (1989). Artifacts as psychological theories:

The case of human-computer interaction. Behaviour and Information

Technology, 8, 247-256.

Carroll, J. M., & Kellogg, W. A. (1989). Artifact as theory-nexus:

Hermeneutics meets theory-based design. In K. Bice & C. H. Lewis

(Eds.), Proceedings of CHI ’89: Human Factors in Computing Systems

(pp. 7-14). New York: ACM.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.

Dix, A. J., Harrison, M. D., Runciman, C., & Thimbleby, H. W. (1987).

Interaction models and the principled design of interactive systems. In

Nicholls & D. S. Simpson (Eds.), European software engineering
conference, (pp. 127-135). Berlin: Springer Lecture Notes.

Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data.

Psychological Review, 87, 215-251.

Grudin, J. T. (1990). The computer reaches out: The historical continuity of

interface design. In J. Chew & J. Whiteside (Eds.), Proceedings of CHI

’90: Human Factors in Computing Systems (pp. 261-268). New York:

ACM.

Grudin, J. T., & Barnard, P. J. (1984). The cognitive demands of learning

command names for text editing. Human Factors, 26, 407-422.

Hammond, N., & Allinson, L. (1988). Travels around a learning support

environment: rambling, orienteering or touring? In E. Soloway, D.


Frye, & S. B. Sheppard (Eds.), Proceedings of CHI ’88: Human

Factors in Computing Systems (pp. 269-273). New York: ACM.

Hammond, N. V., Long, J., Clark, I. A., Barnard, P. J., & Morton, J. (1980).

Documenting human-computer mismatch in interactive systems. In

Proceedings of the Ninth International Symposium on Human Factors in

Telecommunications (pp. 17-24). Red Bank, NJ.

Hanson, W. (1971). User engineering principles for interactive systems.

AFIPS Conference Proceedings, 39, 523-532.

Harrison, M. D., Roast, C. R., & Wright, P. C. (1989). Complementary

methods for the iterative design of interactive systems. In G. Salvendy

& M. J. Smith (Eds.), Proceedings of HCI International ’89 (pp. 651-

658). Boston: Elsevier Scientific.

Kieras, D. E., & Polson, P. G. (1985). An approach to formal analysis of user

complexity. International Journal of Man-Machine Studies, 22, 365-

394.

Laird, J.E., Newell, A., & Rosenbloom, P. S. (1987). SOAR: An architecture

for general intelligence. Artificial Intelligence, 33, 1-64.

Landauer, T. K. (1987). Relations between cognitive psychology and computer

systems design. In J. M. Carroll (Ed.), Interfacing thought: Cognitive

aspects of human-computer interaction (pp. 1-25). Cambridge, MA:

MIT Press.

Lewis, C. H. (1988). Why and how to learn why: Analysis-based

generalization of procedures. Cognitive Science, 12, 211-256.

Long, J. B. (1987). Cognitive ergonomics and human-computer interaction. In

Warr (Ed.), Psychology at work (3rd ed.). Harmondsworth,
Middlesex: Penguin.

Long, J. B. (1989). Cognitive ergonomics and human-computer interaction: An

introduction. In J. B. Long & A. Whitefield (Eds.), Cognitive

ergonomics and human-computer interaction (pp. 4-34). Cambridge:

Cambridge University Press.

Long, J. B., & Dowell, J. (1989). Conceptions of the discipline of HCI: Craft,

applied science and engineering. In A. Sutcliffe & L. Macaulay (Eds.),

People and computers V (pp. 9-32). Cambridge: Cambridge University

Press.

MacLean, A., Barnard, P., & Wilson, M. (1985). Evaluating the human

interface of a data entry system: User choice and performance measures

yield different trade-off functions. In P. Johnson & S. Cook (Eds.),

People and computers: Designing the interface (pp. 172-185).

Cambridge: Cambridge University Press.

MacLean, A., Young, R. M., & Moran, T. P. (1989). Design rationale: The

argument behind the artefact. In K. Bice & C.H. Lewis (Eds.),

Proceedings of CHI ’89: Human Factors in Computing Systems (pp.

247-252). New York: ACM.

Mack, R., Lewis, C., & Carroll, J.M. (1983). Learning to use word

processors: Problems and prospects. ACM Transactions on Office

Information Systems, 1, 254-271.

Morton, J., Marcus, S., & Frankish, C. (1976). Perceptual centres: P-centres.

Psychological Review, 83, 405-408.

Newell, A. (1989). Unified Theories of Cognition: The 1987 William James

Lectures. Cambridge, MA: Harvard University Press.

Newell, A., & Card, S. K. (1985). The prospects for psychological science in

human-computer interaction. Human-Computer Interaction, 1, 209-242.

Newell, A., & Simon, H. A. (1972). Human Problem Solving. Englewood

Cliffs, NJ: Prentice-Hall.

Norman, D. A. (1983). Design principles for human-computer interaction. In

Proceedings of CHI ’83: Human Factors in Computing Systems (pp. 1-

10). New York: ACM.

Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W.

Draper (Eds.), User centered system design: New perspectives on

human-computer interaction (pp. 31-61). Hillsdale, NJ: Lawrence

Erlbaum Associates.

Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modelling since

GOMS. Human-Computer Interaction, 5, 221-265.

Patterson, R. D. (1983). Guidelines for auditory warnings on civil aircraft: A

summary and prototype. In G. Rossi (Ed.), Noise as a Public Health

Problem (Vol. 2, pp. 1125-1133). Milan: Centro Ricerche e Studi

Amplifon.

Patterson, R. D., Cosgrove, P., Milroy, R., & Lower, M.C. (1989). Auditory

warnings for the British Rail inductive loop warning system. In

Proceedings of the Institute of Acoustics, Spring Conference (Vol. 11,

Pt. 5, pp. 51-58). Edinburgh: Institute of Acoustics.
Patterson, R. D., Edworthy, J., Shailer, M.J., Lower, M.C., & Wheeler, P. D.

(1986). Alarm sounds for medical equipment in intensive care areas and

operating theatres. Institute of Sound and Vibration Research (Report AC

598).

Payne, S., & Green, T. (1986). Task action grammars: A model of the mental

representation of task languages. Human-Computer Interaction, 2, 93-

133.

Polson, P. (1987). A quantitative theory of human-computer interaction. In J. M.

Carroll (Ed.), Interfacing thought: Cognitive aspects of human-computer
interaction (pp. 184-235). Cambridge, MA: MIT Press.

Reisner, P. (1982). Further developments towards using formal grammar as a

design tool. In Proceedings of Human Factors in Computer Systems,

Gaithersburg (pp. 304-308). New York: ACM.

Scapin, D. L. (1981). Computer commands in restricted natural language: Some

aspects of memory and experience. Human Factors, 23, 365-375.

Simon, T. (1988). Analysing the scope of cognitive models in human-computer

interaction. In D. M. Jones & R. Winder (Eds.), People and computers

IV (pp. 79-93). Cambridge: Cambridge University Press.

Suchman, L. (1987). Plans and situated actions: The problem of human-

machine communication. Cambridge: Cambridge University Press.


Thimbleby, H. W. (1985). Generative user-engineering principles for user

interface design. In B. Shackel (Ed.), Human computer interaction:

Interact ’84 (pp. 661-665). Amsterdam: North-Holland.

Whiteside, J., & Wixon, D. (1987). Improving human-computer interaction: A

quest for cognitive science. In J. M. Carroll (Ed.), Interfacing thought:

Cognitive aspects of human-computer interaction (pp. 353-365).

Cambridge, MA: MIT Press.

Wilson, M., Barnard, P. J., Green, T. R. G., & MacLean, A. (1988).

Knowledge-based task analysis for human-computer systems. In G. Van

der Veer, J-M Hoc, T. R. G. Green, & D. Murray (Eds.), Working with

computers (pp. 47-87). London: Academic Press.

Young, R. M., & Barnard, P. J. (1987). The use of scenarios in human-

computer interaction research: Turbocharging the tortoise of cumulative

science. In J. M. Carroll & P. P. Tanner (Eds.), Proceedings of CHI +

GI ’87: Human Factors in Computing Systems and Graphics Interface

(Toronto, April 5-9) (pp. 291-296). New York: ACM.

Young, R. M., Barnard, P.J., Simon, A., & Whittington, J. (1989). How

would your favourite user model cope with these scenarios? SIGCHI

Bulletin, 20(4), 51-55.

Young, R. M., Green, T. R. G., & Simon, T. (1989). Programmable user

models for predictive evaluation of interface designs. In K. Bice & C.

Lewis (Eds.), Proceedings of CHI ’89: Human Factors in Computing
Systems (pp. 15-19). New York: ACM.

Young, R.M., & MacLean, A. (1988). Choosing between methods: Analysing

the user’s decision space in terms of schemas and linear models. In E.

Soloway, D. Frye, & S. B. Sheppard (Eds.), Proceedings of CHI ’88:

Human Factors in Computing Systems (pp. 139-143). New York:

ACM.

 

 


Science Framework Illustration – Barnard (1991) Bridging between Basic Theories and the Artifacts of Human-Computer Interaction

Bridging between Basic Theories and the Artifacts of Human-Computer Interaction

Phil Barnard

In: Carroll, J.M. (Ed.). Designing Interaction: psychology at the human-computer interface.

New York: Cambridge University Press, Chapter 7, 103-127. This is not an exact copy of paper

as it appeared but a DTP lookalike with very slight differences in pagination.

Psychological ideas on a particular set of topics go through something very much

like a product life cycle. An idea or vision is initiated, developed, and

communicated. It may then be exploited, to a greater or lesser extent, within the

research community. During the process of exploitation, the ideas are likely to

be the subject of critical evaluation, modification, or extension. With

developments in basic psychology, the success or penetration of the scientific

product can be evaluated academically by the twin criteria of citation counts and

endurance. As the process of exploitation matures, the idea or vision stimulates

little new research either because its resources are effectively exhausted or

because other ideas or visions that incorporate little from earlier conceptual

frameworks have taken over. At the end of their life cycle, most ideas are

destined to become fossilized under the pressure of successive layers of journals

opened only out of the behavioral equivalent of paleontological interest.

In applied domains, research ideas are initiated, developed, communicated,

and exploited in a similar manner within the research community. Yet, by the

very nature of the enterprise, citation counts and endurance are of largely

academic interest unless ideas or knowledge can effectively be transferred from

research to development communities and then have a very real practical impact

on the final attributes of a successful product.

If we take the past 20-odd years as representing the first life cycle of research

in human-computer interaction, the field started out with few empirical facts and

virtually no applicable theory. During this period a substantial body of work

was motivated by the vision of an applied science based upon firm theoretical

foundations. As the area was developed, there can be little doubt, on the twin

academic criteria of endurance and citation, that some theoretical concepts have

been successfully exploited within the research community. GOMS, of course,

is the most notable example (Card, Moran, & Newell, 1983; Olson & Olson,

1990; Polson, 1987). Yet, as Carroll (e.g., 1989a,b) and others have pointed

out, there are very few examples where substantive theory per se has had a major

and direct impact on design. On this last practical criterion, cognitive science can

more readily provide examples of impact through the application of empirical

methodologies and the data they provide and through the direct application of

psychological reasoning in the invention and demonstration of design concepts

(e.g., see Anderson & Skwarecki, 1986; Card & Henderson, 1987; Carroll,

1989a,b; Hammond & Allinson, 1988; Landauer, 1987).

As this research life cycle in HCI matures, fundamental questions are being

asked about whether or not simple deductions based on theory have any value at

all in design (e.g., Carroll, this volume), or whether behavior in human-computer

interactions is simply too complex for basic theory to have anything other than a

minor practical impact (e.g., see Landauer, this volume). As the next cycle of

research develops, the vision of a strong theoretical input to design runs the risk

of becoming increasingly marginalized or of becoming another fossilized

laboratory curiosity. Making use of a framework for understanding different

research paradigms in HCI, this chapter will discuss how theory-based research

might usefully evolve to enhance its prospects for both adequacy and impact.

Bridging Representations

In its full multidisciplinary context, work on HCI is not a unitary enterprise.

Rather, it consists of many different sorts of design, development, and research

activities. Long (1989) provides an analytic structure through which we can

characterize these activities in terms of the nature of their underlying concepts

and how different types of concept are manipulated and interrelated. Such a

framework is potentially valuable because it facilitates specification of,

comparison between, and evaluation of the many different paradigms and

practices operating within the broader field of HCI.

With respect to the relationship between basic science and its application,

Long makes three points that are fundamental to the arguments to be pursued in

this and subsequent sections. First, he emphasizes that the kind of

understanding embodied in our science base is a representation of the way in

which the real world behaves. Second, any representation in the science base

can only be mapped to and from the real world by what he called “intermediary”

representations. Third, the representations and mappings needed to realize this

kind of two-way conceptual traffic are dependent upon the nature of the activities

they are required to support. So the representations called upon for the purposes

of software engineering will differ from the representations called upon for the

purposes of developing an applicable cognitive theory.

Long’s framework is itself a developing one (1987, 1989; Long & Dowell,

1989). Here, there is no need to pursue the details; it is sufficient to emphasize

that the full characterization of paradigms operating directly with artifact design

differs from those characterizing types of engineering support research, which,

in turn, differ from more basic research paradigms. This chapter will primarily

be concerned with what might need to be done to facilitate the applicability and

impact of basic cognitive theory. In doing so it will be argued that a key role

needs to be played by explicit “bridging” representations. This term will be used

to avoid any possible conflict with the precise properties of Long’s particular

conceptualization.

Following Long (1989), Figure 7.1 shows a simplified characterization of an

applied science paradigm for bridging from the real world of behavior to the

science base and from these representations back to the real world. The blocks

are intended to characterize different sorts of representation and the arrows stand

for mappings between them (Long’s terminology is not always used here). The

real world of the use of interactive software is characterized by organisational,

group, and physical settings; by artifacts such as computers, software, and

manuals; by the real tasks of work; by characteristics of the user population; and

so on. In both applied and basic research, we construct our science not from the

real world itself but via a bridging representation whose purpose is to support

and elaborate the process of scientific discovery.

Obviously, the different disciplines that contribute to HCI each have their

own forms of discovery representation that reflect their paradigmatic

perspectives, the existing contents of their science base, and the target form of

their theory. In all cases the discovery representation incorporates a whole range

of explicit, and more frequently implicit, assumptions about the real world and

methodologies that might best support the mechanics of scientific abstraction. In

the case of standard paradigms of basic psychology, the initial process of

analysis leading to the formation of a discovery representation may be a simple

observation of behavior on some task. For example, it may be noted that

ordinary people have difficulty with particular forms of syllogistic reasoning. In

more applied research, the initial process of analysis may involve much more

elaborate taxonomization of tasks (e.g., Brooks, this volume) or of errors

observed in the actual use of interactive software (e.g., Hammond, Long, Clark,

Barnard, & Morton, 1980).

Conventionally, a discovery representation drastically simplifies the real

world. For the purposes of gathering data about the potential phenomena, a

limited number of contrastive concepts may need to be defined, appropriate

materials generated, tasks selected, observational or experimental designs

determined, populations and metrics selected, and so on. The real world of

preparing a range of memos, letters, and reports for colleagues to consider

before a meeting may thus be represented for the purposes of initial discovery by

[Figure 7.1 appears here as an image in the original page.]

an observational paradigm with a small population of novices carrying out a

limited range of tasks with a particular word processor (e.g., Mack, Lewis, &

Carroll, 1983). In an experimental paradigm, it might be represented

noninteractively by a paired associate learning task in which the mappings

between names and operations need to be learned to some criterion and

subsequently recalled (e.g., Scapin, 1981). Alternatively, it might be

represented by a simple proverb-editing task carried out on two alternative

versions of a cut-down interactive text editor with ten commands. After some

form of instructional familiarization appropriate to a population of computer-naive

members of a Cambridge volunteer subject panel, these commands may be

used an equal number of times with performance assessed by time on task,

errors, and help usage (e.g., Barnard, Hammond, MacLean, & Morton, 1982).

Each of the decisions made contributes to the operational discovery

representation.
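
One way to see the cumulative effect of these decisions is to write the operational discovery representation down as a structured record. The sketch below does so for the proverb-editing example; the field names are inventions of this illustration, not a standard notation.

```python
# Sketch only: the proverb-editing study's operational discovery
# representation as a record of design decisions (invented field names).
from dataclasses import dataclass

@dataclass
class DiscoveryRepresentation:
    contrastive_concepts: tuple
    task: str
    design: str
    population: str
    metrics: tuple

proverb_study = DiscoveryRepresentation(
    contrastive_concepts=("editor version A", "editor version B"),
    task="proverb editing with a cut-down ten-command interactive text editor",
    design="each command used an equal number of times after familiarization",
    population="computer-naive members of a Cambridge volunteer subject panel",
    metrics=("time on task", "errors", "help usage"),
)
print(proverb_study.metrics)
```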

The resulting characterizations of empirical phenomena are potential

regularities of behavior that become, through a process of assimilation,

incorporated into the science base where they can be operated on, or argued

about, in terms of the more abstract, interpretive constructs. The discovery

representations constrain the scope of what is assimilated to the science base and

all subsequent mappings from it.

The conventional view of applied science also implies an inverse process

involving some form of application bridge whose function is to support the

transfer of knowledge in the science base into some domain of application.

Classic ergonomics-human factors relied on the handbook of guidelines. The

relevant processes involve contextualizing phenomena and scientific principles

for some applications domain – such as computer interfaces, telecommunications

apparatus, military hardware, and so on. Once explicitly formulated, say in

terms of design principles, examples and pointers to relevant data, it is left up to

the developers to operate on the representation to synthesize that information

with any other considerations they may have in the course of taking design

decisions. The dominant vision of the first life cycle of HCI research was that

this bridging could effectively be achieved in a harder form through engineering

approximations derived from theory (Card et al., 1983). This vision essentially

conforms to the full structure of Figure 7.1.

The Chasm to Be Bridged

The difficulties of generating a science base for HCI that will support effective

bridging to artifact design are undeniably real. Many of the strategic problems

theoretical approaches must overcome have now been thoroughly aired. The life

cycle of theoretical enquiry and synthesis typically postdates the life cycle of

products with which it seeks to deal; the theories are too low level; they are of

restricted scope; as abstractions from behavior they fail to deal with the real

context of work and they fail to accommodate fine details of implementations and

interactions that may crucially influence the use of a system (see, e.g.,

discussions by Carroll & Campbell, 1986; Newell & Card, 1985; Whiteside &


Wixon, 1987). Similarly, although theory may predict significant effects and

receive empirical support, those effects may be of marginal practical consequence

in the context of a broader interaction or less important than effects not

specifically addressed (e.g., Landauer, 1987).

Our current ability to construct effective bridges across the chasm that

separates our scientific understanding and the real world of user behavior and

artifact design clearly falls well short of requirements. In its relatively short

history, the scope of HCI research on interfaces has been extended from early

concerns with the usability of hardware, through cognitive consequences of

software interfaces, to encompass organizational issues (e.g., Grudin, 1990).

Against this background, what is required is something that might carry a

volume of traffic equivalent to an eight-lane cognitive highway. What is on offer

is more akin to a unidirectional walkway constructed from a few strands of rope

and some planks.

In Taking artifacts seriously, Carroll (1989a) and Carroll, Kellogg, and

Rosson in this volume, mount an impressive case against the conventional view

of the deductive application of science in the invention, design, and development

of practical artifacts. They point to the inadequacies of current information-processing

psychology, to the absence of real historical justification for

deductive bridging in artifact development, and to the paradigm of craft skill in

which knowledge and understanding are directly embodied in artifacts.

Likewise, Landauer (this volume) foresees an equally dismal future for theory-based

design.

Whereas Landauer stresses the potential advances that may be achieved

through empirical modeling and formative evaluation, Carroll and his colleagues

have sought a more substantial adjustment to conventional scientific strategy

(Carroll, 1989a,b, 1990; Carroll & Campbell, 1989; Carroll & Kellogg, 1989;

Carroll et al., this volume). On the one hand they argue that true “deductive”

bridging from theory to application is not only rare, but when it does occur, it

tends to be underdetermined, dubious, and vague. On the other hand they argue

that the form of hermeneutics offered as an alternative by, for example,

Whiteside and Wixon (1987) cannot be systematized for lasting value. From

Carroll’s viewpoint, HCI is best seen as a design science in which theory and

artifact are in some sense merged. By embodying a set of interrelated

psychological claims concerning a product like HyperCard or the Training

Wheels interface (e.g., see Carroll & Kellogg, 1989), the artifacts themselves

take on a theorylike role in which successive cycles of task analysis,

interpretation, and artifact development enable design-oriented assumptions

about usability to be tested and extended.

This viewpoint has a number of inviting features. It offers the potential of

directly addressing the problem of complexity and integration because it is

intended to enable multiple theoretical claims to be dealt with as a system

bounded by the full artifact. Within the cycle of task analysis and artifact

development, the analyses, interpretations, and theoretical claims are intimately

bound to design problems and to the world of “real” behavior. In this context,

knowledge from HCI research no longer needs to be transferred from research

into design in quite the same sense as before and the life cycle of theories should

also be synchronized with the products they need to impact. Within this

framework, the operational discovery representation is effectively the rationale

governing the design of an artifact, whereas the application representation is a

series of user-interaction scenarios (Carroll, 1990).

The kind of information flow around the task-artifact cycle nevertheless

leaves somewhat unclear the precise relationships that might hold between the

explicit theories of the science base and the kind of implicit theories embodied in

artifacts. Early on in the development of these ideas, Carroll (1989a) points out

that such implicit theories may be a provisional medium for HCI, to be put aside

when explicit theory catches up. In a stronger version of the analysis, artifacts

are in principle irreducible to a standard scientific medium such as explicit

theories. Later it is noted that “it may be simplistic to imagine deductive relations

between science and design, but it would be bizarre if there were no relation at

all” (Carroll & Kellogg, 1989). Most recently, Carroll (1990) explicitly

identifies the psychology of tasks as the relevant science base for the form of

analysis that occurs within the task-artifact cycle (e.g., see Greif, this volume;

Norman, this volume). The task-artifact cycle is presumed not only to draw upon

and contextualize knowledge in that science base, but also to provide new

knowledge to assimilate to it. In this latter respect, the current view of the task-

artifact cycle appears broadly to conform with Figure 7.1. In doing so it makes

use of task-oriented theoretical apparatus rather than standard cognitive theory

and novel bridging representations for the purposes of understanding extant

interfaces (design rationale) and for the purposes of engineering new ones

(interaction scenarios).

In actual practice, whether the pertinent theory and methodology is grounded

in tasks, human information-processing psychology or artificial intelligence,

those disciplines that make up the relevant science bases for HCI are all

underdeveloped. Many of the basic theoretical claims are really provisional

claims; they may retain a verbal character (to be put aside when a more explicit

theory arrives), and even if fully explicit, the claims rarely generalize far beyond

the specific empirical settings that gave rise to them. In this respect, the wider

problem of how we go about bridging to and from a relevant science base

remains a long-term issue that is hard to leave unaddressed. Equally, any

research viewpoint that seeks to maintain a productive role for the science base in

artifact design needs to be accompanied by a serious reexamination of the

bridging representations used in theory development and in their application.

Science and design are very different activities. Given Figure 7.1, theory-based

design can never be direct; the full bridge must involve a transformation of

information in the science base to yield an applications representation, and

information in this structure must be synthesized into the design problem. In

much the same way that the application representation is constructed to support

design, our science base, and any mappings from it, could be better constructed

to support the development of effective application bridging. The model for

relating science to design is indirect, involving theoretical support for


engineering representations (both discovery and applications) rather than one

involving direct theoretical support in design.

The Science Base and Its Application

In spite of the difficulties, the fundamental case for the application of cognitive

theory to the design of technology

Comment 1

Cognitive theory, here, is part of the discipline of Psychology, which in turn sees itself as a Science Discipline.

remains very much what it was 20 years ago,

and indeed what it was 30 years ago (e.g., Broadbent, 1958). Knowledge

assimilated to the science base and synthesized into models or theories should

reduce our reliance on purely empirical evaluations. It offers the prospect of

supporting a deeper understanding of design issues and how to resolve them.

Comment 2

Understanding, here, in the manner of science is taken to mean the explanation and prediction of phenomena.

Indeed, Carroll and Kellogg’s (1989) theory nexus has developed out of a

cognitive paradigm rather than a behaviorist one. Although theory development

lags behind the design of artifacts, it may well be that the science base has more

to gain than the artifacts. The interaction of science and design nevertheless

should be a two-way process of added value.

Comment 3

What the science of Psychology base has to gain, here, is taken to be the phenomena associated with humans interacting with computers (Comment 2).

Much basic theoretical work involves the application of only partially explicit

and incomplete apparatus to specific laboratory tasks. It is not unreasonable to

argue that our basic cognitive theory tends only to be successful for modeling a

particular application. That application is itself behavior in laboratory tasks. The

scope of the application is delimited by the empirical paradigms and the artifacts

it requires – more often than not these days, computers and software for

presentation of information and response capture. Indeed, Carroll’s task-artifact

and interpretation cycles could very well be used to provide a neat description of

the research activities involved in the iterative design and development of basic

theory. The trouble is that the paradigms of basic psychological research, and

the bridging representations used to develop and validate theory, typically

involve unusually simple and often highly repetitive behavioral requirements

atypical of those faced outside the laboratory.

Comment 4

Behavioural requirements, here, comprise the human-computer interaction phenomena, which the science of Psychology seeks to understand by means of Cognitive Theory.

Although it is clear that there are many cases of invention and craft where the

kinds of scientific understanding established in the laboratory play little or no

role in artifact development (Carroll, 1989b), this is only one side of the story.

The other side is that we should only expect to find effective bridging when what

is in the science base is an adequate representation of some aspect of the real

world that is relevant to the specific artifact under development.

Comment 5

See Comment 4.

In this context it is worth considering a couple of examples not usually called into play in the

HCI domain. Psychoacoustic models of human hearing are well developed. Auditory

warning systems on older generations of aircraft are notoriously loud and

unreliable. Pilots don’t believe them and turn them off. Using standard

techniques, it is possible to measure the noise characteristics of the environment

on the flight deck of a particular aircraft and to design a candidate set of warnings

based on a model of the characteristics of human hearing. This determines

whether or not pilots can be expected to “hear” and identify those warnings over

the pattern of background noise without being positively deafened and distracted

(e.g., Patterson, 1983). Of course, the attention-getting and discriminative

properties of members of the full set of warnings still have to be crafted. Once

established, the extension of the basic techniques to warning systems in hospital

intensive-care units (Patterson, Edworthy, Shailer, Lower, & Wheeler, 1986)

and trains (Patterson, Cosgrove, Milroy, & Lower, 1989) is a relatively routine

matter.
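
The logic of such an audibility check is straightforward to sketch. Below, a candidate warning is compared with measured background noise band by band; all the decibel figures and margins are invented for illustration and are not Patterson's published values.

```python
# Toy audibility check (invented levels and margins, for illustration only):
# a warning should exceed the background noise by enough to be heard, but
# not by so much that it deafens and distracts.

background_db = {250: 78, 500: 74, 1000: 70, 2000: 66}  # noise per band (Hz)
warning_db    = {250: 85, 500: 90, 1000: 88, 2000: 84}  # candidate warning

MIN_MARGIN, MAX_MARGIN = 15, 25  # illustrative bounds in dB

for band in sorted(background_db):
    margin = warning_db[band] - background_db[band]
    verdict = "ok" if MIN_MARGIN <= margin <= MAX_MARGIN else "re-craft"
    print(f"{band:>5} Hz: margin {margin:+d} dB -> {verdict}")
```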

Developed further and automated, the same kind of psychoacoustic model

can play a direct role in invention. As the front end to a connectionist speech

recognizer, it offers the prospect of a theoretically motivated coding structure that

could well prove to outperform existing technologies (e.g., see ACTS, 1989).

As used in invention, what is being embodied in the recognition artifact is an

integrated theory about the human auditory system rather than a simple heuristic

combination of current signal-processing technologies.

Another case arises out of short-term memory research. Happily, this one

does not concern limited capacity! When the research technology for short-term

memory studies evolved into a computerized form, it was observed that word

lists presented at objectively regular time intervals (onset to onset times for the

sound envelopes) actually sounded irregular. In order to be perceived as regular

the onset to onset times need to be adjusted so that the “perceptual centers” of the

words occur at equal intervals (Morton, Marcus, & Frankish, 1976). This

science base representation, and algorithms derived from it, can find direct use in

telecommunications technology or speech interfaces where there is a requirement

for the automatic generation of natural sounding number or option sequences.
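
The adjustment itself is simple to express. Given an estimate of each word's p-center offset from its physical onset, physical onsets can be scheduled so that the p-centers fall at equal intervals; the words and offsets below are invented for illustration.

```python
# Sketch of the P-centre adjustment: schedule physical onsets so that the
# perceptual centres fall at regular intervals. Offsets are invented values.

def onset_times(p_centre_offsets_ms, interval_ms):
    """Physical onsets placing each word's p-centre every `interval_ms`."""
    raw = [i * interval_ms - off for i, off in enumerate(p_centre_offsets_ms)]
    shift = min(raw)                      # normalize so the first onset is >= 0
    return [t - shift for t in raw]

words, offsets = ["one", "seven", "three"], [40, 95, 60]   # ms to p-centre

for word, t in zip(words, onset_times(offsets, interval_ms=500)):
    print(f"{word:>6}: physical onset at {t} ms")
```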

Of course, both of these examples are admittedly relatively “low level.” For

many higher level aspects of cognition, what is in the science base are

representations of laboratory phenomena of restricted scope and accounts of

them. What would be needed in the science base to provide conditions for

bridging are representations of phenomena much closer to those that occur in the

real world. So, for example, the theoretical representations should be topicalized

on phenomena that really matter in applied contexts (Landauer, 1987).

Comment 6

See Comments 2 and 4.

They should be theoretical representations dealing with extended sequences of

cognitive behavior rather than discrete acts. They should be representations of

information-rich environments rather than information-impoverished ones. They

should relate to circumstances where cognition is not a pattern of short repeating

(experimental) cycles but where any cycles that might exist have meaning in

relation to broader task goals and so on.

Comment 7

Task goals imply the requirement for lower-level descriptions of the human-computer interactions, which constitute the phenomena to be understood by the science base of Psychology, as expressed in Cognitive Theory.

It is not hard to pursue points about what the science base might incorporate

in a more ideal world. Nevertheless, it does contain a good deal of useful

knowledge (cf. Norman, 1986), and indeed the first life cycle of HCI research

has contributed to it. Many of the major problems with the appropriateness,

scope, integration, and applicability of its content have been identified. Because

major theoretical perestroika will not be achieved overnight, the more productive

questions concern the limitations on the bridging representations of that first

cycle of research and how discovery representations and applications

representations might be more effectively developed in subsequent cycles.

An Analogy with Interface Design Practice

Not surprisingly, those involved in the first life cycle of HCI research relied very

heavily in the formation of their discovery representations on the methodologies

of the parent discipline.

Comment 8

Research, here, refers to the acquisition of Cognitive Theory as scientific knowledge, whose parent discipline is Psychology.

Likewise, in bridging from theory to application, those

involved relied heavily on the standard historical products used in the verification

of basic theory, that is, prediction of patterns of time and/or errors.

Comment 9

Verification , here, is taken to include validation, which in turn comprises: conceptualisation; operationalisation; test; and generalisation.

There are relatively few examples where other attributes of behavior are modeled, such as

choice among action sequences (but see Young & MacLean, 1988). A simple

bridge, predictive of times or errors, provides information about the user of an

interactive system. The user of that information is the designer, or more usually

the design team. Frameworks are generally presented for how that information

might be used to support design choice either directly (e.g., Card et al., 1983) or

through trade-off analyses (e.g., Norman, 1983).

Comment 10

Applied frameworks, as referenced here, are clearly different from, but dependent on, science/Psychology/Cognitive Theory frameworks.

However, these forms of

application bridge remain underdeveloped relative to the real needs of designers.

Given the general dictum of human factors research, “Know the user”

(Hanson, 1971), it is remarkable how few explicitly empirical studies of design

decision making are reported in the literature. In many respects, it would not be

entirely unfair to argue that bridging representations between theory and design

have remained problematic for the same kinds of reasons that early interactive

interfaces were problematic. Like glass teletypes, basic psychological

technologies were underdeveloped and, like the early design of command

languages, the interfaces (application representations) were heuristically

constructed by applied theorists around what they could provide rather than by

analysis of requirements or extensive study of their target users or the actual

context of design (see also Bannon & Bødker, this volume; Henderson, this

volume).

Equally, in addressing questions associated with the relationship between

theory and design, the analogy can be pursued one stage further by arguing for

the iterative design of more effective bridging structures. Within the first life

cycle of HCI research a goodly number of lessons have been learned that could

be used to advantage in a second life cycle. So, to take a very simple example,

certain forms of modeling assume that users naturally choose the fastest method

for achieving their goal. However, there is now some evidence that this is not

always the case (e.g., MacLean, Barnard, & Wilson, 1985). Any role for the

knowledge and theory embodied in the science base must accommodate, and

adapt to, those lessons.

Comment 11

Knowledge and theory in the science base seek to understand the phenomena associated with humans interacting with computers. Cognitive Theory, then, requires frameworks at the detailed level of those interactions. See also Comment 7.

For many of the reasons that Carroll and others have

elaborated, simple deductive bridging is problematic. To achieve impact,

behavioral engineering research must itself directly support the design,

development, and invention of artifacts. On any reasonable time scale there is a

need for discovery and application representations that cannot be fully justified

through science-base principles or data. Nonetheless, such a requirement simply

restates the case for some form of cognitive engineering paradigm. It does not in

and of itself undermine the case for the longer-term development of applicable

theory.

Comment 12

Cognitive Science and Cognitive Engineering Paradigms are clearly distinguished here. This distinction is consistent with the position taken by Frameworks for HCI on this site.

Just as impact on design has most readily been achieved through the

application of psychological reasoning in the invention and demonstration of

artifacts, so a meaningful impact of theory might best be achieved through the

invention and demonstration of novel forms of applications representations. The

development of representations to bridge from theory to application cannot be

taken in isolation. It needs to be considered in conjunction with the contents of

the science base itself and the appropriateness of the discovery representations

that give rise to them.

Without attempting to be exhaustive, the remainder of this chapter will

exemplify how discovery representations might be modified in the second life

cycle of HCI research; and illustrate how theory might drive, and itself benefit

from, the invention and demonstration of novel forms of applications bridging.

Enhancing Discovery Representations

Although disciplines like psychology have a formidable array of methodological

techniques, those techniques are primarily oriented toward hypothesis testing.

Here, greatest effort is expended in using factorial experimental designs to

confirm or disconfirm a specific theoretical claim. Often wider characteristics of

phenomena are only charted as and when properties become a target of specific

theoretical interest. Early psycholinguistic research did not start off by studying

what might be the most important factors in the process of understanding and

using textual information. It arose out of a concern with transformational

grammars (Chomsky, 1957). In spite of much relevant research in earlier

paradigms (e.g., Bartlett, 1932), psycholinguistics itself only arrived at this

consideration after progressing through the syntax, semantics, and pragmatics of

single-sentence comprehension.

As Landauer (1987) has noted, basic psychology has not been particularly

productive at evolving exploratory research paradigms. One of the major

contributions of the first life cycle of HCI research has undoubtedly been a

greater emphasis on demonstrating how such empirical paradigms can provide

information to support design (again, see Landauer, 1987). Techniques for

analyzing complex tasks, in terms of both action decomposition and knowledge

requirements, have also progressed substantially over the past 20 years (e.g.,

Wilson, Barnard, Green, & MacLean, 1988).

A significant number of these developments are being directly assimilated

into application representations for supporting artifact development. Some can

also be assimilated into the science base, such as Lewis’s (1988) work on

abduction. Here observational evidence in the domain of HCI (Mack et al.,

1983) leads directly to theoretical abstractions concerning the nature of human

reasoning. Similarly, Carroll (1985) has used evidence from observational and

experimental studies in HCI to extend the relevant science base on naming and

reference. However, not a lot has changed concerning the way in which

discovery representations are used for the purposes of assimilating knowledge to

the science base and developing theory.

In their own assessment of progress during the first life cycle of HCI

research, Newell and Card (1985) advocate continued reliance on the hardening

of HCI as a science. This implicitly reinforces classic forms of discovery

representations based upon the tools and techniques of parent disciplines. Heavy

reliance on the time-honored methods of experimental hypothesis testing in

experimental paradigms does not appear to offer a ready solution to the two

problems dealing with theoretical scope and the speed of theoretical advance.

Likewise, given that these parent disciplines are relatively weak on exploratory

paradigms, such an approach does not appear to offer a ready solution to the

other problems of enhancing the science base for appropriate content or for

directing its efforts toward the theoretical capture of effects that really matter in

applied contexts.

The second life cycle of research in HCI might profit substantially by

spawning more effective discovery representations, not only for assimilation to

applications representations for cognitive engineering, but also to support

assimilation of knowledge to the science base and the development of theory.

Two examples will be reviewed here. The first concerns the use of evidence

embodied in HCI scenarios (Young & Barnard, 1987; Young, Barnard, Simon,

& Whittington, 1989). The second concerns the use of protocol techniques to

systematically sample what users know and to establish relationships between

verbalizable knowledge and actual interactive performance.

Test-driving Theories

Young and Barnard (1987) have proposed that more rapid theoretical advance

might be facilitated by “test driving” theories in the context of a systematically

sampled set of behavioral scenarios. The research literature frequently makes

reference to instances of problematic or otherwise interesting user-system

exchanges. Scenario material derived from that literature is selected to represent

some potentially robust phenomenon of the type that might well be pursued in

more extensive experimental research. Individual scenarios should be regarded

as representative of the kinds of things that really matter in applied settings. So

for example, one scenario deals with a phenomenon often associated with

unselected windows. In a multiwindowing environment a persistent error,

frequently committed even by experienced users, is to attempt some action in an

inactive window. The action might be an attempt at a menu selection. However,

pointing and clicking over a menu item does not cause the intended result; it

simply leads to the window being activated. Very much like linguistic test

sentences, these behavioral scenarios are essentially idealized descriptions of

such instances of human-computer interactions.

If we are to develop cognitive theories of significant scope they must in

principle be able to cope with a wide range of such scenarios. Accordingly, a

manageable set of scenario material can be generated that taps behaviors that

encompass different facets of cognition. So, a set of scenarios might include

instances dealing with locating information in a directory entry, selecting

alternative methods for achieving a goal, lexical errors in command entry, the

unselected windows phenomenon, and so on (see Young, Barnard, Simon, &

Whittington, 1989). A set of contrasting theoretical approaches can likewise be

selected and the theories and scenarios organized into a matrix. The activity

involves taking each theoretical approach and attempting to formulate an account

of each behavioral scenario. The accuracy of the account is not at stake. Rather,

the purpose of the exercise is to see whether a particular piece of theoretical

apparatus is even capable of giving rise to a plausible account. The scenario

material is effectively being used as a set of sufficiency filters and it is possible to

weed out theories of overly narrow scope. If an approach is capable of

formulating a passable account, interest focuses on the properties of the account

offered. In this way, it is also possible to evaluate and capitalize on the

properties of theoretical apparatus that do provide appropriate sorts of analytic

leverage over the range of scenarios examined.
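The mechanics of the exercise can be pictured as a small data structure. The following Python sketch is purely illustrative (the theory labels and the plausibility judgements entered in the matrix are invented), but it shows the shape of the theories-by-scenarios matrix and its use as a sufficiency filter.

scenarios = ["unselected windows", "lexical command errors",
             "method choice", "directory lookup"]

# Hypothetical coverage judgements, for illustration only: True means the
# approach can formulate a plausible account of the scenario at all;
# accuracy of the account is not at stake.
theories = {
    "time-prediction model": {"unselected windows": False,
                              "lexical command errors": False,
                              "method choice": True,
                              "directory lookup": True},
    "grammar-based model":   {"unselected windows": False,
                              "lexical command errors": True,
                              "method choice": False,
                              "directory lookup": False},
    "architecture model":    {s: True for s in scenarios},
}

# Use the scenario set as a sufficiency filter: weed out approaches of
# overly narrow scope, and focus interest on the accounts of the rest.
for name, accounts in theories.items():
    covered = [s for s in scenarios if accounts[s]]
    status = "passes" if len(covered) == len(scenarios) else "too narrow"
    print(f"{name}: {len(covered)}/{len(scenarios)} scenarios ({status})")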

Traditionally, theory development places primary emphasis on predictive

accuracy and only secondary emphasis on scope. This particular form of

discovery representation goes some way toward redressing that balance. It

offers the prospect of getting appropriate and relevant theoretical apparatus in

place on a relatively short time cycle. As an exploratory methodology, it at least

addresses some of the more profound difficulties of interrelating theory and

application. The scenario material makes use of known instances of human-computer

interaction. Because these scenarios are by definition instances of

interactions, any theoretical accounts built around them must of necessity be

appropriate to the domain. Because scenarios are intended to capture significant

aspects of user behavior, such as persistent errors, they are oriented toward what

matters in the applied context. As a quick and dirty methodology, it can make

effective use of the accumulated knowledge acquired in the first life cycle of HCI

research, while avoiding some of the worst “tar pits” (Norman, 1983) of

traditional experimental methods.

As a form of discovery bridge between application and theory, the real world

is represented, for some purpose, not by a local observation or example, but by a

sampled set of material. If the purpose is to develop a form of cognitive

architecture, then it may be most productive to select a set of scenarios that

encompass different components of the cognitive system (perception, memory,

decision making, control of action). Once an applications representation has

been formed, its properties might be further explored and tested by analyzing

scenario material sampled over a range of different tasks, or applications

domains (see Young & Barnard, 1987). At the point where an applications

representation is developed, the support it offers may also be explored by

systematically sampling a range of design scenarios and examining what

information can be offered concerning alternative interface options (AMODEUS,

1989). By contrast with more usual discovery representations, the scenario

methodology is not primarily directed at classic forms of hypothesis testing and

validation. Rather, its purpose is to support the generation of more readily

applicable theoretical ideas.

Verbal Protocols and Performance

One of the most productive exploratory methodologies utilized in HCI research

has involved monitoring user action while collecting concurrent verbal protocols

to help understand what is actually going on. Taken together these have often

given rise to the best kinds of problem-defining evidence, including the kind of

scenario material already outlined. Many of the problems with this form of

evidence are well known. Concurrent verbalization may distort performance and

significant changes in performance may not necessarily be accompanied by

changes in articulatable knowledge. Because it is labor intensive, the

observations are often confined to a very small number of subjects and tasks. In

consequence, the representativeness of isolated observations is hard to assess.

Furthermore, getting real scientific value from protocol analysis is crucially

dependent on the insights and craft skill of the researcher concerned (Barnard,

Wilson, & MacLean, 1986; Ericsson & Simon, 1980).

Techniques of verbal protocol analysis can nevertheless be modified and

utilized as a part of a more elaborate discovery representation to explore and

establish systematic relationships between articulatable knowledge and

performance. The basic assumption underlying much theory is that a

characterization of the ideal knowledge a user should possess to successfully

perform a task can be used to derive predictions about performance. However,

protocol studies clearly suggest that users really get into difficulty when they

have erroneous or otherwise nonideal knowledge. In terms of the precise

relationships they have with performance, ideal and nonideal knowledge are

seldom considered together.

In an early attempt to establish systematic and potentially generalizable

relationships between the contents of verbal protocols and interactive

performance, Barnard et al., (1986) employed a sample of picture probes to elicit

users’ knowledge of tasks, states, and procedures for a particular office product

at two stages of learning. The protocols were codified, quantified, and

compared. In the verbal protocols, the number of true claims about the system

increased with system experience, but surprisingly, the number of false claims

remained stable. Individual users who articulated a lot of correct claims

generally performed well, but the amount of inaccurate knowledge did not appear

related to their overall level of performance. There was, however, some

indication that the amount of inaccurate knowledge expressed in the protocols

was related to the frequency of errors made in particular system contexts.

A subsequent study (Barnard, Ellis, & MacLean, 1989) used a variant of the

technique to examine knowledge of two different interfaces to the same

application functionality. High levels of inaccurate knowledge expressed in the

protocols were directly associated with the dialogue components on which

problematic performance was observed. As with the earlier study, the amount of

accurate knowledge expressed in any given verbal protocol was associated with

good performance, whereas the amount of inaccurate knowledge expressed bore

little relationship to an individual’s overall level of performance. Both studies

reinforced the speculation that it is specific interface characteristics that give rise

to the development of inaccurate or incomplete knowledge from which false

inferences and poor performance may follow.
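The quantification step in these studies can be illustrated with a toy analysis. All of the users, claim counts, and performance scores below are invented, and the real coding schemes were far more elaborate; the sketch simply shows how counts of accurate and inaccurate claims, once codified, can be related separately to performance (statistics.correlation requires Python 3.10 or later).

from statistics import correlation  # Python 3.10+

# Invented per-user counts of protocol claims judged true or false against
# the system model, plus an overall performance score (higher is better).
users = {
    "u1": {"true_claims": 14, "false_claims": 5, "performance": 0.82},
    "u2": {"true_claims": 9,  "false_claims": 6, "performance": 0.61},
    "u3": {"true_claims": 17, "false_claims": 6, "performance": 0.90},
    "u4": {"true_claims": 7,  "false_claims": 5, "performance": 0.55},
}

perf = [u["performance"] for u in users.values()]
true_c = [u["true_claims"] for u in users.values()]
false_c = [u["false_claims"] for u in users.values()]

# The pattern reported in the text: accurate knowledge tracks overall
# performance, whereas inaccurate knowledge bears little relation to it.
print("r(true claims, performance)  =", round(correlation(true_c, perf), 2))
print("r(false claims, performance) =", round(correlation(false_c, perf), 2))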

Just as the systematic sampling and use of behavioral scenarios may facilitate

the development of theories of broader scope, so discovery representations

designed to systematically sample the actual knowledge possessed by users

should facilitate the incorporation into the science base of behavioral regularities

and theoretical claims that are more likely to reflect the actual basis of user

performance rather than a simple idealization of it.

Enhancing Application Representations

The application representations of the first life cycle of HCI research relied very

much on the standard theoretical products of their parent disciplines.

Grammatical techniques originating in linguistics were utilized to characterize the

complexity of interactive dialogues; artificial intelligence (AI)-oriented models

were used to represent and simulate the knowledge requirements of learning;

and, of course, derivatives of human information-processing models were used

to calculate how long it would take users to do things. Although these

approaches all relied upon some form of task analysis, their apparatus was

directed toward some specific function. They were all of limited scope and made

numerous trade-offs between what was modeled and the form of prediction made

(Simon, 1988).

Some of the models were primarily directed at capturing knowledge

requirements for dialogues for the purposes of representing complexity, such as

BNF grammars (Reisner, 1982) and Task Action Grammars (Payne & Green,

1986). Others focused on interrelationships between task specifications and

knowledge requirements, such as GOMS analyses and cognitive-complexity

theory (Card et al., 1983; Kieras & Polson, 1985). Yet other apparatus, such as

the model human information processor and the keystroke level model of Card et al.

(1983) were primarily aimed at time prediction for the execution of error-free

routine cognitive skill. Most of these modeling efforts idealized either the

knowledge that users needed to possess or their actual behavior. Few models

incorporated apparatus for integrating over the requirements of knowledge

acquisition or use and human information-processing constraints (e.g., see

Barnard, 1987). As application representations, the models of the first life cycle

had little to say about errors or the actual dynamics of user-system interaction as

influenced by task constraints and information or knowledge about the domain of

application itself.

Two modeling approaches will be used to illustrate how applications

representations might usefully be enhanced. They are programmable user

models (Young, Green, & Simon, 1989) and modeling based on Interacting

Cognitive Subsystems (Barnard, 1985). Although these approaches have

different origins, both share a number of characteristics. They are both aimed at

modeling more qualitative aspects of cognition in user-system interaction; both

are aimed at understanding how task, knowledge, and processing constraint

intersect to determine performance; both are aimed at exploring novel means of

incorporating explicit theoretical claims into application representations; and both

require the implementation of interactive systems for supporting decision making

in a design context. Although they do so in different ways, both approaches

attempt to preserve a coherent role for explicit cognitive theory. Cognitive theory

is embodied, not in the artifacts that emerge from the development process, but

in demonstrator artifacts that might support design. This is almost directly

analogous to achieving an impact in the marketplace through the application of

psychological reasoning in the invention of artifacts. Except in this case, the

target user populations for the envisaged artifacts are those involved in the design

and development of products.

Programmable User Models (PUMs)

The core ideas underlying the notion of a programmable user model have their

origins in the concepts and techniques of AI. Within AI, cognitive architectures

are essentially sets of constraints on the representation and processing of

knowledge. In order to achieve a working simulation, knowledge appropriate to

the domain and task must be represented within those constraints. In the normal

simulation methodology, the complete system is provided with some data and,

depending on its adequacy, it behaves with more or less humanlike properties.

Using a simulation methodology to provide the designer with an artificial

user would be one conceivable tactic. Extending the forms of prediction offered

by such simulations (cf. cognitive complexity theory; Polson, 1987) to

encompass qualitative aspects of cognition is more problematic. Simply

simulating behavior is of relatively little value. Given the requirements of

knowledge-based programming, it could, in many circumstances, be much more

straightforward to provide a proper sample of real users. There needs to be

some mechanism whereby the properties of the simulation provide information

of value in design. Programmable user models provide a novel perspective on

this latter problem. The idea is that the designer is provided with two things, an

“empty” cognitive architecture and an instruction language for providing it with all

the knowledge it needs to carry out some task. By programming it, the designer

has to get the architecture to perform that task under conditions that match those

of the interactive system design (i.e., a device model). So, for example, given a

particular dialog design, the designer might have to program the architecture to

select an object displayed in a particular way on a VDU and drag it across that

display to a target position.

The key, of course, is that the constraints that make up the architecture being

programmed are humanlike. Thus, if the designer finds it hard to get the

architecture to perform the task, then the implication is that a human user would

also find the task hard to accomplish. To concretize this, the designer may find

that the easiest form of knowledge-based program tends to select and drag the

wrong object under particular conditions. Furthermore, it takes a lot of thought

and effort to figure out how to get round this problem within the specific

architectural constraints of the model. Now suppose the designer were to adjust

the envisaged user-system dialog in the device model and then found that

reprogramming the architecture to carry out the same task under these new

conditions was straightforward and the problem of selecting the wrong object no

longer arose. Young and his colleagues would then argue that this constitutes

direct evidence that the second version of the dialog design tried by the designer

is likely to prove more usable than the first.
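The logic of the argument can be caricatured in a few lines of code. The sketch below is emphatically not SOAR or a working PUM: the single architectural "constraint" and the two dialog designs are invented solely to show how the effort of programming the model is read as a usability signal.

# Toy constraint of a pretend architecture: it can only discriminate
# targets by properties it is explicitly instructed about, and each
# extra discriminating step adds programming (and, by hypothesis,
# user) effort.

def instructions_needed(design):
    """Count the instruction steps needed to select and drag the
    right object reliably under this design."""
    steps = ["locate target"]
    if design["targets_look_alike"]:
        # The model keeps selecting the wrong object unless explicit
        # disambiguation steps are programmed in.
        steps += ["compare candidate objects", "verify choice before drag"]
    steps += ["drag to destination"]
    return steps

dialog_v1 = {"targets_look_alike": True}   # visually confusable objects
dialog_v2 = {"targets_look_alike": False}  # visually distinct objects

for name, design in [("v1", dialog_v1), ("v2", dialog_v2)]:
    steps = instructions_needed(design)
    print(name, "-", len(steps), "instructions:", "; ".join(steps))

# If v2 proves markedly easier to program, the inference is that it
# will also prove easier for a human user.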

The actual project to realize a working PUM remains at an early stage of

development. The cognitive architecture being used is SOAR (Laird, Newell, &

Rosenbloom, 1987). There are many detailed issues to be addressed concerning

the design of an appropriate instruction language. Likewise, real issues are

raised about how a model that has its roots in architectures for problem solving

(Newell & Simon, 1972) deals with the more peripheral aspects of human

information processing, such as sensation, perception, and motor control.

Nevertheless, as an architecture, it has scope in the sense that a broad range of

tasks and applications can be modeled within it. Indeed, part of the motivation

of SOAR is to provide a unified general theory of cognition (Newell, 1989).

In spite of its immaturity, additional properties of the PUM concept as an

application bridging structure are relatively clear (see Young et al., 1989). First,

programmable user models embody explicit cognitive theory in the form of the

to-be-programmed architecture. Second, there is an interesting allocation of

function between the model and the designer. Although the modeling process

requires extensive operationalization of knowledge in symbolic form, the PUM

provides only the constraints and the instruction language, whereas the designer

provides the knowledge of the application and its associated tasks. Third,

knowledge in the science base is transmitted implicitly into the design domain via

an inherently exploratory activity. Designers are not told about the underlying

cognitive science; they are supposed to discover it. By doing what they know

how to do well – that is, programming – the relevant aspects of cognitive

constraints and their interactions with the application should emerge directly in

the design context.

Fourth, programmable user models support a form of qualitative predictive

evaluation that can be carried out relatively early in the design cycle. What that

evaluation provides is not a classic predictive product of laboratory theory, rather

it should be an understanding of why it is better to have the artifact constructed

one way rather than another. Finally, although the technique capitalizes on the

designer’s programming skills, it clearly requires a high degree of commitment

and expense. The instruction language has to be learned and doing the

programming would require the development team to devote considerable

resources to this form of predictive evaluation.

Approximate Models of Cognitive Activity

Interacting Cognitive Subsystems (Barnard, 1985) also specifies a form of

cognitive architecture. Rather than being an AI constraint-based architecture,

ICS has its roots in classic human information-processing theory. It specifies

the processing and memory resources underlying cognition, the organization of

these resources, and principles governing their operation. Structurally, the

complete human information-processing system is viewed as a distributed

architecture with functionally distinct subsystems each specializing in, and

supporting, different types of sensory, representational, and effector processing

activity. Unlike many earlier generations of human information-processing

models, there are no general purpose resources such as a central executive or

limited capacity working memory. Rather the model attempts to define and

characterize processes in terms of the mental representations they take as input

and the representations they output. By focusing on the mappings between

different mental representations, this model seeks to integrate a characterization

of knowledge-based processing activity with classic structural constraints on the

flow of information within the wider cognitive system.

A graphic representation of this architecture is shown in the right-hand panel

of Figure 7.2, which instantiates Figure 7.1 for the use of the ICS framework in

an HCI context. The architecture itself is part of the science base. Its initial

development was supported by using empirical evidence from laboratory studies

of short-term memory phenomena (Barnard, 1985). However, by concentrating

on the different types of mental representation and process that transform them,

rather than task and paradigm specific concepts, the model can be applied across

a broad range of settings (e.g., see Barnard & Teasdale, 1991). Furthermore,

for the purposes of constructing a representation to bridge between theory and

application it is possible to develop explicit, yet approximate, characterizations of

cognitive activity.

In broad terms, the way in which the overall architecture will behave is

dependent upon four classes of factor. First, for any given task it will depend on

the precise configuration of cognitive activity. Different subsets of processes

and memory records will be required by different tasks. Second, behavior will

be constrained by the specific procedural knowledge embodied in each mental

process that actually transforms one type of mental representation to another.

Third, behavior will be constrained by the form, content, and accessibility of any

memory records that are needed in that phase of activity. Fourth, it will depend on

the overall way in which the complete configuration is coordinated and

controlled.

Because the resources are relatively well defined and constrained in terms of

their attributes and properties, interdependencies between them can be motivated

on the basis of known patterns of experimental evidence and rendered explicit.

So, for example, a complexity attribute of the coordination and control of

cognitive activity can be directly related to the number of incompletely

proceduralized processes within a specified configuration. Likewise, a strategic

attribute of the coordination and control of cognitive activity may be dependent

upon the overall amount of order uncertainty associated with the mental

representation of a task stored in a memory record. For present purposes the

precise details of these interdependencies do not matter, nor does the particularly

opaque terminology shown in the rightmost panel of Figure 7.2 (for more

details, see Barnard, 1987). The important point is that theoretical claims can be

specified within this framework at a high level of abstraction and that these

abstractions belong in the science base.

Although these theoretical abstractions could easily have come from classic

studies of human memory and performance, they were in fact motivated by

experimental studies of command naming in text editing (Grudin & Barnard,

1984) and performance on an electronic mailing task (Barnard, MacLean, &

Hammond, 1984). The full theoretical analyses are described in Barnard (1987)

and extended in Barnard, Grudin, and MacLean (1989). In both cases the tasks

were interactive, involved extended sequences of cognitive behavior, involved

information-rich environments, and the repeating patterns of data collection were

meaningful in relation to broader task goals, not atypical of interactive tasks in the

real world. In relation to the arguments presented earlier in this chapter, the

information being assimilated to the science base should be more appropriate and

relevant to HCI than that derived from more abstract laboratory paradigms. It

will nonetheless be subject to interpretive restrictions inherent in the particular

form of discovery representation utilized in the design of these particular

experiments.

Armed with such theoretical abstractions, and accepting their potential

limitations, it is possible to generate a theoretically motivated bridge to

application. The idea is to build approximate models that describe the nature of

cognitive activity underlying the performance of complex tasks. The process is

actually carried out by an expert system that embodies the theoretical knowledge

required to build such models. The system “knows” what kinds of

configurations are associated with particular phases of cognitive activity; it

“knows” something about the conditions under which knowledge becomes

proceduralized, and the properties of memory records that might support recall

and inference in complex task environments. It also “knows” something about

the theoretical interdependencies between these factors in determining the overall

patterning, complexity, and qualities of the coordination and dynamic control of

cognitive activity. Abstract descriptions of cognitive activity are constructed in

terms of a four-component model specifying attributes of configurations,

procedural knowledge, record contents, and dynamic control. Finally, in order

to produce an output, the system “knows” something about the relationships

between these abstract models of cognitive activity and the attributes of user

behaviour.

 

 

Figure 7.2. The applied science paradigm instantiated for the use of interacting cognitive subsystems as a theoretical basis for the development of an expert system design aid.

Obviously, no single model of this type can capture everything that goes on

in a complex task sequence. Nor can a single model capture different stages of

user development or other individual differences within the user population. It is

therefore necessary to build a set of interrelated models representing different

phases of cognitive activity, different levels and forms of user expertise, and so

on. The basic modeling unit uses the four-component description to characterize

cognitive activity for a particular phase, such as establishing a goal, determining

the action sequence, and executing it. Each of these models approximates over

the very short-term dynamics of cognition. Transitions between phases

approximate over the short-term dynamics of tasks, whereas transitions between

levels of expertise approximate over different stages of learning. In Figure 7.2,

the envisaged application representation thus consists of a family of interrelated

models depicted graphically as a stack of cards.

Like the concept of programmable user models, the concept of approximate

descriptive modeling is in the course of development. A running demonstrator

system exists that effectively replicates the reasoning underlying the explanation

of a limited range of empirical phenomena in HCI research (see Barnard,

Wilson, & MacLean, 1987, 1988). What actually happens is that the expert

system elicits, in a context-sensitive manner, descriptions of the envisaged

interface, its users, and the tasks that interface is intended to support. It then

effectively “reasons about” cognitive activity, its properties, and attributes in that

applications setting for one or more phases of activity and one or more stages of

learning. Once the models have stabilized, it then outputs a characterization of

the probable properties of user behavior. In order to achieve this, the expert

system has to have three classes of rules: those that map from descriptions of

tasks, users, and systems to entities and properties in the model representation;

rules that operate on those properties; and rules that map from the model

representation to characterizations of behavior. Even in its somewhat primitive

current state, the demonstrator system has interesting generalizing properties.

For example, theoretical principles derived from research on rather antiquated

command languages support limited generalization to direct manipulation and

iconic interfaces.
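The three classes of rules can be given a schematic rendering. Everything in the following Python fragment (attribute names, thresholds, and output phrasing) is invented for illustration; the actual demonstrator (Barnard, Wilson, & MacLean, 1987, 1988) encodes far richer theoretical content.

def rules_in(description):
    """Class 1: map task/user/system descriptions onto model entities."""
    model = {"unproceduralized_processes": 0}
    if description.get("novel_command_names"):
        model["unproceduralized_processes"] += 1
    if description.get("user_is_novice"):
        model["unproceduralized_processes"] += 2
    return model

def rules_model(model):
    """Class 2: operate on properties of the model representation."""
    model["control_complexity"] = (
        "high" if model["unproceduralized_processes"] >= 2 else "low")
    return model

def rules_out(model):
    """Class 3: map the model onto a characterization of behavior."""
    if model["control_complexity"] == "high":
        return "slow, error-prone performance; heavy reliance on records"
    return "fluent performance with occasional slips"

description = {"novel_command_names": True, "user_is_novice": True}
print(rules_out(rules_model(rules_in(description))))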

As an applications representation, the expert system concept is very different

from programmable user models. Like PUMs, the actual tool embodies explicit

theory drawn from the science base. Likewise, the underlying architectural

concept enables a relatively broad range of issues to be addressed. Unlike

PUMs, it more directly addresses a fuller range of resources across perceptual,

cognitive, and effector concerns. It also applies a different trade-off in when and

by whom the modeling knowledge is specified. At the point of creation, the

expert system must contain a complete set of rules for mapping between the

world and the model. In this respect, the means of accomplishing and

expressing the characterizations of cognition and behavior must be fully and

comprehensively encoded. This does not mean that the expert system must

necessarily “know” each and every detail. Rather, within some defined scope,

the complete chain of assumptions from artifact to theory and from theory to

behavior must be made explicit at an appropriate level of approximation.

Equally, the input and output rules must obviously be grounded in the language

of interface description and user-system interaction. Although some of the

assumptions may be heuristic, and many of them may need crafting, both

theoretical and craft components are there. The how-to-do-it modeling

knowledge is laid out for inspection.

However, at the point of use, the expert system requires considerably less

precision than PUMs in the specification and operationalization of the knowledge

required to use the application being considered. The expert system can build a

family of models very quickly and without its user necessarily acquiring any

great level of expertise in the underlying cognitive theory. In this way, it is

possible for that user to explore models for alternative system designs over the

course of something like one afternoon. Because the system is modular, and the

models are specified in abstract terms, it is possible in principle to tailor the

system’s input and output rules without modifying the core theoretical reasoning.

The development of the tool could then respond to requirements that might

emerge from empirical studies of the real needs of design teams or of particular

application domains.

In a more fully developed form, it might be possible to address the issue of

which type of tool might prove more effective in what types of applications

context. However, strictly speaking, they are not direct competitors; they are

alternative types of application representation that make different forms of trade-off

about the characteristics of the complete chain of bridging from theory to

application. By contrast with the kinds of theory-based techniques relied on in

the first life cycle of HCI research, both PUMs and the expert-system concept

represent more elaborate bridging structures. Although underdeveloped, both

approaches are intended ultimately to deliver richer and more integrated

information about properties of human cognition into the design environment in

forms in which it can be digested and used. Both PUMs and the expert system

represent ways in which theoretical support might be usefully embodied in future

generations of tools for supporting design. In both cases the aim is to deliver

within the lifetime of the next cycle of research a qualitative understanding of

what might be going on in a user’s head rather than a purely quantitative estimate

of how long the average head is going to be busy (see also Lewis, this volume).

Summary

The general theme that has been pursued in this chapter is that the relationship

between the real world and theoretical representations of it is always mediated by

bridging representations that subserve specific purposes. In the first life cycle of

research on HCI, the bridging representations were not only simple, they were

only a single step away from those used in the parent disciplines for the

development of basic theory and its validation. If cognitive theory is to find any

kind of coherent and effective role in forthcoming life cycles of HCI research, it

must seriously reexamine the nature and function of these bridging

representations as well as the content of the science base itself.

This chapter has considered bridging between specifically cognitive theory

and behavior in human-computer interaction. This form of bridging is but one

among many that need to be pursued. For example, there is a need to develop

bridging representations that will enable us to interrelate models of user cognition

with the formal models being developed to support design by software

engineers (e.g., Dix, Harrison, Runciman, & Thimbleby, 1987; Harrison,

Roast, & Wright, 1989; Thimbleby, 1985). Similarly there is a need to bridge

between cognitive models and aspects of the application and the situation of use

(e.g., Suchman, 1987). Truly interdisciplinary research formed a large part of

the promise, but little of the reality of early HCI research. Like the issue of

tackling nonideal user behavior, interdisciplinary bridging is now very much on

the agenda for the next phase of research (e.g., see Barnard & Harrison, 1989).

The ultimate impact of basic theory on design can only be indirect – through

an explicit application representation. Alternative forms of such representation

that go well beyond what has been achieved to date have to be invented,

developed, and evaluated. The views of Carroll and his colleagues form one

concrete proposal for enhancing our application representations. The design

rationale concept being developed by MacLean, Young, and Moran (1989)

constitutes another potential vehicle for expressing application representations.

Yet other proposals seek to capture qualitative aspects of human cognition while

retaining a strong theoretical character (Barnard et al., 1987; 1988; Young,

Green, & Simon, 1989).

On the view advocated here, the direct theory-based product of an applied

science paradigm operating in HCI is not an interface design. It is an application

representation capable of providing principled support for reasoning about

designs. There may indeed be very few examples of theoretically inspired

software products in the current commercial marketplace. However, the first life

cycle of HCI research has produced a far more mature view of what is entailed in

the development of bridging representations that might effectively support design

reasoning. In subsequent cycles, we may well be able to look forward to a

significant shift in the balance of added value within the interaction between

applied science and design. Although future progress will in all probability

remain less than rapid, theoretically grounded concepts may yet deliver rather

more in the way of principled support for design than has been achieved to date.

Acknowledgments

The participants at the Kittle Inn workshop contributed greatly to my

understanding of the issues raised here. I am particularly indebted to Jack

Carroll, Wendy Kellogg, and John Long, who commented extensively on an

earlier draft. Much of the thinking also benefited substantially from my

involvement with the multidisciplinary AMODEUS project, ESPRIT Basic

Research Action 3066.

References

ACTS (1989). Connectionist Techniques for Speech (ESPRIT Basic Research

Action 3207), Technical Annex. Brussels: CEC.


AMODEUS (1989). Assimilating models of designers, users and systems (ESPRIT

Basic Research Action 3066), Technical Annex. Brussels: CEC.

Anderson, J. R., & Skwarecki, E. (1986). The automated tutoring of

introductory computer programming. Communications of the ACM, 29,

842-849.

Barnard, P. J. (1985). Interacting cognitive subsystems: A psycholinguistic

approach to short term memory. In A. Ellis (Ed.), Progress in the

psychology of language (Vol. 2, chapter 6, pp. 197-258). London:

Lawrence Erlbaum Associates.

Barnard, P. J. (1987). Cognitive resources and the learning of human-computer

dialogs. In J.M. Carroll (Ed.), Interfacing thought: Cognitive aspects of

human-computer interaction (pp. 112-158). Cambridge, MA: MIT Press.

Barnard, P. J., & Harrison, M. D. (1989). Integrating cognitive and system

models in human-computer interaction. In A. Sutcliffe & L. Macaulay

(Eds.), People and computers V (pp. 87-103). Cambridge: Cambridge

University Press.

Barnard, P. J., Ellis, J., & MacLean, A. (1989). Relating ideal and non-ideal

verbalised knowledge to performance. In A. Sutcliffe & L. Macaulay

(Eds.), People and computers V (pp. 461-473). Cambridge: Cambridge

University Press.

Barnard, P. J., Grudin, J., & MacLean, A. (1989). Developing a science base

for the naming of computer commands. In J. B. Long & A. Whitefield

(Eds.), Cognitive ergonomics and human-computer interaction (pp. 95-

133). Cambridge: Cambridge University Press.

Barnard, P. J., Hammond, N., MacLean, A., & Morton, J. (1982). Learning

and remembering interactive commands in a text-editing task. Behaviour

and Information Technology, 1, 347-358.

Barnard, P. J., MacLean, A., & Hammond, N. V. (1984). User representations

of ordered sequences of command operations. In B. Shackel (Ed.),

Proceedings of Interact ’84: First IFIP Conference on Human-Computer

Interaction, (Vol. 1, pp. 434-438). London: IEE.

Barnard, P. J., & Teasdale, J. (1991). Interacting cognitive subsystems: A

systematic approach to cognitive-affective interaction and change.

Cognition and Emotion, 5, 1-39.

Barnard, P. J., Wilson, M., & MacLean, A. (1986). The elicitation of system

knowledge by picture probes. In M. Mantei & P. Orbeton (Eds.),

Proceedings of CHI ’86: Human Factors in Computing Systems (pp.

235-240). New York: ACM.

Barnard, P. J., Wilson, M., & MacLean, A. (1987). Approximate modelling of

cognitive activity: Towards an expert system design aid. In J. M. Carroll

& P. P. Tanner (Eds.), Proceedings of CHI + GI ’87: Human Factors in

Computing Systems and Graphics Interface (pp. 21-26). New York:

ACM.

Barnard, P. J., Wilson, M., & MacLean, A. (1988). Approximate modelling of

cognitive activity with an Expert system: A theory based strategy for


developing an interactive design tool. The Computer Journal, 31, 445-

456.

Bartlett, F. C. (1932). Remembering: A study in experimental and social

psychology. Cambridge: Cambridge University Press.

Broadbent, D. E. (1958). Perception and communication. London: Pergamon

Press.

Card, S. K., & Henderson, D. A. (1987). A multiple virtual-workspace

interface to support user task-switching. In J. M. Carroll & P. P. Tanner

(Eds.), Proceedings of CHI + GI ’87: Human Factors in Computing

Systems and Graphics Interface (pp. 53-59). New York: ACM.

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer

interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Carroll, J. M. (1985). What’s in a name? New York: Freeman.

Carroll, J. M. (1989a). Taking artifacts seriously. In S. Maas & H. Oberquelle

(Eds.), Software-Ergonomie ’89 (pp. 36-50). Stuttgart: Teubner.

Carroll, J. M. (1989b). Evaluation, description and invention: Paradigms for

human-computer interaction. In M. C. Yovits (Ed.), Advances in

computers (Vol. 29, pp. 44-77). London: Academic Press.

Carroll, J. M. (1990). Infinite detail and emulation in an ontologically

minimized HCI. In J. Chew & J. Whiteside (Eds.), Proceedings of CHI

’90: Human Factors in Computing Systems (pp. 321-327). New York:

ACM.

Carroll, J. M., & Campbell, R. L. (1986). Softening up hard science: Reply to

Newell and Card. Human-Computer Interaction, 2, 227-249.

Carroll, J. M., & Campbell, R. L. (1989). Artifacts as psychological theories:

The case of human-computer interaction. Behaviour and Information

Technology, 8, 247-256.

Carroll, J. M., & Kellogg, W. A. (1989). Artifact as theory-nexus:

Hermeneutics meets theory-based design. In K. Bice & C. H. Lewis

(Eds.), Proceedings of CHI ’89: Human Factors in Computing Systems

(pp. 7-14). New York: ACM.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.

Dix, A. J., Harrison, M. D., Runciman, C., & Thimbleby, H. W. (1987).

Interaction models and the principled design of interactive systems. In

H. Nichols & D. S. Simpson (Eds.), European software engineering

conference (pp. 127-135). Berlin: Springer Lecture Notes.

Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data.

Psychological Review, 87, 215-251.

Grudin, J. T. (1990). The computer reaches out: The historical continuity of

interface design. In J. Chew & J. Whiteside (Eds.), Proceedings of CHI

’90: Human Factors in Computing Systems (pp. 261-268). New York:

ACM.

Grudin, J. T., & Barnard, P. J. (1984). The cognitive demands of learning

command names for text editing. Human Factors, 26, 407-422.

Hammond, N., & Allinson, L. (1988). Travels around a learning support

environment: rambling, orienteering or touring? In E. Soloway, D.


Frye, & S. B. Sheppard (Eds.), Proceedings of CHI ’88: Human

Factors in Computing Systems (pp. 269-273). New York: ACM.

Hammond, N. V., Long, J., Clark, I. A., Barnard, P. J., & Morton, J. (1980).

Documenting human-computer mismatch in interactive systems. In

Proceedings of the Ninth International Symposium on Human Factors in

Telecommunications (pp. 17-24). Red Bank, NJ.

Hanson, W. (1971). User engineering principles for interactive systems.

AFIPS Conference Proceedings, 39, 523-532.

Harrison, M. D., Roast, C. R., & Wright, P. C. (1989). Complementary

methods for the iterative design of interactive systems. In G. Salvendy

& M. J. Smith (Eds.), Proceedings of HCI International ’89 (pp. 651-

658). Boston: Elsevier Scientific.

Kieras, D. E., & Polson, P. G. (1985). An approach to formal analysis of user

complexity. International Journal of Man-Machine Studies, 22, 365-

394.

Laird, J.E., Newell, A., & Rosenbloom, P. S. (1987). SOAR: An architecture

for general intelligence. Artificial Intelligence, 33, 1-64.

Landauer, T. K. (1987). Relations between cognitive psychology and computer

systems design. In J. M. Carroll (Ed.), Interfacing thought: Cognitive

aspects of human-computer interaction (pp. 1-25). Cambridge, MA:

MIT Press.

Lewis, C. H. (1988). Why and how to learn why: Analysis-based

generalization of procedures. Cognitive Science, 12, 211-256.

Long, J. B. (1987). Cognitive ergonomics and human-computer interaction. In

P. Warr (Ed.), Psychology at Work (3rd ed.). Harmondsworth,

Middlesex: Penguin.

Long, J. B. (1989). Cognitive ergonomics and human-computer interaction: An

introduction. In J. B. Long & A. Whitefield (Eds.), Cognitive

ergonomics and human-computer interaction (pp. 4-34). Cambridge:

Cambridge University Press.

Long, J. B., & Dowell, J. (1989). Conceptions of the discipline of HCI: Craft,

applied science and engineering. In A. Sutcliffe & L. Macaulay (Eds.),

People and computers V (pp. 9-32). Cambridge: Cambridge University

Press.

MacLean, A., Barnard, P., & Wilson, M. (1985). Evaluating the human

interface of a data entry system: User choice and performance measures

yield different trade-off functions. In P. Johnson & S. Cook (Eds.),

People and computers: Designing the interface (pp. 172-185).

Cambridge: Cambridge University Press.

MacLean, A., Young, R. M., & Moran, T. P. (1989). Design rationale: The

argument behind the artefact. In K. Bice & C.H. Lewis (Eds.),

Proceedings of CHI ’89: Human Factors in Computing Systems (pp.

247-252). New York: ACM.

Mack, R., Lewis, C., & Carroll, J.M. (1983). Learning to use word

processors: Problems and prospects. ACM Transactions on Office

Information Systems, 1, 254-271.


Morton, J., Marcus, S., & Frankish, C. (1976). Perceptual centres: P-centres.

Psychological Review, 83, 405-408.

Newell, A. (1989). Unified Theories of Cognition: The 1987 William James

Lectures. Cambridge, MA: Harvard University Press.

Newell, A., & Card, S. K. (1985). The prospects for psychological science in

human computer interaction. Human-Computer Interaction, 1, 209-242.

Newell, A., & Simon, H. A. (1972). Human Problem Solving. Englewood

Cliffs, NJ: Prentice-Hall.

Norman, D. A. (1983). Design principles for human-computer interaction. In

Proceedings of CHI ’83: Human Factors in Computing Systems (pp. 1-

10). New York: ACM.

Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W.

Draper (Eds.), User centered system design: New perspectives on

human-computer interaction (pp. 31-61). Hillsdale, NJ: Lawrence

Erlbaum Associates.

Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modelling since

GOMS. Human-Computer Interaction, 5, 221-265.

Patterson, R. D. (1983). Guidelines for auditory warnings on civil aircraft: A

summary and prototype. In G. Rossi (Ed.), Noise as a Public Health

Problem (Vol. 2, pp. 1125-1133). Milan: Centro Ricerche e Studi

Amplifon.

Patterson, R. D., Cosgrove, P., Milroy, R., & Lower, M.C. (1989). Auditory

warnings for the British Rail inductive loop warning system. In

Proceedings of the Institute of Acoustics, Spring Conference (Vol. 11,

pp. 5-51-58). Edinburgh: Institute of Acoustics.

Patterson, R. D., Edworthy, J., Shailer, M.J., Lower, M.C., & Wheeler, P. D.

(1986). Alarm sounds for medical equipment in intensive care areas and

operating theatres. Institute of Sound and Vibration (Research Report AC

598).

Payne, S., & Green, T. (1986). Task action grammars: A model of the mental

representation of task languages. Human-Computer Interaction, 2, 93-

133.

Polson, P. (1987). A quantitative theory of human-computer interaction. In

J. M. Carroll (Ed.), Interfacing thought: Cognitive aspects of human-computer

interaction (pp. 184-235). Cambridge, MA: MIT Press.

Reisner, P. (1982). Further developments towards using formal grammar as a

design tool. In Proceedings of Human Factors in Computer Systems

Gaithersburg (pp. 304-308). New York: ACM.

Scapin, D. L. (1981). Computer commands in restricted natural language: Some

aspects of memory and experience. Human Factors, 23, 365-375.

Simon, T. (1988). Analysing the scope of cognitive models in human-computer

interaction. In D. M. Jones & R. Winder (Eds.), People and computers

IV (pp. 79-93). Cambridge: Cambridge University Press.

Suchman, L. (1987). Plans and situated actions: The problem of human-machine

communication. Cambridge: Cambridge University Press.


Thimbleby, H. W. (1985). Generative user-engineering principles for user

interface design. In B. Shackel (Ed.), Human computer interaction:

Interact ’84 (pp. 661-665). Amsterdam: North-Holland.

Whiteside, J., & Wixon, D. (1987). Improving human-computer interaction: A

quest for cognitive science. In J. M. Carroll (Ed.), Interfacing thought:

Cognitive aspects of human-computer interaction (pp. 353-365).

Cambridge, MA: MIT Press.

Wilson, M., Barnard, P. J., Green, T. R. G., & MacLean, A. (1988).

Knowledge-based task analysis for human-computer systems. In G. van

der Veer, J.-M. Hoc, T. R. G. Green, & D. Murray (Eds.), Working with

computers (pp. 47-87). London: Academic Press.

Young, R. M., & Barnard, P. J. (1987). The use of scenarios in human-computer

interaction research: Turbocharging the tortoise of cumulative

science. In J. M. Carroll & P. P. Tanner (Eds.), Proceedings of CHI +

GI ’87: Human Factors in Computing Systems and Graphics Interface

(Toronto, April 5-9) (pp. 291-296). New York: ACM.

Young, R. M., Barnard, P.J., Simon, A., & Whittington, J. (1989). How

would your favourite user model cope with these scenarios? SIGCHI

Bulletin, 20( 4), 51-55.

Young, R. M., Green, T. R. G., & Simon, T. (1989). Programmable user

models for predictive evaluation of interface designs. In K. Bice and

C. H. Lewis (Eds.), Proceedings of CHI ’89: Human Factors in Computing

Systems (pp. 15-19). New York: ACM.

Young, R.M., & MacLean, A. (1988). Choosing between methods: Analysing

the user’s decision space in terms of schemas and linear models. In E.

Soloway, D. Frye, & S. B. Sheppard (Eds.), Proceedings of CHI ’88:

Human Factors in Computing Systems (pp. 139-143). New York:

ACM.


Engineering Framework Illustration: Newman (2002) – Requirements

Requirements

William Newman

October 21, 2002

 

Copyright © 2002, William Newman

 

Software engineering

Comment 1

Software engineering here, as it includes User Requirements as part of its scope, is to be assumed to include HCI and certainly for the purposes in hand.

is unique in many ways as a design practice, not least for its concern with methods for analysing and specifying requirements.

Comment 2

Methods here constitute (HCI) design knowledge and support (HCI) design practice. See also Comments 8 and 9.

In other engineering design disciplines,

Comment 3

Software engineering here (and so HCI,as viewed by some researchers) is considered to be an engineering design discipline.

the derivation of requirements is considered a routine matter; to the authors of engineering textbooks it is too straightforward and obvious to get even a mention. In the software world, however, things are different. Failure to sort out requirements is common, often the cause of costly over-runs. Methods for analysing and specifying requirements are always in demand.

Comment 4

See Comment 2.

In subsequent notes I will offer my own explanation for this peculiar concern with requirements. In the meantime, I want to try to explain what requirements really are, and how to deal with them.

What are requirements?

Requirements specify what a designed artefact must do. They are sometimes expressed in the future imperative tense, e.g., “The system shall provide a means of periodic backup of all files.” This is an example of a functional requirement, as distinct from a non-functional requirement that states quantitative and/or environmental criteria that the design must meet, e.g., “The phone shall weigh no more than 100 grams.” The arcane future imperative style is usually abandoned in favour of something more familiar: “The system should provide…” or “The phone must weigh…” A complete set of such statements is usually called a requirements specification.
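One way of making the distinction concrete is to treat each requirement as a testable statement. The Python sketch below is a minimal illustration rather than a method proposed in the text: the Requirement class and its fields are invented, although the two example statements are taken from above.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Requirement:
    text: str
    functional: bool                  # functional vs. non-functional
    check: Callable[[dict], bool]     # the basis for testing the design

spec = [
    Requirement("The system shall provide a means of periodic backup of all files.",
                functional=True,
                check=lambda d: d.get("backup_period_hours") is not None),
    Requirement("The phone shall weigh no more than 100 grams.",
                functional=False,
                check=lambda d: d.get("weight_grams", float("inf")) <= 100),
]

# An invented design description, checked against the specification.
design = {"backup_period_hours": 24, "weight_grams": 104}
for req in spec:
    print("PASS" if req.check(design) else "FAIL", "-", req.text)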

 

In the life-cycle of an artefact,

Comment 5

The designed artefact here is the product of design knowledge, such as methods, supporting design practice. (See also Comments 2, 4, 8, 9, and 10).

requirements define what capabilities or services are to be provided. Notwithstanding the mystique that has been constructed around them in recent years, requirements are as fundamental to creative work as design itself. Behind every design activity, however small, there are always requirements, either tacit or explicit. For the designer these requirements serve two basic purposes. First, they translate some external need into a requirement that can be met through design. Second, they offer a basis for testing the design as it takes shape. If the design is found to meet the requirements it may be assumed to address the need.

Comment 6

Further details are provided here concerning the general nature of design and its different aspects. Testing is obviously an important one. See also Comments 7 and 8.

 

A basic model

This basic model underlies the specification and use of requirements in every software project. Diagrammatically it can be presented as shown in Figure 1. The stages of transformation of needs into a system implementation are shown progressing from left to right. However, the progression is never a straight sequence, but rather is made up of numerous iterations, often out of sequence. Changes in one representation (e.g., in the design) can lead to changes in others (e.g., in requirements or in the implementation) and these must be tested for consistency with the source of the change.


Figure 1. The model linking needs through requirements to the design and its implementation.

Does it all start with needs?

Ideally the process of system development should start with an expression of needs, or of some equivalent situation of concern (Checkland 1990); it should then proceed to the identification of requirements, and so on. In a technology-driven world, however, the inspiration for a new system can often arise from a technological advance. The technology is linked up with a putative need, probably very loosely specified, and a process commences of refining both needs and design, and of filling in requirements. During the last ten years the World Wide Web has had a similar inspiring effect on many designers. Recent advances – cameras in mobile phones, self-tracking pens, etc. – are likely to do the same, but perhaps on a smaller scale.

 

Where needs exist, and a technology can be found that appears to address them, a similar process of gradual “requirements infill” may take place. A celebrated instance was the genesis in 1954 of American Airlines’ Sabre reservation system, during a conversation between C. R. Smith, American Airlines CEO, and a senior IBM salesman, Blair Smith, on a flight from New York to Los Angeles (Goff 1999). The first had a need to improve the efficiency of reservations, while the second was able to offer an idea for a design based on computing and communications technology. Completing the stages of the process took eight years; some of the system’s details and rationale have been described by Desmonde (1964).

Testing against requirements

I mentioned a second purpose of requirements, in testing the design. This is an essential part of tracking design progress and accepting the final implementation. In most domains of engineering – aeronautical, civil, mechanical, etc. – requirements play a dominant role as a basis for tests. In software design this role is less visible: testing is sometimes carried out against generic requirements such as usability criteria.

 

In cases where quantitative requirements have been specified, engineering and software design may adopt a common approach to empirical testing. For example, if a requirement exists that an operating system must boot up in under a minute, or that errors in text recognition should not exceed 1%, then the software is implemented and is put through a test in which the relevant measures are taken. If the software falls short, a further design iteration is undertaken.

Comment 7

Empirical testing is a critical component of the HCI contribution to design and comprises a host of different methods. See also Comments 2, 4 and 8.
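A minimal sketch of such an implement-and-test check, written here as a pytest-style acceptance test; the start_system and is_ready hooks are hypothetical stand-ins for whatever instrumentation a real project would use:

    import time

    BOOT_TIME_LIMIT_SECONDS = 60.0  # the quantitative requirement under test

    def measure_boot_seconds(start_system, is_ready):
        # Start the system, poll until it reports ready, and return
        # the elapsed wall-clock time.
        t0 = time.monotonic()
        start_system()
        while not is_ready():
            time.sleep(0.1)
        return time.monotonic() - t0

    def test_boot_time_meets_requirement(start_system, is_ready):
        # If this assertion fails, a further design iteration is undertaken.
        assert measure_boot_seconds(start_system, is_ready) <= BOOT_TIME_LIMIT_SECONDS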

This approach is not very popular with engineering designers because of the high cost and delay involved in implementing and testing a design.

Comment 8

Implement and test are two of the most important HCI design processes. See also Comments 2, 4, and 7.

The cost of testing components of spacecraft, for example, is a significant proportion of overall development costs: building a testbed may cost more than prototyping the component (Pinkus 1997). Research engineers therefore develop analytical models capable of predicting the performance of designs while on the drawing board or in the CAD system. In software development, however, such models are relatively scarce, especially where user-level requirements are the issue. Equally scarce, for that matter, are quantitative requirements. So software testing is usually carried out empirically.

Comment 9

Analytical and empirical are the two major classes of HCI design methods and so practices. Analytical models, as here, would constitute HCI declarative (or substantive) knowledge (as opposed to methodological knowledge – see Comments 2, 4, 6, and 8).

Requirements are necessary and sufficient

The distinction between requirements and design specifications is clear: requirements state what the system must do, designs describe what is to be built. However, this distinction can easily become blurred when requirements for user interfaces are being developed. We might, for example, find ourselves drawn into specifying precise requirements for the functions in the menu of a Windows-based tool:

 

The system should provide, under the File heading, functions for creating, opening, closing, printing and print-previewing images, and for exiting from the system.

 

The system should provide, under the Edit heading, functions for undoing and repeating actions, for …

 

Or we might simply specify the list of functions to be provided: create, open, close, print, etc.; or just state the requirement that the system should support “the standard range of File and Edit functions.”

 

The ground-rule in specifying requirements is that they should be sufficient to ensure that the needs are met, but should constrain the design only as necessary. Obviously we don’t want to leave open the possibility that the system will fail to meet the needs. Less obviously, we should not over-constrain the designers, for we might then prevent them from using a particularly efficient or reliable design that we had overlooked. The first version of our File and Edit requirements could be considered over-constraining, for quite a lot of design expertise goes into choosing the layout and wording of these menus. The third version is insufficient, for it allows the designers to leave out functions that may be essential to users. The middle way – the list of functions to be provided – is probably the best option.

Knowing what’s technically feasible

One other danger is that the designers will be unable to meet the specified requirements. This is one of the major reasons why iteration is needed during requirements specification.

Comment 10

Iteration of implementation and test methods constitutes part of the majority of HCI design practices and so design cycles. See also Comments 2, 4, 6, and 8.

Suppose we specify the requirement that text recognition errors should not exceed 1 percent. The customer agrees the specification. When the system is implemented and tested, we learn that the error rate is 8 percent, and are faced with a serious problem. Here, even if the customer’s need was for a 1 percent error rate, we should have checked the feasibility of this before specifying it.

Comment 11

Errors, as here, along with time, are primary criteria for interactive system performance and its testing.
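The feasibility check the example calls for can be run on a prototype before the 1 percent figure is agreed with the customer. A minimal sketch, assuming a hypothetical recognise() function and a small ground-truth sample set:

    ERROR_RATE_LIMIT = 0.01  # the candidate requirement: at most 1% errors

    def character_error_rate(recognise, samples):
        # samples: (image, expected_text) pairs; outputs are assumed to be
        # the same length as the expected text, so errors can be counted
        # character by character (a real study would align strings properly).
        errors = total = 0
        for image, expected in samples:
            produced = recognise(image)
            errors += sum(1 for p, e in zip(produced, expected) if p != e)
            total += len(expected)
        return errors / total

    def requirement_is_feasible(recognise, samples):
        # A measured rate of 8% against a 1% limit flags the problem
        # before the figure is written into the specification.
        return character_error_rate(recognise, samples) <= ERROR_RATE_LIMIT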

 

Technical advances often make possible corresponding improvements in the requirements we can offer customers. One such advance, achieved at Xerox PARC in the mid-1970s, was the discovery of a way to implement fast global reformatting of very long documents in a WYSIWYG text editor. Until then, the users of such editors knew that changing the margin settings or font size of a long document could result in minutes of thrashing while the position of every line break was recalculated. Butler Lampson and J Moore realised that only the text on the screen needed to be recalculated at the time, and they devised a ‘piece table’ scheme that allowed recalculation of other parts of the document to be deferred until they were displayed or written out to file (Hiltzik 1999). This permitted the requirement for speed of response to such reformatting commands to be improved from minutes to seconds.

Comment 12

See also Comment 11, concerning errors and time.
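The piece-table idea above lends itself to a compact sketch. What follows is a generic reconstruction of the technique, not the PARC implementation: the document is a sequence of ‘pieces’, each pointing into either an immutable original buffer or an append-only buffer of added text, so an edit splices pieces instead of rewriting the stored document, and expensive recomputation over unaffected text can be deferred.

    from dataclasses import dataclass

    @dataclass
    class Piece:
        source: str  # "orig" or "add"
        start: int   # offset into that buffer
        length: int

    class PieceTable:
        def __init__(self, original):
            self.orig = original  # immutable original text
            self.add = ""         # append-only buffer for insertions
            self.pieces = [Piece("orig", 0, len(original))]

        def text(self):
            # Reassemble the document by walking the pieces in order.
            bufs = {"orig": self.orig, "add": self.add}
            return "".join(bufs[p.source][p.start:p.start + p.length]
                           for p in self.pieces)

        def insert(self, pos, s):
            # Record the new text in the add buffer, then splice a piece in,
            # splitting the piece the insertion point falls inside.
            new = Piece("add", len(self.add), len(s))
            self.add += s
            offset = 0
            for i, p in enumerate(self.pieces):
                if offset <= pos <= offset + p.length:
                    left = Piece(p.source, p.start, pos - offset)
                    right = Piece(p.source, p.start + left.length,
                                  p.length - left.length)
                    self.pieces[i:i + 1] = [q for q in (left, new, right)
                                            if q.length > 0]
                    return
                offset += p.length
            self.pieces.append(new)  # insertion at the very end

For instance, PieceTable('hello world') followed by insert(5, ',') yields 'hello, world' from text(), with the original buffer untouched; only the pieces are rearranged.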

 

Conclusion: How are needs identified?

This brief discussion of requirements has referred several times to the relationship between requirements and needs. In many respects this relationship mirrors that between designs and requirements. However, techniques for establishing needs are very different from those employed in other parts of the process. I will cover these techniques, and how they relate to the process as a whole, in my next set of notes.

References

Checkland P. and Scholes J. (1990) Soft Systems Methodology in Action. Chichester: John Wiley.

Goff L. (1999) “1960: Sabre takes off.” See: http://www.cnn.com/TECH/computing/9906/29/1960.idg/

Hiltzik M. A. (1999) Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. New York: HarperCollins.

Pinkus R. L. B., Shuman L. J., Hummon N. P. and Wolfe H. (1997) Engineering Ethics: Balancing Cost, Schedule and Risk – Lessons Learned from the Space Shuttle. Cambridge: Cambridge University Press.

 



Long and Dowell (1989)

Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering

John Long and John Dowell

Ergonomics Unit, University College London, 26 Bedford Way, London. WC1H 0AP.

The theme of HCI ’89 is ‘the theory and practice of HCI’. In providing a general introduction to the Conference, this paper develops the theme within a characterisation of alternative conceptions of the discipline of Human-Computer Interaction (HCI). First, consideration of disciplines in general suggests their complete definition can be summarised as: ‘knowledge, practices and a general problem having a particular scope, where knowledge supports practices seeking solutions to the general problem’. Second, the scope of the general problem of HCI is defined by reference to humans, computers, and the work they perform. Third, by intersecting these two definitions, a framework is proposed within which different conceptions of the HCI discipline may be established, ordered, and related. The framework expresses the essential characteristics of the HCI discipline, and can be summarised as: ‘the use of HCI knowledge to support practices seeking solutions to the general problem of HCI’. Fourth, three alternative conceptions of the discipline of HCI are identified. They are HCI as a craft discipline, as an applied scientific discipline, and as an engineering discipline. Each conception is considered in terms of its view of the general problem, the practices seeking solutions to the problem, and the knowledge supporting those practices; examples are provided. Finally, the alternative conceptions are reviewed, and the effectiveness of the discipline which each offers is comparatively assessed. The relationships between the conceptions in establishing a more effective discipline are indicated.

Published in: People and Computers V. Sutcliffe A. and Macaulay L. (eds.). Cambridge University Press, Cambridge. Proceedings of the Fifth Conference of the BCS HCI SIG, Nottingham 5-8 September 1989.
Contents

1. Introduction

1.1. Alternative Interpretations of the Theme
1.2. Alternative Conceptions of HCI: the Requirement for a Framework
1.3. Aims

2. A Framework for Conceptions of the HCI Discipline

2.1. On the Nature of Disciplines
2.2. Of Humans Interacting with Computers
2.3. The Framework for Conceptions of the HCI Discipline

3. Three Conceptions of the Discipline of HCI

3.1. Conception of HCI as a Craft Discipline
3.2. Conception of HCI as an Applied Science Discipline
3.3. Conception of HCI as an Engineering Discipline

4. Summary and Conclusions

1. Introduction

HCI ’89 is the fifth conference in the ‘People and Computers’ series organised by the British Computer Society’s HCI Specialist Group. The main theme of HCI ’89 is ‘the theory and practice of HCI’. The significance of the theme derives from the questions it prompts and from the Conference aims arising from it. For example, what is HCI? What is HCI practice? What theory supports HCI practice? How well does HCI theory support HCI practice? Addressing such questions develops the Conference theme and so advances the Conference goals.

1.1. Alternative Interpretations of the Theme

Any attempt to address these questions, however, admits no singular answer. For example, some would claim HCI as a science, others as engineering. Some would claim HCI practice as ‘trial and error’, others as ‘specify and implement’. Some would claim HCI theory as explanatory laws, others as design principles. Some would claim HCI theory as directly supporting HCI practice, others as indirectly providing support. Some would claim HCI theory as effectively supporting HCI practice, whilst others may claim such support as non-existent. Clearly then, there will be many possible interpretations of the theme ‘the theory and practice of HCI’. Answers to some of the questions prompted by the theme will be related. Different answers to the same question may be mutually exclusive; for example, types of practice as ‘trial and error’ or ‘specify and implement’ will likely be mutually exclusive. Answers to different questions may also be mutually exclusive; for example, HCI as engineering would likely exclude HCI theory as explanatory laws, and HCI practice as ‘trial and error’. And moreover, answers to some questions may constrain the answers to other questions; for example, types of HCI theory, perhaps design principles, may constrain the type of practice, perhaps as ‘specify and implement’.

1.2. Alternative Conceptions of HCI: the Requirement for a Framework

It follows that we must admit the possibility of alternative, and equally legitimate, conceptions of the HCI discipline – and therein, of its theory and practice. A conception of the HCI discipline offers a unitary view; its value lies in the coherence and completeness with which it enables understanding of the discipline, how the discipline operates, and its effectiveness. So for example, a conception of HCI might be either of a scientific or of an engineering discipline; its view of the theory and practice of the discipline would be different in the two cases. Its view of how the discipline might operate, and its expectations for the effectiveness of the discipline, would also be different in the two cases. This paper identifies alternative conceptions of HCI, and attempts a comparative assessment of the (potential) effectiveness of the discipline which each views. The requirement for identifying the different conceptions is both prompted and required by the development of the Conference theme.

To advance alternative conceptions of HCI, however, it is necessary first to formulate some form of analytic structure to ensure that conceptions supposed as alternatives are both complete and of the same subject, rather than being conceptions of complementary, or simply different, subjects. A suitable structure for this purpose would be a framework identifying the essential characteristics of the HCI discipline. By such a framework, instances of conceptions of the HCI discipline – claimed to be substantively different, but equivalent – might be established, ordered, and related. And hence, so might their views of its theories and practices. The aims of this paper follow from the need to identify alternative conceptions of HCI as a discipline.

The aims are described in the next section.

1.3. Aims

To address and develop the Conference theme of ‘the theory and practice of HCI’ – and so to advance the goals of HCI ’89 – the aims of this paper are as follows:

(i) to propose a framework for conceptions of the HCI discipline

(ii) to identify and exemplify alternative conceptions of the HCI discipline in terms of the framework

(iii) to evaluate the effectiveness of the discipline as viewed by each of the conceptions, and to indicate the possible relationships between the conceptions in establishing a more effective discipline.

2. A Framework for Conceptions of the HCI Discipline

Two prerequisites of a framework for conceptions of the HCI discipline are assumed. The first is a definition of disciplines appropriate for the expression of HCI. The second is a definition of the province of concern of the HCI discipline which, whilst broad enough to include all disparate aspects, enables the discipline’s boundaries to be identified. Each of these prerequisites will be addressed in turn (Sections 2.1. and 2.2.). From them is derived a framework for conceptions of the HCI discipline (Section 2.3.). Source material for the framework is to be found in (Dowell & Long [1988]; Dowell & Long [manuscript submitted for publication]; and Long [1989]).

2.1. On the Nature of Disciplines

Most definitions assume three primary characteristics of disciplines: knowledge; practice; and a general problem.

All definitions of disciplines make reference to discipline knowledge as the product of research or more generally of a field of study. Knowledge can be public (ultimately formal) or private (ultimately experiential). It may assume a number of forms; for example, it may be tacit, formal, experiential, codified – as in theories, laws and principles etc. It may also be maintained in a number of ways; for example, it may be expressed in journals, or learning systems, or it may only be embodied in procedures and tools. All disciplines would appear to have knowledge as a component (for example, scientific discipline knowledge, engineering discipline knowledge, medical discipline knowledge, etc). Knowledge, therefore, is a necessary characteristic of a discipline.

Consideration of different disciplines suggests that practice is also a necessary characteristic of a discipline. Further, a discipline’s knowledge is used by its practices to solve a general (discipline) problem. For example, the discipline of science includes the scientific practice addressing the general (scientific) problem of explanation and prediction. The discipline of engineering includes the engineering practice addressing the general (engineering) problem of design. The discipline of medicine includes the medical practice addressing the general (medical) problem of supporting health. Practice, therefore, and the general (discipline) problem which it uses knowledge to solve, are also necessary characteristics of a discipline.

Clearly, disciplines are here being distinguished by the general (discipline) problem they address. The scientific discipline addresses the general (scientific) problem of explanation and prediction, the engineering discipline addresses the general (engineering) problem of design, and so on. Yet consideration also suggests those general (discipline) problems each have the necessary property of a scope. Decomposition of a general (discipline) problem with regard to its scope exposes (subsumed) general problems of particular scopes.[1] This decomposition allows the further division of disciplines into sub-disciplines.

For example, the scientific discipline includes the disciplines of physics, biology, psychology, etc., each distinguished by the particular scope of the general problem it addresses. The discipline of psychology addresses a general (scientific) problem whose particular scope is the mental and physical behaviours of humans and animals. It attempts to explain and predict those behaviours. It is distinguished from the discipline of biology which addresses a general problem whose particular scope includes anatomy, physiology, etc. Similarly, the discipline of engineering includes the disciplines of civil, mechanical, electrical engineering, etc. Electrical engineering is distinguished by the particular scope of the general (engineering) problem it addresses, i.e., the scope of electrical artefacts. And similarly, the discipline of medicine includes the disciplines of dermatology, neurology etc., each distinguished by the particular scope of the general problem it addresses.

[1] Notwithstanding the so-called ‘hierarchy theory’, which assumes a phenomenon to occur at a particular level of complexity and to subsume others at a lower level (e.g., Pattee, 1973).

 

Figure 1. Definition of a Discipline

Two basic properties of disciplines are therefore concluded. One is the property of the scope of a general discipline problem. The other is the possibility of division of a discipline into sub-disciplines by decomposition of its general discipline problem.

Taken together, the three necessary characteristics of a discipline (and the two basic properties additionally concluded), suggest the definition of a discipline as: ‘the use of knowledge to support practices seeking solutions to a general problem having a particular scope’. It is represented schematically in Figure 1. This definition will be used subsequently to express HCI.

2.2. Of Humans Interacting with Computers

The second prerequisite of a framework for conceptions of the HCI discipline is a definition of the scope of the general problem addressed by the discipline. In delimiting the province of concern of the HCI discipline, such a definition might assure the completeness of any one conception (see Section 1.2.).

HCI concerns humans and computers interacting to perform work. It implicates: humans, both individually and in organisations; computers, both as programmable machines and functionally embedded devices within machines (stand alone or networked); and work performed by humans and computers within organisational contexts. It implicates both behaviours and structures of humans and computers. It implicates the interactions between humans and computers in performing both physical work (i.e., transforming energy) and abstract work (i.e., transforming information). Further, since both organisations and individuals have requirements for the effectiveness with which work is performed, also implicated is the optimisation of all aspects of the interactions supporting effectiveness.

Taken together, these implications suggest a definition of the scope of the general (discipline) problem of HCI. It is expressed, in summary, as ‘humans and computers interacting to perform work effectively’; it is represented schematically in Figure 2. This definition, in conjunction with the general definition of disciplines, will now enable expression of a framework for conceptions of the HCI discipline.
Figure 2. Definition of the Scope of the General Problem addressed by the discipline of HCI (humans and computers interacting to perform work effectively).

2.3. The Framework for Conceptions of the HCI Discipline

The possibility of alternative, and equally legitimate, conceptions of the discipline of HCI was earlier postulated. This section proposes a framework within which different conceptions may be established, ordered, and related.

Given the definition of its scope (above), and the preceding definition of disciplines, the general problem addressed by the discipline of HCI is asserted as: ‘the design of humans and computers interacting to perform work effectively’. It is a general (discipline) problem of design: its ultimate product is designs. The practices of the HCI discipline seek solutions to this general problem, for example: in the construction of computer hardware and software; in the selection and training of humans to use computers; in aspects of the management of work, etc. HCI discipline knowledge supports the practices that provide such solutions.

The general problem of HCI can be decomposed (with regard to its scope) into two general problems, each having a particular scope. Whilst subsumed within the general problem of HCI, these two general problems are expressed as: ‘the design of humans interacting with computers’; and ‘the design of computers interacting with humans’. Each problem can be associated with a different sub-discipline of HCI. Human Factors (HF), or Ergonomics, addresses the problem of designing the human as they interact with a computer. Software Engineering (SE) addresses the problem of designing the computer as it interacts with a human. With different – though complementary – aims, both sub-disciplines address the problem of designing humans and computers which interact to perform work effectively. However, the HF discipline concerns the physical and mental aspects of the human and is supported by HF discipline knowledge. The SE discipline concerns the physical and software aspects of the computer and is supported by SE discipline knowledge.

Hence, we may express a framework for conceptions of the discipline of HCI as: ‘the use of HCI knowledge to support practices seeking solutions to the general problem of HCI of designing humans and computers interacting to perform work effectively. HCI knowledge is constituted of HF knowledge and SE knowledge, respectively supporting HF practices and SE practices. Those practices respectively address the HF general problem of the design of humans interacting with computers, and the SE general problem of the design of computers interacting with humans’. The framework is represented schematically in Figure 3.

Importantly, the framework supposes the nature of effectiveness of the HCI discipline itself. There are two apparent components of this effectiveness. The first is the success with which its practices solve the general problem of designing humans and computers interacting to perform work effectively. It may be understood to be synonymous with ‘product quality’. The second component of effectiveness of the discipline is the resource costs incurred in solving the general problem to a given degree of success – costs incurred by both the acquisition and application of knowledge. It may be understood to be synonymous with ‘production costs’.

The framework will be used in Section 3 to establish, order, and relate alternative conceptions of HCI. It supports comparative assessment of the effectiveness of the discipline as supposed by each conception.
Figure 3. Framework for Conceptions of the Discipline of HCI
3. Three Conceptions of the Discipline of HCI

A review of the literature was undertaken to identify alternative conceptions of HCI, that is, conceptions of the use of knowledge to support practices solving the general problem of the design of humans and computers interacting to perform work effectively. The review identified three such conceptions. They are HCI as a craft discipline; as an applied scientific discipline; and as an engineering discipline. Each conception will be described and exemplified in terms of the framework.

3.1. Conception of HCI as a Craft Discipline

Craft disciplines solve the general problems they address by practices of implementation and evaluation. Their practices are supported by knowledge typically in the form of heuristics; heuristics are implicit (as in the procedures of good practice) and informal (as in the advice provided by one craftsperson to another). Craft knowledge is acquired by practice and example, and so is experiential; it is neither explicit nor formal. Conception of HCI as a craft discipline is represented schematically in Figure 4.

HCI as a craft discipline addresses the general problem of designing humans and computers interacting to perform work effectively. For example, Prestel uses Videotex technology to provide a public information service which also includes remote electronic shopping and banking facilities (Gilligan & Long [1984]). The practice of HCI to solve the general problem of Prestel interaction design is by implementation, evaluation and iteration (Buckley [1989]). For example, Videotex screen designers try out new solutions – for assigning colours to displays, for selecting formats to express user instructions, etc. Successful forms of interaction are integrated into accepted good practice – for example, clearly distinguishing references to domain ‘objects’ (goods on sale) from references to interface ‘objects’ (forms to order the goods) and so reducing user difficulties and errors. Screen designs successful in supporting interactions are copied by other designers. Unsuccessful interactions are excluded from subsequent implementations – for example, the repetition of large scale logos on all the screens (because the screens are written top-to-bottom and the interaction is slowed unacceptably).

HCI craft knowledge, supporting practice, is maintained by practice itself. For example, in the case of Videotex shopping, users often fail to cite on the order form the reference number of the goods they wish to purchase. A useful design heuristic is to try prompting users with the relevant information, for example, by reminding them on the screen displaying the goods that the associated reference number is required for ordering and should be noted. An alternative heuristic is to try re-labelling the reference number of the goods, for example to ‘ordering’ rather than reference number. Heuristics such as these are formulated and tried out on new implementations and are retained if associated with successful interactions.

To illustrate HCI as a craft discipline more completely, there follows a detailed example taken from a case history reporting the design of a text editor (Bornat & Thimbleby [1989]). Bornat and Thimbleby are computer scientists who, in the 1970s, designed a novel text display editor called ‘Ded’. The general problem of HCI for them was to design a text editor which would enable the user to enter text, review it, add to it, to reorganise its structure and to print it. In addition, the editor was to be easy to use.
They characterise their practice as ‘production’ (implementation as used here) suffused by design activity. Indeed, their view is that Ded was not designed but evolved. There was always a fully working version of the text editor to be discussed, even from the very early days.
Figure 4. Conception of HCI as a Craft Discipline

The evolution, however, was informed by ‘user interface principles’ (which they sometimes call theories and at other times call design ideas) which they invented themselves, tried out on Ded, retained if successful and reformulated if unsuccessful. The status of the principles at the time of their use would be termed here craft discipline knowledge or heuristics. (Subsequent validation of the heuristics as other than craft knowledge would of course be possible, and so change this status.) For example, ‘to indicate to users exactly what they are doing, try providing rapid feedback for every keypress’. Most feedback was embodied in changes to the display (cursor movements, characters added or deleted, etc.) which were visible to the user. However, if the effect of a keypress was not visible, there was no effect, but a bell rang to let the user know. In this way, the craft heuristic supporting the SE craft practice – by informing the design of the computer interacting with the human – can be expressed as: ‘if key depression and no display change, then ring bell’. The heuristic also supported HF craft practice – by informing the design of the human interacting with the computer. It may be expressed as: ‘if key pressed and no display change seen, and bell heard, then understand no effect of keypress (other than bell ring)’.
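Expressed in this if-then form, the heuristic is concrete enough to sketch in code. A minimal illustration in Python, with the toy editor and every name invented here for the purpose rather than taken from Ded:

    class TinyEditor:
        # A toy stand-in for a display editor.
        def __init__(self):
            self.text = ""
            self.bell_rung = False

        def display_state(self):
            return self.text

        def apply(self, key):
            if key.isprintable():
                self.text += key  # visible effect: the character appears
            # unrecognised keys leave the display unchanged

        def ring_bell(self):
            self.bell_rung = True  # stands in for an audible bell

    def handle_keypress(editor, key):
        # The SE form of the heuristic: 'if key depression and no
        # display change, then ring bell'.
        before = editor.display_state()
        editor.apply(key)
        if editor.display_state() == before:
            editor.ring_bell()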
Another example of a craft heuristic used by Bornat and Thimbleby (and one introduced to them by a colleague) is ‘to ensure that information in the computer is what the user thinks it is, try using only one mode’. The heuristic supported SE practice, informing the design of the computer interacting with the human – ‘if text displayed, and cursor under a character, and key depression, then insert character before cursor position’. The heuristic also supported HF practice, informing the design of the human interacting with the computer – ‘if text seen, and cursor located under a character, and key has been pressed, then only the insertion of a character before the cursor position can be seen to be effected (but nothing else)’.

In summary, the design of Ded by Bornat and Thimbleby illustrates the critical features of HCI as a craft discipline. They addressed the specific form of the general problem (general because their colleague suggested part of the solution – one ‘mode’ – and because their heuristics were made available to others practising the craft discipline). Their practices involved the iterative implementation and evaluation of the computer interacting with the human, and of the human interacting with the computer. They were supported by craft discipline heuristics – for example: ‘simple operations should be simple, and the complex possible’. Such craft knowledge was either implicit or informal; the concepts of ‘simple’ and ‘complex’ remaining undefined, together with their associated operations (the only definitions being those implicit in Ded and in the expertise of Bornat and Thimbleby, or informal in their report). And finally, the heuristics were generated for a purpose, tried out for their adequacy (in the case of Ded) and then retained or discarded (for further application to Ded). This too is characteristic of a craft discipline. Accepting that Ded met its requirements for both functionality (enter text, review text, etc.) and for usability (use straight away, etc.) – as claimed by Bornat and Thimbleby – it can be accepted as an example of good HCI craft practice.

To conclude this characterisation of HCI as a craft discipline, let us consider its potential for effectiveness. As earlier proposed (Section 2.3), an effective discipline is one whose practices successfully solve its general problem, whilst incurring acceptable costs in acquiring and applying the knowledge supporting those practices (see Dowell & Long [1988]). HCI as a craft discipline will be evaluated in general for its effectiveness in solving the general problem of designing humans and computers interacting, as exemplified by Bornat and Thimbleby’s development of Ded in particular.

Consideration of HCI as a craft discipline suggests that it fails to be effective (Dowell & Long [manuscript submitted for publication]). The first explanation of this – and one that may at first appear paradoxical – is that the (public) knowledge possessed by HCI as a craft discipline is not operational. That is to say, because it is either implicit or informal, it cannot be directly applied by those who are not associated with the generation of the heuristics or exposed to their use. If the heuristics are implicit in practice, they can be applied by others only by means of example practice. If the heuristics are informal, they can be applied only with the help of guidance from a successful practitioner (or by additional, but unvalidated, reasoning by the user). For example, the heuristic ‘simple operations should be simple, and the complex possible’ could not be implemented without the help of Bornat and Thimbleby or extensive interpretation by the designer.
The heuristic provides insufficient information for its operationalisation. In addition, since craft heuristics cannot be directly applied to practice, practice cannot be easily planned and coordinated. Further, when HF and SE design practice are allocated to different people or groups, practice cannot easily be integrated. (Bornat was responsible for both HF and SE design practice and was also final arbiter of design solutions.) Thus, with respect to the requirement for its knowledge to be operational, the HCI craft discipline fails to be effective.

If craft knowledge is not operational, then it is unlikely to be testable – for example, whether the ‘simple’ operations when implemented are indeed ‘simple’, and whether the ‘complex’ operations when implemented are indeed ‘possible’. Hence, the second reason why HCI as a craft discipline fails to be effective is because there is no guarantee that practice applying HCI craft knowledge will have the consequences intended (guarantees cannot be provided if testing is precluded). There is no guarantee that its application to designing humans and computers interacting will result in their performing work effectively. For example, the heuristic of providing rapid feedback in Ded does not guarantee that users know what they are doing, because they might not understand the contingencies of the feedback. (However, it would be expected to help understanding, at least to
some extent, and more often than not). Thus, with respect to the guarantee that knowledge applied by practice will solve the general HCI problem, the HCI craft discipline fails to be effective.

If craft knowledge is not testable, then neither is it likely to be generalisable – for example, whether ‘simple’ operations that are simple when implemented in Ded are also ‘simple’ when implemented in a different text editor. Hence, the third explanation of the failure of HCI as a craft discipline to be effective arises from the absence of generality of its knowledge. To be clear, if being operational demands that (public) discipline knowledge can be directly applied by others than those who generated the knowledge, then being general demands that the knowledge be guaranteed to be appropriate in instances other than those in which it was generated. Yet, the knowledge possessed by HCI as a craft discipline applies only to those problems already addressed by its practice, that is, in the instances in which it was generated. Bornat and Thimbleby’s heuristics for solving the design problem of Ded may have succeeded in this instance, but the ability of the heuristics to support the solution of other design problems is unknown and, until a solution is attempted, unknowable. The suitability of the heuristics ‘ignore deficiencies of the terminal hardware’ and ‘undo one keystroke at a time’ for a system controlling the processes of a nuclear power plant could only be established by implementation and evaluation in the context of the power plant. In the absence of a well defined general scope for the problems to be addressed by the knowledge supporting HCI craft practice, each problem of designing humans and computers interacting has to be solved anew. Thus, with respect to the generality of its knowledge, the HCI craft discipline fails to be effective.

Further consideration of HCI as a craft discipline suggests that the costs incurred in generating, and so in acquiring craft knowledge, are few and acceptable. For example, Bornat and Thimbleby generated their design heuristics as required, that is – as evaluation showed the implementation of one heuristic to fail. Further, heuristics can be easily communicated (if not applied) and applied now (if applicable). Thus, with respect to the costs of acquiring its knowledge, HCI as a craft discipline would seem to be effective.

In summary, although the costs of acquiring its knowledge would appear acceptable, and although its knowledge when applied by practice sometimes successfully solves the general problem of designing humans and computers interacting to perform work effectively, the craft discipline of HCI is ineffective because it is generally unable to solve the general problem. It is ineffective because its knowledge is neither operational (except in practice itself), nor generalisable, nor guaranteed to achieve its intended effect – except as the continued success of its practice and its continued use by successful craftspeople.

3.2. Conception of HCI as an Applied Science Discipline

The discipline of science uses scientific knowledge (in the form of theories, models, laws, truth propositions, hypotheses, etc.) to support the scientific practice (analytic, empirical, etc.) of solving the general problem of explaining and predicting the phenomena within its scope (structural, behavioural, etc.) (see Section 3.1). Science solves its general problem by hypothesis and test.
Hypotheses may be based on deduction from theory or induction from regularities of structure or behaviour associated with the phenomena. Scientific knowledge is explicit and formal, operational, testable and generalisable. It is therefore refutable (if not provable; Popper [1959]). Scientific disciplines can be associated with both HF – for example, psychology, linguistics, artificial intelligence, etc. – and SE – for example, computer science, artificial intelligence, etc. Psychology explains and predicts the phenomena of the mental life and behaviour of humans (for example, the acquisition of cognitive skill (Anderson [1983])); computer science explains and predicts the phenomena of the computability of computers as Turing-compatible machines (for example, as concerns abstract data types (Scott [1976])). An applied science discipline is one which recruits scientific knowledge to the practice of solving its general problem – a design problem. HCI as an applied science discipline uses scientific knowledge
as an aid to addressing the general problem of designing humans and computers interacting to perform work effectively. HCI as an applied science is represented schematically in Figure 5.

Figure 5. Conception of HCI as an Applied Science Discipline

An example of psychological science knowledge which might be recruited to support the HF practice concerns the effect of feedback on sequences of behaviour, for example, noise and touch on keyboard operation, and confirmatory feedback on the sending of electronic messages (Hammond [1987]). (Feedback is chosen here because it was also used to exemplify craft discipline knowledge (see Section 3.1) and the contrast is informative.) Psychology provides the following predictive truth proposition concerning feedback: ‘controlled sequences need confirmatory feedback (both required and redundant); automated sequences only need required feedback during the automated sequence’. (The research supporting this predictive (but also explanatory) proposition would be expected to have defined and operationalised the terms – ‘feedback’, ‘controlled’, etc. – and to have reported the empirical data on which the proposition is based.)
However, as it stands, the proposition cannot contribute to the solution of the HF design problem such as that posed by the development of the text-editor Ded (Bornat & Thimbleby [1989] – see Section 3.1). The proposition only predicts the modifications of behaviour sequences by feedback under a given set of conditions. It does not prescribe the feedback required by Ded to achieve effective performance of work (enter text, review it, etc.; to be usable straight away, etc.).

Predictive psychological knowledge can be made prescriptive. For example, Hammond transforms the predictive truth proposition concerning feedback into the following prescriptive proposition (or ‘guideline’): “When a procedure, task or sequence is not automatic to users (either because they are novice users or because the task is particularly complex or difficult), provide feedback in a number of complementary forms. Feedback should be provided both during the task sequence, to inform the user that things are progressing satisfactorily or otherwise, and at completion, to inform the user that the task sequence has been brought to a close satisfactorily or otherwise”. However, although prescriptive, it is so with respect to the modifiability of sequences of behaviour and not with respect to the effective performance of work. Although application of the guideline might be expected to modify behaviour (for example, decrease errors and increase speed), there is no indication of how the modification (either in absolute terms, or relative to other forms of feedback or its absence) would ensure any particular desired effective performance of work. Nor can there be, since its prescriptive form has not been characterised, operationalised, tested, and generalised with respect to design for effective performance (but only the knowledge on which it is based with respect to behavioural phenomena). As a result, the design of a system involving feedback, configured in the manner prescribed by the guideline, would still necessarily proceed by implementation, evaluation, and iteration. For example, although Bornat and Thimbleby appear not to have provided complementary feedback for the novice users of Ded, but only feedback by keypress (and not in addition on sequence completion – for example, at the end of editing a command), their users appear to have achieved the desired effective performance of work of entering text, using Ded straight away, etc.

Computer science knowledge might similarly be recruited to support SE practice in solving the problem of designing computers interacting with humans to perform work effectively. For example, explanatory and predictive propositions concerning computability, complexity, etc. might be transformed into prescriptive propositions informing system implementation, perhaps in ways similar to the attempt to achieve ‘effective computability’ (Kapur & Srivas [1988]). Alternatively, predictive computer science propositions might support general prescriptive SE principles, such as modularity, abstraction, hiding, localization, uniformity, completeness, confirmability, etc. (Charette [1986]). These general principles might in turn be used to support specific principles to solve the SE design problem of computers interacting with humans.
However, as in the case of psychology, for as long as the general problem of computer science is the explanation and prediction of computability, and not the design of computers interacting with humans to perform work effectively, computer science knowledge cannot be prescriptive with respect to the latter. Whatever computer science knowledge (for example, use of abstract data types) or general SE principles (for example, modularity) informed or could have informed Bornat and Thimbleby’s development of Ded, the design would still have had to proceed by implementation, evaluation and iteration, because neither the computer science knowledge nor the SE principles address the problem of designing for the effective performance of work – entering text, using Ded straight away, etc.

To illustrate HCI as an applied science discipline more completely, there follows a detailed example taken from a case history reporting the design of a computer-aided learning system to induct new undergraduates into their field of study – cognitive psychology (Hammond & Allinson [1988]). Hammond and Allinson called upon three areas of psychological knowledge, concerned with understanding and learning, to support the design of their system. These were ‘encoding specificity’
theory (Tulving [1972]), ‘schema’ theory (Mandler [1979]), and ‘depth of processing’ theory (Craik & Lockhart [1972]). Only the first will be used as an example here.

‘Encoding specificity’ and ‘encoding variability’ explain and predict people’s memory behaviours. ‘Encoding specificity’ asserts that material can be recalled if it contains distinctive retrieval cues that can be generated at the time of recall. ‘Encoding variability’ asserts that multiple exposure to the same material in different contexts results in easier recall, since the varied contexts will result in a greater number of potential retrieval cues. On the basis of this psychological knowledge, Hammond and Allinson construct the guideline or principle: ‘provide distinctive and multiple forms of representation.’ They followed this prescription in their learning system by using the graphical and dynamic presentation of materials, working demonstrations and varied perspectives of the same information. However, although the guideline might have been expected to modify learning behaviour towards that of the easier recall of materials, the system design would have had to proceed by implementation, evaluation, and iteration. The theory of encoding specificity does not address the problem of the design of effective learning, in this case – new undergraduate induction, and the guideline has not been defined, operationalised, tested or generalised with respect to effective learning. Effective induction learning might follow from application of the guideline, but equally it might not (in spite of materials being recalled).

Although Hammond and Allinson do not report whether computer science knowledge was recruited to support the solution of the SE problem of designing the computer interacting with the undergraduates, nor whether general SE principles were recruited, the same conclusion would follow as for the use of psychological knowledge. Effective induction learning performance might follow from the application of notions such as effective computability, or of principles such as modularity, but equally it might not (in spite of the computer’s program being more computably effective and better structured).

In summary, the design of the undergraduate induction system by Hammond and Allinson illustrates the critical features of HCI as an applied science discipline. They addressed the specific form of the general problem (general because the knowledge and guidelines employed were intended to support a wide range of designs). Their practice involved the application of guidelines, the iterative implementation of the interacting computer and interacting human, and their evaluation. The implementation was supported by the use of psychological knowledge which formed the basis for the guidelines. The psychological knowledge (encoding specificity) was defined, operationalised, tested and generalised. The guideline ‘provide distinctive and multiple forms of representation’ was neither defined, operationalised, tested nor generalised with respect to effective learning performance.

Finally, consider the effectiveness of HCI as an applied science discipline. An evaluation suggests that many of the conclusions concerning HCI as a craft discipline also hold for HCI as an applied science discipline. First, its science knowledge cannot be applied directly, not – as in the case of craft knowledge – because it is implicit or informal, but because the knowledge is not prescriptive; it is only explanatory and predictive.
Its scope is not that of the general problem of design. The theory of encoding specificity is not directly applicable. Second, the guidelines based on the science knowledge, which are not predictive but prescriptive, are not defined, operationalised, tested or generalised with respect to desired effective performance. Their selection and application in any system would be a matter of heuristics (and so paradoxically of good practice). Even if the guideline of providing distinctive and multiple forms of representation worked in the case of undergraduate induction, it could not be generalised on the basis of this good practice alone. Third, the application of guidelines based on science knowledge does not guarantee the consequences intended, that is effective performance. The provision of distinctive and multiple forms of representation may enhance learning behaviours, but not necessarily such as to achieve the effective undergraduate induction desired.
HCI as an applied science discipline, however, differs in two important respects from HCI as a craft discipline. Science knowledge is explicit and formal, and so supports reasoning about the derivation of guidelines, their solution and application (although one might have to be a discipline specialist so to do). Second, science knowledge (of encoding specificity, for example) would be expected to be more correct, coherent and complete than common sense knowledge concerning learning and memory behaviours.

Further, consideration of HCI as an applied science discipline suggests that the costs incurred in generating, and so in acquiring applied science knowledge, are both high (in acquiring science knowledge) and low (in generating guidelines). Whether the costs are acceptable depends on the extent to which the guidelines are effective. However, as indicated earlier, they are neither generalisable nor offer guarantees of effective performance.

In summary, although its knowledge when applied by practice in the form of guidelines sometimes solves the general problem of designing humans and computers interacting to perform work effectively, the applied science discipline is ultimately ineffective because it is generally unsuccessful in solving the general problem and its costs may be unacceptable. It fails to be effective principally because its knowledge is not directly applicable and because the guidelines based on its knowledge are neither generalisable, nor guaranteed to achieve their intended effect.

3.3. Conception of HCI as an Engineering Discipline

The discipline of engineering may characteristically solve its general problem (of design) by the specification of designs before their implementation. It is able to do so because of the prescriptive nature of its discipline knowledge supporting those practices – knowledge formulated as engineering principles. Further, its practices are characterised by their aim of ‘design for performance’. Engineering principles may enable designs to be prescriptively specified for artefacts, or systems, which when implemented demonstrate a prescribed and assured performance. And further, engineering disciplines may solve their general problem by exploiting a decompositional approach to design. Designs specified at a general level of description may be systematically decomposed until their specification is possible at a level of description of their complete implementation. Engineering principles may assure each level of specification as a representation of the previous level.

A conception of HCI as an engineering discipline is also apparent (for example: Dix & Harrison [1987]; Dowell & Long [manuscript submitted for publication]). It is a conception of HCI discipline knowledge as (ideally) constituted of (HF and SE) engineering principles, and its practices (HF and SE practices) as (ideally) specifying then implementing designs. This section summarises the conception (schematically represented in Figure 6) and attempts to indicate the effectiveness of such a discipline.

The conception of HCI engineering principles assumes the possibility of a codified, general and testable formulation of HCI discipline knowledge which might be prescriptively applied to designing humans and computers interacting to perform work effectively. Such principles would be unequivocally formal and operational.
Indeed their operational capability would derive directly from their formality, including the formality of their concepts – for example, the concepts of ‘simple’ and ‘complex’ would have an explicit and consistent definition (see Section 3.1). The complete and coherent definition of concepts, as necessary for the formulation of HCI engineering principles, would occur within a public and consensus conception of the general problem of HCI. A proposal for the form of such a conception (Dowell & Long [manuscript submitted for publication]), intended to promote the formulation of HCI engineering principles, can be summarised here. It dichotomises ‘interactive worksystems’ which perform work, and ‘domains of application’ in which work originates, is performed, and has its consequences. An interactive worksystem is conceptualised as the interacting behaviours of a human (the ‘user’) and a computer:
it is a behavioural system. The user and computer constitute behavioural systems in their own right, and therefore sub-systems of the interactive worksystem. Behaviours are the trajectory of states of humans and computers in their execution of work. The behaviours of the interactive worksystem are reflexive with two independent structures, a human structure of the user and a hardware and software structure of the computer. The behaviours of the interactive worksystem are both physical and informational, and so also are its structures. Further, behaviour incurs a resource cost, distinguished as the ‘structural’ resource cost of establishing and maintaining the structure able to support behaviour, and the ‘behavioural’ resource cost of recruiting the structure to express behaviour.

Figure 6. Conception of HCI as an Engineering Discipline

The behaviours of an interactive worksystem intentionally effect, and so correspond with, transformations of objects. Objects are physical and abstract and exhibit the affordance for transformations arising from the state potential of their attributes. A domain of application is a class of transformation afforded by a class of objects. An organisation's requirements for specific transformations of objects are expressed as product goals; they motivate the behaviours of an interactive worksystem. The effectiveness of an interactive worksystem is expressed in the concept of performance. Performance assimilates concepts expressing the transformation of objects with regard to its
satisfying a product goal, and concepts expressing the resource costs incurred in realising that transformation. Hence, performance relates an interactive worksystem with a domain of application. A desired performance may be specified for any worksystem attempting to satisfy a particular product goal.

The concepts described enable the expression of the general problem addressed by an engineering discipline of HCI as: specify then implement user behaviour {U} and computer behaviour {C}, such that {U} interacting with {C} constitutes an interactive worksystem exhibiting desired performance (PD). It is implicit in this expression that the specification of behaviour supposes and enables specification of the structure supporting that behaviour. HCI engineering principles are conceptualised as supporting the practices of an engineering HCI discipline in specifying implementable designs for the interacting behaviours of both the user and the computer that would achieve PD.

This conception of the general problem of an engineering discipline of HCI supposes its further decomposition into two related general problems of different particular scopes. One problem engenders the discipline of HF, the other the discipline of SE; both disciplines are incorporated in HCI. The problem engendering the discipline of SE is expressed as: specify then implement {C}, such that {C} interacting with {U} constitutes an interactive worksystem exhibiting PD. The problem engendering the discipline of HF is expressed as: specify then implement {U}, such that {U} interacting with {C} constitutes an interactive worksystem exhibiting PD.

The disciplines of SE and HF might each possess their own principles, and the abstracted form of those principles is apparent. An HF engineering principle would take as input a performance requirement of the interactive worksystem and a specified behaviour of the computer, and prescribe the necessary interacting behaviour of the user. An SE engineering principle would take as input the performance requirement of the interactive worksystem and a specified behaviour of the user, and prescribe the necessary interacting behaviour of the computer (a schematic rendering of these complementary forms is sketched below). Given the independence of their principles, the engineering disciplines of SE and HF might each pursue their own practices, having commensurate and integrated roles in the development of interactive worksystems. Whilst SE specified and implemented the interacting behaviours of computers, HF would specify and implement the interacting behaviours of users. Together, the practices of SE and HF would aim to produce interactive worksystems which achieved PD.

It is the case, however, that the contemporary discipline of HF does not possess engineering principles of this idealised form. Dowell & Long [manuscript submitted for publication] have postulated the form of potential HF engineering principles for application to the training of designers interacting with particular visualisation techniques of CAD systems. A visualisation technique is a graphical representational form within which images of artefacts are displayed; for example, the 2½D wireframe representational form of the Necker cube. The supposed principle would prescribe the visual search strategy {u} of the designer interacting with a specified display behaviour {c} of the computer (supported by a specified visualisation technique) to achieve a desired performance in the ‘benchmark’ evaluation of a design.
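To make the abstracted forms of the two principle types concrete, the following minimal sketch renders them as function signatures. It is an illustration only: the names, types and return values are assumptions of this rewrite, not constructs proposed in the paper.

from dataclasses import dataclass

@dataclass
class Behaviour:
    """A specified behaviour of the user {U} or the computer {C}."""
    description: str

@dataclass
class DesiredPerformance:
    """PD: desired performance of the interactive worksystem."""
    quality: float  # desired quality of object transformations
    cost: float     # acceptable resource costs

def hf_engineering_principle(pd: DesiredPerformance, c: Behaviour) -> Behaviour:
    # HF form: from PD and a specified computer behaviour {C},
    # prescribe the necessary interacting user behaviour {U}.
    return Behaviour(f"user behaviour achieving PD given: {c.description}")

def se_engineering_principle(pd: DesiredPerformance, u: Behaviour) -> Behaviour:
    # SE form: from PD and a specified user behaviour {U},
    # prescribe the necessary interacting computer behaviour {C}.
    return Behaviour(f"computer behaviour achieving PD given: {u.description}")

The point of the sketch is the symmetry of the two forms: each discipline's principle takes the other discipline's specified behaviour, together with the worksystem performance requirement, as input.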
Neither does the contemporary discipline of SE possess engineering principles of the idealised form discussed. However, formal models of the interaction of display editors proposed by Dix and Harrison [1987] may show potential for development in this respect. For example, Dix and Harrison model the (behavioural) property of a command that is ‘passive’ – a command having no effect on the ‘data’ component of the computer’s state. Defining a projection from state into result as r: S → R, a passive command c has the property that r(s) = r(c(s)). Although the model has a formal expression, the user behaviour interacting with the (passive) computer behaviour is only implied, and the model makes no reference to desired performance.
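The passive-command property can be rendered executably. In the sketch below, the toy state, commands and result projection are assumptions introduced for illustration; only the defining equation r(s) = r(c(s)) is taken from Dix and Harrison's model.

State = dict  # a toy editor state with 'data' and 'cursor' components

def result(s: State) -> str:
    """Projection r: S -> R, here taken to be the data component."""
    return s["data"]

def move_cursor(s: State) -> State:
    """A passive command: changes the cursor, never the data."""
    return {**s, "cursor": s["cursor"] + 1}

def delete_char(s: State) -> State:
    """A non-passive command: edits the data component."""
    return {**s, "data": s["data"][:-1]}

def is_passive(command, states) -> bool:
    """Check r(s) == r(c(s)) over a sample of states."""
    return all(result(command(s)) == result(s) for s in states)

sample = [{"data": "hello", "cursor": 0}, {"data": "abc", "cursor": 2}]
assert is_passive(move_cursor, sample)      # passive
assert not is_passive(delete_char, sample)  # not passive

As the paper notes, such a model constrains the computer's behaviour but says nothing about the interacting user behaviour or about desired performance.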
It is likely, however, that some would claim the (idealised) conception of HCI as an engineering discipline to be unrealiseable. They might justify their objection by claiming the general problem of HCI to be ‘too soft’ to allow the development of engineering principles – that human behaviour is too indeterministic (too unspecifiable) to be subject to such principles. Yet human behaviour can be usefully deterministic to some degree – as demonstrated, for example, by the response of driver behaviour to traffic system protocols. There may well be at least a commensurate potential for the development of HCI engineering principles.

To conclude this summary description of the conception of an engineering discipline of HCI, we might consider the potential effectiveness of such a discipline. As before, effectiveness is evaluated as the success with which the discipline might solve its general problem, and the costs incurred with regard to both the acquisition and application of knowledge. First, HCI engineering principles would be generalisable knowledge; hence, application of principles to solving each new design problem could be direct and efficient with regard to costs incurred. The discipline would be effective. Second, HCI engineering principles would be operational, and so their application would be specifiable. A further consequence is that the roles of HF and SE in systems development could be specified and integrated, providing better planned and executed development programmes. The minimisation of application costs would result in an effective discipline. Third, engineering principles would have a guaranteed efficacy: because they would be operational, they would be testable, and their reliability and generality could be specified. Their consequent assurance of product quality would render an engineering discipline of HCI effective. Finally, consideration of HCI as an engineering discipline suggests that the costs of formulating engineering principles would be severe. A research programme committed to formulating even a basic corpus of HCI engineering principles might only be conceived as a long-term endeavour of extreme scale. In summary, although the costs of their formulation would be severe, the potential of a corpus of engineering principles for improving product quality is large, and so also might be the potential effectiveness of an engineering discipline of HCI.

4. Summary and Conclusions

This paper has developed the Conference theme of ‘the theory and practice of HCI’. Generalisation of the theme, in terms of a framework for conceptions of the HCI discipline, has shown that in addition to theory and practice, the theme needs explicitly to reference the general problem addressed by the discipline of HCI and the scope of that problem. The proposal made here is that the general problem of HCI is the design of humans and computers interacting to perform work effectively. The qualification of the general problem as ‘design’, and the addition to the scope of that problem of ‘… to perform work effectively’, has important consequences for the different conceptions of HCI (see Section 3). For example, since design is not the general problem of science, scientific knowledge (for example, psychology or computer science) cannot be recruited directly to the practice of solving the general problem of design (see Barnard, Grudin & Maclean [1989]).
Further, certain attempts to develop complete engineering principles for HCI fail to qualify as such, because they make no reference to ‘… to perform work effectively’ (Dix & Harrison [1987]; Thimbleby [1984]). Development of the theme indicated there might be no singular conception of the discipline of HCI. Although all conceptions of HCI as a discipline necessarily include the notion of practice (albeit of different types), the concept of theory is more readily associated with HCI as an applied science discipline, because scientific knowledge in its most correct, coherent and complete form is typically expressed as theories. Craft knowledge is more typically expressed as heuristics. Engineering
knowledge is more typically expressed as principles. If HCI knowledge is limited to theory, and theory is presumed to be that of science, then other conceptions of HCI as a discipline are excluded (for example, Dowell & Long [manuscript submitted for publication]).

Finally, generalisation of the Conference theme has identified two conceptions of HCI as a discipline as alternatives to the applied science conception implied by the theme. The other two conceptions are HCI as a craft discipline and HCI as an engineering discipline. Although all three conceptions address the general problem of HCI, they differ concerning the knowledge recruited to solve the problem. Craft recruits heuristics; applied science recruits theories expressed as guidelines; and engineering recruits principles. They also differ in the practices they espouse to solve the general problem. Craft typically implements, evaluates and iterates (Bornat & Thimbleby [1989]); applied science typically selects guidelines to inform implementation, evaluation and iteration (although guidelines may also be generated on the basis of extant knowledge, e.g. Hammond & Allinson [1988]); and engineering typically would specify and then implement (Dowell & Long [1988]).

The different types of knowledge and the different types of practice have important consequences for the effectiveness of any discipline of HCI. Heuristics are easy to generate, but offer no guarantee that the design solution will exhibit the properties of performance desired. Scientific theories are difficult and costly to generate, and the guidelines derived from them (like heuristics) offer no final guarantee concerning performance. Engineering principles would offer guarantees, but are predicted to be difficult, costly and slow to develop.

The development of the theme and the expression of the conceptions of HCI as a discipline – as craft, applied science and engineering – can usefully be employed to explicate issues raised by, and of concern to, the HCI community. Thus, Landauer’s complaint (Landauer [1987a]) that psychologists have not brought to HCI an impressive tool kit of design methods or principles can be understood as resulting from the disjunction between psychological principles, which explain and predict phenomena, and prescriptive design principles, which would be required to guarantee the effective performance of work (see Section 3.2). Since research has primarily been directed at establishing the psychological principles, and not at validating the design guidelines, the absence of an impressive tool kit of design methods or principles is perhaps not so surprising.

A further issue which can be explained concerns the relationship between HF and SE during system development. In particular, there is a complaint by SE that the contributions of HF to system development are ‘too little’, ‘too late’ and unemployable (Walsh, Lim, Long & Carver [1988]). Assuming HCI to be an applied science discipline, HF contributions are too little because psychology does not address the general problem of design and so fails to provide a set of principles for the solution of that problem. HF contributions are too late because they consist largely of evaluations of designs already implemented, but implemented without the benefit of HF. They are unemployable because they were never specified, and because implemented designs can be difficult, if not impossible, and costly to modify.
Within an HCI engineering discipline, HF contributions would be adequate (because within the scope of the discipline’s problem); on time (because specifiable); and implementable (because specified). Landauer’s plea (Landauer [1987b]) that HF should extend its practice from implementation evaluation to user requirements identification and the creation of designs to satisfy those requirements can be similarly explicated. Lastly, Carroll and Campbell’s claim (Carroll & Campbell [1988]) that HCI research has been more successful at developing methodology than theory can be explicated by: the need for guidelines to express psychological knowledge, and the need to validate those guidelines formally; the absence of engineering principles; and the importation of psychology research methods into HCI, together with the simulation of good (craft) practice. The methodologies, however, are not methodological principles which guarantee the solution of the design problem (Dowell & Long [manuscript submitted for publication]), but procedures to be tailored anew in the manner of a craft discipline. Thus, relating the conceptions of HCI as a set of possible disciplines provides insight into why HCI research has been more successful at developing methodologies than theories.
In addition to explicating issues already formulated, the development of the Conference theme and the expression of the conceptions of HCI as a discipline raise two novel issues. The first concerns reflexivity, both with respect to the general design problem and with respect to the creation of discipline knowledge. It is often assumed that only HCI as an applied science discipline (by means of guidelines) and as an engineering discipline (by means of principles) are reflexive with respect to the general design problem. The conception of HCI as a craft discipline, however, has shown that it is similarly reflexive – by means of heuristics. Concerning the creation of discipline knowledge, it is often assumed that only the solution of the general discipline problem requires the reflexive cognitive act – of reason and intuition concerning the objects of activity (Kant [1781]). However, the conceptions of HCI as a craft discipline, as an applied science discipline, and as an engineering discipline suggest that the initial creation of discipline knowledge, whether heuristics, guidelines or principles, in all cases requires a reflexive cognitive act involving intuition and reason. Thus, contrary to common assumption, the craft, applied science and engineering conceptions of the discipline of HCI are similarly reflexive with regard to the general design problem. The initial generation of their albeit different discipline knowledges requires in each case the reflexive cognitive act of reason and intuition.

The second novel issue raised by the development of the Conference theme and the conceptions of HCI as a discipline is the relationship between the different conceptions. For example, the different conceptions of HCI and their associated paradigm activities might be considered to be mutually exclusive and uninformative, one with respect to the other. Alternatively, one conception and its associated activities might be considered to be mutually supportive with respect to another. For example, engineering principles might be developed bottom-up on the basis of inductions from good craft practice. Alternatively, engineering principles might be developed top-down on the basis of deductions from scientific theory – both from psychology and from computer science.

It would be possible to advance a rationale justifying either mutual exclusion of conceptions or mutual support. The case for mutual exclusion would be based on the fact that the form of their knowledge and practice differs, and so one conception would be unable directly to inform another. For example, craft practice will not develop a theory which can be directly assimilated to science; science will not develop design principles which can be directly recruited to engineering. Thus, the case for mutual exclusion is strong. However, there is a case for mutual support of conceptions, and it is presented here as a final conclusion. The case is based on the claim made earlier that the creation of discipline knowledge of each conception of HCI requires a reflexive cognitive act of reason and intuition. If the claim is accepted, the reflexive cognitive act of one conception might be usefully but indirectly informed by the discipline knowledge of another.
For example, the design ideas, or heuristics, which formed part of the craft practice of Bornat and Thimbleby in the 1970s (Bornat & Thimbleby [1989]), undoubtedly contributed to Thimbleby’s more systematic formulation (Thimbleby [1984]) and the formal expression by Dix and Harrison (Dix & Harrison [1987]). Although the principles fail to address the effectiveness of work and so fail to qualify as HCI engineering principles, their development towards that end might be encouraged by mutual support from engineering conceptions of HCI. Likewise, scientific concepts such as compatibility (Long [1987]) may indirectly inform the development of principles relating users’ mental structures to the analytic structure of a domain of application (Long [1989]), and even provide an indirect rationalisation for the concepts themselves and their relations with other associated concepts.

Mutual support of conceptions, as opposed to mutual exclusion, has two further advantages. First, it maximises the exploitation of what is known and practised in HCI. The current success of HCI is not such that it can afford to ignore potential contributions to its own advancement. Second, it encourages the notion of a community of HCI superordinate to that of any single discipline conception. The novelty and complexity of the enterprise of developing knowledge to support the solution of the general problem of designing humans and computers interacting to perform work effectively requires every encouragement for the establishment and maintenance of such a community. Thus, the mutual support of different conceptions of HCI as a discipline is recommended.
References

J R Anderson [1983], The Architecture of Cognition, Harvard University Press, Cambridge MA.

P Barnard, J Grudin & A Maclean [1989], “Developing a Science Base for the Naming of Computer Commands”, in Cognitive Ergonomics and Human Computer Interaction, J B Long & A D Whitefield, eds., Cambridge University Press, Cambridge.

R Bornat & H Thimbleby [1989], “The Life and Times of Ded, Text Display Editor”, in Cognitive Ergonomics and Human Computer Interaction, J B Long & A D Whitefield, eds., Cambridge University Press, Cambridge.

P Buckley [1989], “Expressing Research Findings to have a Practical Influence on Design”, in Cognitive Ergonomics and Human Computer Interaction, J B Long & A D Whitefield, eds., Cambridge University Press, Cambridge.

J M Carroll & R L Campbell [1988], “Artifacts as Psychological Theories: the Case of Human Computer Interaction”, IBM Research Report RC 13454 (60225), T.J. Watson Research Center, Yorktown Heights, NY 10598.

R N Charette [1986], Software Engineering Environments, Intertext Publications/McGraw-Hill, New York.

F I M Craik & R S Lockhart [1972], “Levels of Processing: A Framework for Memory Research”, Journal of Verbal Learning and Verbal Behavior, 11, 671-684.

A J Dix & M D Harrison [1987], “Formalising Models of Interaction in the Design of a Display Editor”, in Human-Computer Interaction – INTERACT’87, H J Bullinger & B Shackel, eds., North-Holland, Amsterdam, 409-414.

J Dowell & J B Long [1988], “Human-Computer Interaction Engineering”, in Designing End-User Interfaces, N Heaton & M Sinclair, eds., Pergamon Infotech, Oxford.

J Dowell & J B Long, “Towards a Conception for an Engineering Discipline of Human Factors”, (manuscript submitted for publication).

P Gilligan & J B Long [1984], “Videotext Technology: an Overview with Special Reference to Transaction Processing as an Interactive Service”, Behaviour and Information Technology, 3, 41-47.

N Hammond & L Allinson [1988], “Development and Evaluation of a CAL System for Non-Formal Domains: the Hitchhiker's Guide to Cognition”, Computer Education, 12, 215-220.

N Hammond [1987], “Principles from the Psychology of Skill Acquisition”, in Applying Cognitive Psychology to User-Interface Design, M Gardiner & B Christie, eds., John Wiley and Sons, Chichester.

I Kant [1781], The Critique of Pure Reason, Second Edition, translated by Max Muller, Macmillan, London.

D Kapur & M Srivas [1988], “Computability and Implementability: Issues in Abstract Data Types”, Science of Computer Programming, 10.

T K Landauer [1987a], “Relations Between Cognitive Psychology and Computer System Design”, in Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, J M Carroll, ed., MIT Press, Cambridge MA.
T K Landauer [1987b], “Psychology as Mother of Invention”, in Proceedings of CHI + GI 1987, ACM, New York. ACM-0-89791-213-6/84/0004/0333.

J B Long [1989], “Cognitive Ergonomics and Human Computer Interaction: an Introduction”, in Cognitive Ergonomics and Human Computer Interaction, J B Long & A D Whitefield, eds., Cambridge University Press, Cambridge.

J Long [1987], “Cognitive Ergonomics and Human Computer Interaction”, in Psychology at Work, P Warr, ed., Penguin, England.

J M Mandler [1979], “Categorical and Schematic Organisation in Memory”, in Memory Organisation and Structure, C R Puff, ed., Academic Press, New York.

H H Pattee [1973], Hierarchy Theory: the Challenge of Complex Systems, Braziller, New York.

K R Popper [1959], The Logic of Scientific Discovery, Hutchinson, London.

D Scott [1976], “Logic and Programming”, Communications of the ACM, 20, 634-641.

H Thimbleby [1984], “Generative User Engineering Principles for User Interface Design”, in Proceedings of the First IFIP Conference on Human-Computer Interaction, Human-Computer Interaction – INTERACT’84, Vol. 2, B Shackel, ed., Elsevier Science, Amsterdam, 102-107.

E Tulving [1972], “Episodic and Semantic Memory”, in Organisation of Memory, E Tulving & W Donaldson, eds., Academic Press, New York.

P Walsh, K Y Lim, J B Long & M K Carver [1988], “Integrating Human Factors with System Development”, in Designing End-User Interfaces, N Heaton & M Sinclair, eds., Pergamon Infotech, Oxford.

Acknowledgement. This paper has greatly benefited from discussion with others and from their criticisms. In particular, we would like to thank: Andy Whitefield and Andrew Life, colleagues at the Ergonomics Unit, University College London; Charles Brennan of Cambridge University, and Michael Harrison of York University; and also those who attended a seminar presentation of many of these ideas at the MRC Applied Psychology Unit, Cambridge. The views expressed in the paper, however, are those of the authors.

 


Dowell and Long (1989)

 

Towards a Conception for an Engineering Discipline of Human Factors

John Dowell and John Long

Ergonomics Unit, University College London,

26, Bedford Way, London. WC1H 0AP.

abstract

This paper concerns one possible response of Human Factors to the need for better user-interactions of computer-based systems. The paper is in two parts. Part I examines the potential for Human Factors to formulate engineering principles. A basic pre-requisite for realising that potential is a conception of the general design problem addressed by Human Factors. The problem is expressed informally as: ‘to design human interactions with computers for effective working’. A conception would provide the set of related concepts which both expressed the general design problem more formally, and which might be embodied in engineering principles. Part II of the paper proposes such a conception and illustrates its concepts. It is offered as an initial and speculative step towards a conception for an engineering discipline of Human Factors.

In P. Barber and J. Laws (eds.) Special Issue on Cognitive Ergonomics, Ergonomics, 1989, vol. 32, no. 11, pp. 1513-1535.

Part I. Requirement for Human Factors as an Engineering Discipline of Human-Computer Interaction

1.1. Introduction

1.2. Characterisation of the Human Factors Discipline

1.3. State of the Human Factors Art

1.4. Human Factors Engineering Principles

1.5. The Requirement for an Engineering Conception for Human Factors

1.1. Introduction

Advances in computer technology continue to raise expectations for the effectiveness of its applications. No longer is it sufficient for computer-based systems simply ‘to work’, but rather, their contribution to the success of the organisations utilising them is now under scrutiny (Didner, 1988). Consequently, views of organisational effectiveness must be extended to take account of the (often unacceptable) demands made on people interacting with computers to perform work, and the needs of those people. Any technical support for such views must be similarly extended (Cooley, 1980).

With recognition of the importance of ‘human-computer interactions’ as a determinant of effectiveness (Long, Hammond, Barnard, and Morton, 1983), Cognitive Ergonomics is emerging as a new and specialist activity of Ergonomics or Human Factors (HF). Throughout this paper, HF is to be understood as a discipline which includes Cognitive Ergonomics, but only as it addresses human-computer interactions. This usage is contrasted with HF as a discipline which more generally addresses human-machine interactions.

HF seeks to support the development of more effective computer-based systems. However, it has yet to prove itself in this respect, and moreover, the adequacy of the HF response to the need for better human-computer interactions is of concern. For it continues to be the case that interactions result from relatively ad hoc design activities to which may be attributed, at least in part, the frequent ineffectiveness of systems (Thimbleby, 1984).

This paper is concerned to develop one possible response of HF to the need for better human-computer interactions. It is in two parts. Part I examines the potential for HF to formulate HF engineering principles for supporting its better response. Pre-requisite to the realisation of that potential, it concludes, is a conception of the general design problem it addresses. Part II of the paper is a proposal for such a conception.

The structure of the paper is as follows. Part I first presents a characterisation of HF (Section 1.2) with regard to: the general design problem it addresses; its practices providing solutions to that problem; and its knowledge supporting those practices. The characterisation identifies the relations of HF with Software Engineering (SE) and with the super-ordinate discipline of Human-Computer Interaction (HCI). The characterisation supports both the assessment of contemporary HF and the arguments for the requirement of an engineering HF discipline.

Assessment of contemporary HF (Section 1.3.) concludes that its practices are predominantly those of a craft. Shortcomings of those practices are exposed which indict the absence of support from appropriate formal discipline knowledge. This absence prompts the question as to what might be the

formal knowledge which HF could develop, and what might be the process of its formulation. By comparing the HF general design problem with other, better understood, general design problems, and by identifying the formal knowledge possessed by the corresponding disciplines, the potential for HF engineering principles is suggested (Section 1.4.).

However, a pre-requisite for the formulation of any engineering principle is a conception. A conception is a unitary (and consensus) view of a general design problem; its power lies in the coherence and completeness of its definition of the concepts which can express that problem. Engineering principles are articulated in terms of those concepts. Hence, the requirement for a conception for the HF discipline is concluded (Section 1.5.).

If HF is to be a discipline of the superordinate discipline of HCI, then the origin of a ‘conception for HF’ needs to be in a conception for the discipline of HCI itself. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline of HCI presented in Part II.

1.2. Characterisation of the Human Factors Discipline

HF seeks to support systems development through the systematic and reasoned design of human-computer interactions. As an endeavour, however, HF is still in its infancy, seeking to establish its identity and its proper contribution to systems development. For example, there is little consensus on how the role of HF in systems development is, or should be, configured with the role of SE (Walsh, Lim, Long, and Carver, 1988). A characterisation of the HF discipline is needed to clarify our understanding of both its current form and any conceivable future form. A framework supporting such a characterisation is summarised below (following Long and Dowell, 1989).

Most definitions of disciplines assume three primary characteristics: a general problem; practices, providing solutions to that problem; and knowledge, supporting those practices. This characterisation presupposes classes of general problem corresponding with types of discipline. For example, one class of general problem is that of the general design problem1 and includes the design of artefacts (of bridges, for example) and the design of ‘states of the world’ (of public administration, for example). Engineering and craft disciplines address general design problems.

Further consideration also suggests that any general problem has the necessary property of a scope, delimiting the province of concern of the associated discipline. Hence disciplines may also be distinguished from each other; for example, the engineering disciplines of Electrical and Mechanical Engineering are distinguished by their respective scopes of electrical and mechanical artefacts. So, knowledge possessed by Electrical Engineering supports its practices in solving the general design problem of designing electrical artefacts (for example, Kirchhoff’s Laws would support the analysis of branch currents for a given network design for an amplifier’s power supply).

Although rudimentary, this framework can be used to provide a characterisation of the HF discipline. It also allows a distinction to be made between the disciplines of HF and SE. First, however, it is required that the super-ordinate discipline of HCI be postulated. Thus, HCI is a discipline addressing a general design problem expressed informally as:

‘to design human-computer interactions for effective working’.

The scope of the HCI general design problem includes: humans, both as individuals, as groups, and as social organisations; computers, both as programmable machines, stand-alone and networked, and as functionally embedded devices within machines; and work, both with regard to individuals and the organisations in which it occurs (Long, 1989). For example, the general design problem of HCI

1 They are to be distinguished from the class of general scientific problem of the explanation and prediction of phenomena.

includes the problems of designing the effective use of navigation systems by aircrew on flight-decks, and the effective use of wordprocessors by secretaries in offices.

The general design problem of HCI can be decomposed into two general design problems, each having a particular scope. Whilst subsumed within the general design problem of HCI, these two general design problems are expressed informally as:

‘to design human interactions with computers for effective working’; and

‘to design computer interactions with humans for effective working’.

Each general design problem can be associated with a different discipline of the superordinate discipline of HCI. HF addresses the former, SE addresses the latter. With different – though complementary – aims, both disciplines address the design of human-computer interactions for effective working. The HF discipline concerns the physical and mental aspects of the human interacting with the computer. The SE discipline concerns the physical and software aspects of the computer interacting with the human.

The practices of HF and SE are the activities providing solutions to their respective general design problems and are supported by their respective discipline knowledge. Figure 1 shows schematically this characterisation of HF as a sub-discipline of HCI (following Long and Dowell, 1989). The following section employs the characterisation to evaluate contemporary HF.

1.3. State of the Human Factors Art

It would be difficult to reject the claim that the contemporary HF discipline has the character of a craft (at times even of a technocratic art). Its practices can justifiably be described as a highly refined form of design by ‘trial and error’ (Long and Dowell, 1989). Characteristic of a craft, the execution and success of its practices in systems development depends principally on the expertise, guided intuition and accumulated experience which the practitioner brings to bear on the design problem1.

It is also claimed that HF will always be a craft: that ultimately only the mind itself has the capability for reasoning about mental states, and for solving the under-specified and complex problem of designing user-interactions (see Carey, 1989); that only the designer’s mind can usefully infer the motivations underlying purposeful human behaviour, or make subjective assessments of the elegance or aesthetics of a computer interface (Bornat and Thimbleby, 1989).

The dogma of HF as necessarily a craft, whose knowledge may only be the accrued experience of its practitioners, is nowhere presented rationally. Notions of the indeterminism, or the unpredictability, of human behaviour are raised simply as a gesture. Since the dogma has support, it needs to be challenged to establish the extent to which it is correct, or to which it compels a misguided and counter-productive doctrine (see also, Carroll and Campbell, 1986).

Current HF practices exhibit four primary deficiencies which prompt the need to identify alternative forms for HF. First, HF practices are in general poorly integrated into systems development practices, nullifying the influence they might otherwise exert. Developers make implicit and explicit decisions with implications for user-interactions throughout the development process, typically without involving HF specialists. At an early stage of design, HF may offer only advice – advice which may all too easily be ignored and so not implemented. Its main contribution to the development of user-interactive systems is the evaluations it provides. Yet these are too often relegated to the closing stages of development programmes, where they can only suggest minor enhancements to completed designs because of the prohibitive costs of even modest re-implementations (Walsh et al., 1988).

Second, HF practices have a suspect efficacy. Their contribution to improving product quality in any instance remains highly variable. Because there is no guarantee that experience of one development programme is appropriate or complete in its recruitment to another, re-application of that experience cannot be assured of repeated success (Long and Dowell, 1989).

Third, HF practices are inefficient. Each development of a system requires the solving of new problems by implementation then testing. There is no formal structure within which experience accumulated in the successful development of previous systems can be recruited to support solutions to the new problems, except through the memory and intuitions of the designer. These may not be shared by others, except indirectly (for example, through the formulation of heuristics), and so experience may be lost and may have to be re-acquired (Long and Dowell, 1989).

1 The claimed craft status of HF practice remains unaffected by the counterclaim that science and, in particular, psychology, offers guidance to the designer. The guidance may be direct – by the designer’s familiarity with psychological theory and practice, or may be indirect – by means of guidelines derived from psychological findings. In both cases, the guidance can offer only advice which must be implemented then tested to assess its effectiveness. Since the general scientific problem is the explanation and prediction of phenomena, and not the design of artefacts, the guidance cannot be directly embodied in design specifications which offer a guarantee with respect to the effectiveness of the implemented design. It is not being claimed here that the application of psychology, directly or indirectly, cannot contribute to better practice or to better designs, only that a practice supported in such a manner remains a craft, because its practice is by implementation then test, that is, by trial and error (see also Long and Dowell, 1989).

Fourth, there are insufficient signs of systematic and intentional progress which will alleviate the three deficiencies of HF practices cited above. The lack of progress is particularly noticeable when HF is compared with the similarly nascent discipline of SE (Gries, 1981; Morgan, Shorter and Tainsh, 1988).

These four deficiencies are endemic to the craft nature of contemporary HF practice. They indict the tacit HF discipline knowledge consisting of accumulated experience embodied in procedures, even where that experience has been influenced by guidance offered by the science of psychology (see earlier footnote). Because the knowledge is tacit (i.e., implicit or informal), it cannot be operationalised, and hence the role of HF in systems development cannot be planned as would be necessary for the proper integration of the knowledge. Without being operationalised, its knowledge cannot be tested, and so the efficacy of the practices it supports cannot be guaranteed. Without being tested, its knowledge cannot be generalised for new applications and so the practices it can support will be inefficient. Without being operationalised, testable, and general, the knowledge cannot be developed in any structured way as required for supporting the systematic and intentional progress of the HF discipline.

It would be incorrect to assume the current absence of formality of HF knowledge to be a necessary response to the indeterminism of human behaviour. Both tacit discipline knowledge and ‘trial and error’ practices may simply be symptomatic of the early stage of development of the discipline1. The extent to which human behaviour is deterministic for the purposes of designing interactive computer-based systems needs to be independently established. Only then might it be known if HF discipline knowledge could be formal. Section 1.4. considers what form that knowledge might take, and Section 1.5. considers what might be the process of its formulation.

1.4. Human Factors Engineering Principles

HF has been viewed earlier (Section 1.2.) as comparable to other disciplines which address general design problems: for example, Civil Engineering and Health Administration. The nature of the formal knowledge of a future HF discipline might, then, be suggested by examining such disciplines. The general design problems of different disciplines, however, must first be related to their characteristic practices, in order to relate the knowledge supporting those practices. The establishment of this relationship follows.

The ‘design’ disciplines are ranged according to the ‘hardness’ or ‘softness’ of their respective general design problems. ‘Hard’ and ‘soft’ may have various meanings in this context. For example, hard design problems may be understood as those which include criteria for their ‘optimal’ solution (Checkland, 1981). In contrast, soft design problems are those which do not include such criteria; any solution is assessed as ‘better or worse’ relative to other solutions. Alternatively, the hardness of a problem may be distinguished by its level of description, or the formality of the knowledge available for its specification (Carroll and Campbell, 1986). However, here hard and soft problems will be distinguished generally by their determinism for the purpose of design, that is, by the need for design solutions to be determinate. Implicated in this distinction between problems are: the proliferation of variables expressed in a problem and their relations; the changes of variables and their relations, both with regard to their values and their number; and, more generally, complexity, where it includes factors other than those identified. The variables implicated in the HF general design problem are principally those of human behaviours and structures.

A discipline’s practices construct solutions to its general design problem. Consideration of disciplines indicates much variation in their use of specification as a practice in constructing solutions.

1 Such was the history of many disciplines: the origin of modern day Production Engineering, for example, was a nineteenth century set of craft practices and tacit knowledge.

This variation, however, appears not to be dependent on variations in the hardness of the general design problems. Rather, disciplines appear to differ in the completeness with which they specify solutions to their respective general design problems before implementation occurs. At one extreme, some disciplines specify solutions completely before implementation: their practices may be described as ‘specify then implement’ (an example might be Electrical Engineering). At the other extreme, disciplines appear not to specify their solutions at all before implementing them: their practices may be described as ‘implement and test’ (an example might be Graphic Design). Other disciplines, such as SE, appear characteristically to specify solutions partially before implementing them: their practices may be described as ‘specify and implement’. ‘Specify then implement’ and ‘implement and test’, therefore, would appear to represent the extremes of a dimension along which disciplines may be distinguished by their practices. It is a dimension of the completeness with which they specify design solutions.

 

Taken together, the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines such as Electrical Engineering and Graphic Design. The space is shown in Figure 2, including, for illustrative purposes, the speculative location of SE.

Two conclusions are prompted by Figure 2. First, a general relation may be apparent between the hardness of a general design problem and the realiseable completeness with which its solutions might be specified. In particular, a boundary condition is likely to be present beyond which more complete solutions could not be specified for a problem of given hardness. The shaded area of Figure 2 is intended to indicate this condition, termed the ‘Boundary of Determinism’ – because it derives from the determinism of the phenomena implicated in the general design problem. It suggests that whilst

very soft problems may only be solved by ‘implement and test’ practices, hard problems may be solved by ‘specify then implement’ practices.

Second, it is concluded from Figure 2 that the actual completeness with which solutions to a general design problem are specified, and the realiseable completeness, might be at variance. Accordingly, there may be different possible forms of the same discipline – each form addressing the same problem but with characteristically different practices. With reference to HF then, the contemporary discipline, a craft, will characteristically solve the HF general design problem mainly by ‘implementation and testing’. If solutions are specified at all, they will be incomplete before being implemented. Yet depending on the hardness of the HF general design problem, the realiseable completeness of specified solutions may be greater and a future form of the discipline, with practices more characteristically those of ‘specify then implement’, may be possible. For illustrative purposes, those different forms of the HF discipline are located speculatively in the figure.

Whilst the realiseable completeness with which a discipline may specify design solutions is governed by the hardness of the general design problem, the actual completeness with which it does so is governed by the formality of the knowledge it possesses. Consideration of the traditional engineering disciplines supports this assertion. Their modern-day practices are characteristically those of ‘specify then implement’, yet historically, their antecedents were ‘specify and implement’ practices, and earlier still, ‘implement and test’ practices. For example, the early steam engine preceded formal knowledge of thermodynamics and was constructed by ‘implementation and testing’. Yet designs of thermodynamic machines are now relatively completely specified before being implemented, a practice supported by formal knowledge. Such progress, then, has been marked by the increasing formality of knowledge. It is also in spite of the increasing complexity of new technology – an increase which might only have served to make the general design problem more soft, and the boundary of determinism more constraining. The dimension of the formality of a discipline’s knowledge, ranging from experience to principles, is shown in Figure 2 and completes the classification space for design disciplines.

It should be clear from Figure 2 that there exists no pre-ordained relationship between the formality of a discipline’s knowledge and the hardness of its general design problem. In particular, the practices of a (craft) discipline supported by experience – that is, by informal knowledge – may address a hard problem. But also, within the boundary of determinism, that discipline could acquire formal knowledge to support specification as a design practice.
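The classification space can be made concrete as a small data sketch. The numeric coordinates below are rough, invented renderings of the qualitative placements discussed above and in Figure 2, introduced purely for illustration.

disciplines = {
    # name: (problem hardness 0=soft..1=hard, specification completeness 0..1)
    "Graphic Design": (0.2, 0.1),             # 'implement and test'
    "Software Engineering": (0.6, 0.5),       # 'specify and implement'
    "Electrical Engineering": (0.9, 0.9),     # 'specify then implement'
    "HF (contemporary craft)": (0.5, 0.1),
    "HF (possible engineering form)": (0.5, 0.45),
    "HF (beyond determinism)": (0.5, 0.8),    # over the boundary: unrealiseable
}

def boundary_of_determinism(hardness: float) -> float:
    # Assumed monotone bound: the harder (more determinate) the problem,
    # the more completely solutions can be specified before implementation.
    return hardness  # a placeholder linear bound

for name, (h, c) in disciplines.items():
    feasible = c <= boundary_of_determinism(h)
    print(f"{name:32s} hardness={h:.2f} completeness={c:.2f} realiseable={feasible}")

The sketch makes the central point mechanical: for a problem of given hardness, placements of a discipline below the boundary are attainable, and a craft discipline may move upwards within that bound by formalising its knowledge.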

In Section 1.3, four deficiencies of the contemporary HF discipline were identified. The absence of formal discipline knowledge was proposed to account for these deficiencies. The present section has been concerned to examine the potential for HF to develop a more formal discipline knowledge. The potential would appear to be governed by the hardness of the HF general design problem, that is, by the determinism of the human behaviours which it implicates, at least with respect to any solution of that problem. And clearly, human behaviour is, in some respects and to some degree, deterministic. For example, drivers’ behaviour on the roads is determined, at least within the limits required by a particular design solution, by traffic system protocols. A training syllabus determines, within the limits required by a particular solution, the behaviour of the trainees – both in terms of learning strategies and the level of training required. Hence, formal HF knowledge is to some degree attainable. At the very least, it cannot be excluded that the model for that formal knowledge is the knowledge possessed by the established engineering disciplines.

Generally, the established engineering disciplines possess formal knowledge: a corpus of operationalised, tested, and generalised principles. Those principles are prescriptive, enabling the complete specification of design solutions before those designs are implemented (see Dowell and Long, 1988b). This theme of prescription in design is central to the thesis offered here.

Engineering principles can be substantive or methodological (see Checkland, 1981; Pirsig, 1974). Methodological principles prescribe the methods for solving a general design problem optimally. For example, methodological principles might prescribe the representations of designs specified at a general level of description and procedures for systematically decomposing those representations

until complete specification is possible at a level of description of immediate design implementation (Hubka, Andreason and Eder, 1988). Methodological principles would assure each lower level of specification as being a complete representation of an immediately higher level.

Substantive principles prescribe the features and properties of artefacts, or systems, that will constitute an optimal solution to a general design problem. As a simple example, a substantive principle deriving from Kirchhoff’s Laws might be one which would specify the physical structure of a network design (sources, resistances and their nodes, etc.) whose behaviour (e.g., distribution of current) would constitute an optimal solution to a design problem concerning an amplifier’s power supply.
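As a worked instance of the Kirchhoff example, the following sketch analyses the branch currents of a small, assumed network: a 10 V source feeding R1, which splits at a node into R2 and R3 to ground. The component values are invented for illustration only.

import numpy as np

V, R1, R2, R3 = 10.0, 100.0, 200.0, 300.0

# Unknowns: branch currents i1 (through R1), i2 (through R2), i3 (through R3).
# Kirchhoff's current law at the node:  i1 - i2 - i3 = 0
# Kirchhoff's voltage law, loop via R2: i1*R1 + i2*R2 = V
# Kirchhoff's voltage law, loop via R3: i1*R1 + i3*R3 = V
A = np.array([[1.0, -1.0, -1.0],
              [R1,   R2,   0.0],
              [R1,   0.0,  R3]])
b = np.array([0.0, V, V])

i1, i2, i3 = np.linalg.solve(A, b)
print(f"i1 = {i1*1000:.2f} mA, i2 = {i2*1000:.2f} mA, i3 = {i3*1000:.2f} mA")
# i1 = 45.45 mA, i2 = 27.27 mA, i3 = 18.18 mA; i1 = i2 + i3, as the current law requires.

The point of the illustration is the prescriptive direction of the knowledge: given the required behaviour (the distribution of currents), the laws determine the structure that will exhibit it, before anything is built.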

1.5. The Requirement for an Engineering Conception for Human Factors

The contemporary HF discipline does not possess either methodological or substantive engineering principles. The heuristics it possesses are either ‘rules of thumb’ derived from experience or guidelines derived from psychological theories and findings. Neither guidelines nor rules of thumb offer assurance of their efficacy in any given instance, particularly with regard to the effectiveness of a design. The methods and models of HF (as opposed to methodological and substantive principles) are similarly without such an assurance. Clearly, any evolution of HF as an engineering discipline in the manner proposed here has yet to begin. There is an immediate need, then, for a view of how it might begin, and how formulation of engineering principles might be precipitated.

van Gigch and Pipino (1986) have suggested the process by which scientific (as opposed to engineering) disciplines acquire formal knowledge. They characterise the activities of scientific disciplines at a number of levels, the most general being an epistemological enquiry concerning the nature and origin of discipline knowledge. From such an enquiry a paradigm may evolve. Although a paradigm may be considered to subsume all discipline activities (Long, 1987), it must, at the very least, subsume a coherent and complete definition of the concepts which in this case describe the general (scientific) problem of a scientific discipline. Those concepts, and their derivatives, are embodied in the explanatory and predictive theories of science and enable the formulation of research problems. For example, Newton’s Principia commences with an epistemological enquiry, and a paradigm in which the concept of inertia first occurs. The concept of inertia is embodied in scientific theories of mechanics, as for example, in Newton’s Second Law.

Engineering disciplines may be supposed to require an equivalent epistemological enquiry. However, rather than that enquiry producing a paradigm, we may construe its product as a conception. Such a conception is a unitary (and consensus) view of the general design problem of a discipline. Its power lies in the coherence and completeness of its definition of concepts which express that problem. Hence, it enables the formulation of engineering principles which embody and instantiate those concepts. A conception (like a paradigm) is always open to rejection and replacement.

HF currently does not possess a conception of its general design problem. Current views of the issue are ill-formed, fragmentary, or implicit (Shneiderman, 1980; Card, Moran and Newell, 1983; Norman and Draper, 1986). The lack of such a shared view is particularly apparent within the HF research literature, in which concepts are ambiguous and lacking in coherence; those associated with the ‘interface’ (e.g., ‘virtual objects’, ‘human performance’, ‘task semantics’, ‘user error’, etc.) are particular examples of this failure. It is inconceivable that a formulation of HF engineering principles might occur whilst there is no consensus understanding of the concepts which they would embody. Articulation of a conception must then be a pre-requisite for formulation of engineering principles for HF.

The origin of a conception for the HF discipline must be a conception for the HCI discipline itself, the superordinate discipline incorporating HF. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline presented in Part II.

In conclusion, Part I has presented the case for an engineering conception for HF. A proposal for such a conception follows in Part II. The status of the conception, however, should be emphasised. First, the conception at this point in time is speculative. Second, the conception continues to be developed in support of, and supported by, the research of the authors. Third, there is no validation in the conventional sense to be offered for the conception at this time. Validation of the conception for HF will come from its being able to describe the design problems of HF, and from the coherence of its concepts, that is, from the continuity of relations, and agreement, between concepts. Readers may assess these aspects of validity for themselves. Finally, the validity of the conception for HF will also rest in its being a consensus view held by the discipline as a whole, and this is currently not the case.

Part II. Conception for an Engineering Discipline of Human Factors

2.1. Conception of the Human Factors General Design Problem

2.2. Conception of Work and the User

2.3. Conception of the Interactive Worksystem and the User

2.4. Conception of Performance of the Interactive Worksystem and the User

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

The potential for HF to become an engineering discipline, and so better to respond to the problem of interactive systems design, was examined in Part I. The possibility of realising this potential through HF engineering principles was suggested – principles which might prescriptively support HF design expressed as ‘specify then implement’. It was concluded that a pre-requisite to the development of HF engineering principles, is a conception of the general design problem of HF, which was informally expressed as:

‘to design human interactions with computers for effective working’.

Part II proposes a conception for HF. It attempts to establish the set of related concepts which can express the general design problem of HF more formally. Such concepts would be those embodied in HF engineering principles. As indicated in Section 1.1, the conception for HF is supported by a conception for an engineering discipline of HCI earlier proposed by Dowell and Long (1988a). Space precludes re-iteration of the conception for HCI here, other than as required for the derivation of the conception for HF. Part II first asserts a more formal expression of the HF general design problem which an engineering discipline would address. Part II then continues by elaborating and illustrating the concepts and their relations embodied in that expression.

2.1. Conception of the Human Factors General Design Problem.

The conception for the (super-ordinate) engineering discipline of HCI asserts a fundamental distinction between behavioural systems which perform work, and a world in which work originates, is performed and has its consequences. Specifically conceptualised are interactive worksystems consisting of human and computer behaviours together performing work. It is work evidenced in a world of physical and informational objects disclosed as domains of application. The distinction between worksystems and domains of application is represented schematically in Figure 3.

[Figure 3. Schematic of the distinction between interactive worksystems and their domains of application.]

Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.

The concern of an engineering HCI discipline would be the design of interactive worksystems for performance. More precisely, its concern would be the design of behaviours constituting a worksystem {S} whose actual performance (PA) conformed with some desired performance (PD). And to design {S} would require the design of human behaviours {U} interacting with computer behaviours {C}. Hence, conception of the general design problem of an engineering discipline of HCI is expressed as:

Specify then implement {U} and {C}, such that

{U} interacting with {C} = {S}, as PA = PD

where PD = fn. {QD, KD}

QD expresses the desired quality of the products of work within the given domain of application,

KD expresses acceptable (i.e., desired) costs incurred by the worksystem, i.e., by both human and computer.
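
By way of illustration only, this expression lends itself to a schematic executable rendering. The following Python sketch is ours, not the authors’: the names Performance and conforms are assumed, quality is encoded as variance from product goals (so lower is better), and conformance of PA with PD is reduced to a simple comparison.

from dataclasses import dataclass

@dataclass
class Performance:
    # P = fn. {Q, K}: quality of the work performed and resource costs incurred
    quality: float  # Q: variance of actual transforms from product goals (lower is better)
    costs: float    # K: resource costs borne by the worksystem (human and computer)

def conforms(p_actual: Performance, p_desired: Performance) -> bool:
    # PA = PD read as: achieved quality meets the desired quality criterion QD
    # and incurred costs do not exceed the acceptable (desired) costs KD
    return (p_actual.quality <= p_desired.quality
            and p_actual.costs <= p_desired.costs)

p_desired = Performance(quality=0.05, costs=100.0)  # PD = fn. {QD, KD}
p_actual = Performance(quality=0.03, costs=80.0)    # PA of {U} interacting with {C}
print(conforms(p_actual, p_desired))                # True: this {S} conforms with PD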

The problem, when expressed as one of ‘specify then implement’ designs of interactive worksystems, is equivalent to the general design problems characteristic of other engineering disciplines (see Section 1.4.).

The interactive worksystem can be distinguished as two separate, but interacting, sub-systems: a system of human behaviours interacting with a system of computer behaviours. The human behaviours may be treated as a behavioural system in their own right, but one interacting with the system of computer behaviours to perform work. It follows that the general design problem of HCI may be decomposed with regard to its scope (with respect to the human and computer behavioural sub-systems), giving two related problems. Decomposition with regard to the human behaviours gives the general design problem of the HF1 discipline as:

Specify then implement {U} such that

{U} interacting with {C} = {S}, as PA = PD

The general design problem of HF then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (PD).

The following sections elaborate the conceptualisation of human behaviours (the user, or users) with regard to the work they perform, the interactive worksystem in which they are constituted, and performance.

2.2. Conception of Work and the User

The conception for HF identifies a world in which work originates, is performed and has its consequences. This section presents the concepts by which work and its relations with the user are expressed.

Objects and their attributes

Work occurs in a world consisting of objects and arises in the intersection of organisations and (computer) technology. Objects may be abstract as well as physical, and are characterised by their attributes. Abstract attributes of objects are attributes of information and knowledge. Physical attributes are attributes of energy and matter. Letters (i.e., correspondence) are objects: their abstract attributes support the communication of messages, etc.; their physical attributes support the visual/verbal representation of information via language.

Attributes and levels of complexity

The different attributes of an object may emerge at different levels within a hierarchy of levels of complexity (see Checkland, 1981). For example, characters and their configuration on a page are physical attributes of the object ‘a letter’ which emerge at one level of complexity; the message of the letter is an abstract attribute which emerges at a higher level of complexity.

Objects are described at different levels of description commensurate with their levels of complexity. However, at a high level of description, separate objects may no longer be differentiated. For example, the object ‘income tax return’ and the object ‘personal letter’ are both ‘correspondence’ objects at a higher level of description. Lower levels of description distinguish their respective attributes of content, intended correspondent etc. In this way, attributes of an object described at one level of description completely re-represent those described at a lower level.
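
As a purely illustrative rendering of these definitions, the sketch below (our own; the class and attribute names are assumed) models the ‘letter’ object with physical attributes emerging at a lower level of complexity and an abstract attribute at a higher level, a description at a given level re-representing all lower levels.

from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    level: int    # level of complexity at which the attribute emerges
    state: str    # current state of the attribute

@dataclass
class DomainObject:
    name: str
    attributes: list = field(default_factory=list)

    def described_at(self, level: int) -> list:
        # a description at one level re-represents those at lower levels
        return [a for a in self.attributes if a.level <= level]

letter = DomainObject("letter", [
    Attribute("characters", level=1, state="12pt serif"),        # physical
    Attribute("layout", level=1, state="block format"),          # physical
    Attribute("message", level=2, state="request for payment"),  # abstract
])
print([a.name for a in letter.described_at(2)])  # all three attributes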

Relations between attributes

Attributes of objects are related, and in two ways. First, attributes at different levels of complexity are related. As indicated earlier, those at one level are completely subsumed in those at a higher level. In particular, abstract attributes will occur at higher levels of complexity than physical attributes and will subsume those lower level physical attributes. For example, the abstract attributes of an object ‘message’ concerning the representation of its content by language subsume the lower level physical attributes, such as the font of the characters expressing the language. As an alternative example, an

1The General Design Problem of SE would be equivalent and be expressed as ‘Specify then implement {C} such that .. etc.

industrial process, such as a steel rolling process in a foundry, is an object whose abstract attributes will include the process’s efficiency. Efficiency subsumes physical attributes of the process – its power consumption, rate of output, dimensions of the output (the rolled steel), etc. – emerging at a lower level of complexity.

Second, attributes of objects are related within levels of complexity. There is a dependency between the attributes of an object emerging within the same level of complexity. For example, the attributes of the industrial process of power consumption and rate of output emerge at the same level and are inter-dependent.

Attribute states and affordance

At any point or event in the history of an object, each of its attributes is conceptualised as having a state. Further, those states may change. For example, the content and characters (attributes) of a letter (object) may change state: the content with respect to meaning and grammar, etc.; its characters with respect to size and font, etc. Objects exhibit an affordance for transformation, engendered by their attributes’ potential for state change (see Gibson, 1977). Affordance is generally pluralistic in the sense that there may be many, or even infinite, transformations of objects, according to the potential changes of state of their attributes.

Attributes’ relations are such that state changes of one attribute may also manifest state changes in related attributes, whether within the same level of complexity, or across different levels of complexity. For example, changing the rate of output of an industrial process (lower level attribute) will change both its power consumption (same level attribute) and its efficiency (higher level attribute).
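
The propagation of state changes through attribute relations might be caricatured as below. The dependency functions are invented solely for illustration; the paper offers no quantitative model of the steel rolling process.

# Toy model: changing the rate of output (a lower-level attribute) manifests
# state changes in a related same-level attribute (power consumption) and in
# a higher-level attribute subsuming both (efficiency). The functional forms
# are assumptions made purely for illustration.

def power_consumption(rate_of_output: float) -> float:
    return 5.0 + 0.8 * rate_of_output  # same-level dependency (assumed)

def efficiency(rate_of_output: float) -> float:
    return rate_of_output / power_consumption(rate_of_output)  # higher-level

for rate in (10.0, 20.0):
    print(rate, power_consumption(rate), round(efficiency(rate), 3))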

Organisations, domains (of application), and the requirement for attribute state changes

A domain of application may be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’). The object ‘book’ may be associated with the domain of typesetting (state changes of its layout attributes) and with the domain of authorship (state changes of its textual content). In principle, a domain may have any level of generality, for example, the writing of letters and the writing of a particular sort of letter.

Organisations are conceptualised as having domains as their operational province and of requiring the realisation of the affordance of objects. It is a requirement satisfied through work. Work is evidenced in the state changes of attributes by which an object is intentionally transformed: it produces transforms, that is, objects whose attributes have an intended state. For example, ‘completing a tax return’ and ‘writing to an acquaintance’, each have a ‘letter’ as their transform, where those letters are objects whose attributes (their content, format and status, for example) have an intended state. Further editing of those letters would produce additional state changes, and therein, new transforms.

Goals

Organisations express their requirement for the transformation of objects through specifying goals. A product goal specifies a required transform – a required realisation of the affordance of an object. In expressing the required transformation of an object, a product goal will generally suppose necessary state changes of many attributes. The requirement of each attribute state change can be expressed as a task goal, deriving from the product goal. So, for example, the product goal demanding transformation of a letter to make its message more courteous would be expressed by task goals possibly requiring state changes of semantic attributes of the propositional structure of the text, and of syntactic attributes of the grammatical structure. Hence, a product goal can be re-expressed as a task goal structure, a hierarchical structure expressing the relations between task goals, for example, their sequences.

In the case of the computer-controlled steel rolling process, the process is an object whose transformation is required by a foundry organisation and expressed by a product goal. For example, the product goal may specify the elimination of deviations of the process from a desired efficiency. As indicated earlier, efficiency will at least subsume the process’s attributes of power consumption, rate of output, dimensions of the output (the rolled steel), etc. As also indicated earlier, those attributes will be inter-dependent such that state changes of one will produce state changes in the others – for example, changes in rate of output will also change the power consumption and the efficiency of the process. In this way, the product goal (of correcting deviations from the desired efficiency) supposes the related task goals (of setting power consumption, rate of output, dimensions of the output etc). Hence, the product goal can be expressed as a task goal structure and task goals within it will be assigned to the operator monitoring the process.
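
A product goal re-expressed as a task goal structure might be rendered as a simple tree, as in the sketch below. The representation is assumed; the paper prescribes no notation.

from dataclasses import dataclass, field

@dataclass
class TaskGoal:
    # a required attribute state change, derived from a product goal
    attribute: str
    required_state: str
    subgoals: list = field(default_factory=list)

def flatten(goal: TaskGoal) -> list:
    # linearise the structure, e.g. to read off a sequence of task goals
    return [goal] + [g for s in goal.subgoals for g in flatten(s)]

# Product goal: eliminate deviations from the desired efficiency, supposing
# related task goals over the attributes which efficiency subsumes.
product_goal = TaskGoal("efficiency", "desired value", subgoals=[
    TaskGoal("rate of output", "set-point"),
    TaskGoal("power consumption", "set-point"),
    TaskGoal("output dimensions", "within tolerance"),
])
print([g.attribute for g in flatten(product_goal)])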

Quality

The transformation of an object demanded by a product goal will generally be of a multiplicity of attribute state changes – both within and across levels of complexity. Consequently, there may be alternative transforms which would satisfy a product goal – letters with different styles, for example – where those different transforms exhibit differing compromises between attribute state changes of the object. By the same measure, there may also be transforms which will be at variance with the product goal. The concept of quality (Q) describes the variance of an actual transform with that specified by a product goal. It enables all possible outcomes of work to be equated and evaluated.
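
Under an assumed numerical encoding of attribute states (the paper leaves the metric open), quality Q as the variance of an actual transform from the transform specified by a product goal might be sketched as:

# Q conceptualised as variance of an actual transform from the specified one.
# Attribute states are encoded as numbers purely for illustration.

def quality(specified: dict, actual: dict) -> float:
    # sum of squared deviations over the attributes the product goal names
    return sum((actual[k] - v) ** 2 for k, v in specified.items())

specified = {"courtesy": 1.0, "grammaticality": 1.0}  # product goal
letter_a = {"courtesy": 0.9, "grammaticality": 1.0}   # close to the goal
letter_b = {"courtesy": 0.4, "grammaticality": 0.8}   # at variance with it
print(quality(specified, letter_a) < quality(specified, letter_b))  # True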

Work and the user

Conception of the domain then, is of objects, characterised by their attributes, and exhibiting an affordance arising from the potential changes of state of those attributes. By specifying product goals, organisations express their requirement for transforms – objects with specific attribute states. Transforms are produced through work, which occurs only in the conjunction of objects affording transformation and systems capable of producing a transformation.

From product goals derives a structure of related task goals which can be assigned either to the human or to the computer (or both) within an associated worksystem. The task goals assigned to the human are those which motivate the human’s behaviours. The actual state changes (and therein transforms) which those behaviours produce may or may not be those specified by task and product goals, a difference expressed by the concept of quality.

Taken together, the concepts presented in this section support the HF conception’s expression of work as relating to the user. The following section presents the concepts expressing the interactive worksystem as relating to the user.

2.3. Conception of the Interactive Worksystem and the User.

The conception for HF identifies interactive worksystems consisting of human and computer behaviours together performing work. This section presents the concepts by which interactive worksystems and the user are expressed.

Interactive worksystems

Humans are able to conceptualise goals and their corresponding behaviours are said to be intentional (or purposeful). Computers, and machines more generally, are designed to achieve goals, and their corresponding behaviours are said to be intended (or purposive1).

1 Human behaviour is teleological, machine behaviour is teleonomic (Checkland, 1981).

An interactive worksystem (‘worksystem’) is a behavioural system distinguished by a boundary enclosing all human and computer behaviours whose purpose is to achieve and satisfy a common goal. For example, the behaviours of a secretary and wordprocessor whose purpose is to produce letters constitute a worksystem. Critically, it is only by identifying that common goal that the boundary of the worksystem can be established: entities, and especially humans, may exhibit a range of contiguous behaviours, and only by specifying the goals of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified.

Worksystems transform objects by producing state changes in the abstract and physical attributes of those objects (see Section 2.2). The secretary and wordprocessor may transform the object ‘correspondence’ by changing both the attributes of its meaning and the attributes of its layout. More generally, a worksystem may transform an object through state changes produced in related attributes. An operator monitoring a computer-controlled industrial process may change the efficiency of the process through changing its rate of output.

The behaviours of the human and computer are conceptualised as behavioural sub-systems of the worksystem – sub-systems which interact1. The human behavioural sub-system is here more appropriately termed the user. Behaviour may be loosely understood as ‘what the human does’, in contrast with ‘what is done’ (i.e. attribute state changes in a domain). More precisely the user is conceptualised as:

a system of distinct and related human behaviours, identifiable as the sequence of states of a person2 interacting with a computer to perform work, and corresponding with a purposeful (intentional) transformation of objects in a domain3 (see also Ashby, 1956).

Although possible at many levels, the user must at least be expressed at a level commensurate with the level of description of the transformation of objects in the domain. For example, a secretary interacting with an electronic mailing facility is a user whose behaviours include receiving and replying to messages. An operator interacting with a computer-controlled milling machine is a user whose behaviours include planning the tool path to produce a component of specified geometry and tolerance.

The user as a system of mental and physical human behaviours

The behaviours constituting a worksystem are physical as well as abstract. Abstract behaviours are generally the acquisition, storage, and transformation of information. They represent and process information at least concerning: domain objects and their attributes, attribute relations and attribute states, and the transformations required by goals. Physical behaviours are related to, and express, abstract behaviours.

Accordingly, the user is conceptualised as a system of both mental (abstract) and overt (physical) behaviours which extend a mutual influence – they are related. In particular, they are related within an assumed hierarchy of behaviour types (and their control) wherein mental behaviours generally determine, and are expressed by, overt behaviours. Mental behaviours may transform (abstract) domain objects represented in cognition, or express through overt behaviour plans for transforming domain objects.

1 The human behaviours and computer behaviours are separate systems ‘coupled’ to form a worksystem (see Ashby, 1956).

2Behaviours are conceptualised as being supported and enabled by co-extensive structures. The user, however, is a description of a behavioural system and does not describe the corresponding human structures (see later in Section 2.3.).

3This conception of human behaviour differs from that of behaviourist psychology which generally seeks correlations between observable inputs and outputs of a mental ‘blackbox’ without reference to any postulated artifacts of the mind or brain.

So, for example, the operator working in the control room of the foundry has the product goal of maintaining a desired condition of the computer-controlled steel rolling process. The operator attends to the computer (whose behaviours include the transmission of information about the process). Hence, the operator acquires a representation of the current condition of the process by collating the information displayed by the computer and assessing it by comparison with the condition specified by the product goal. The operator’s acquisition, collation and assessment are each distinct mental behaviours, conceptualised as representing and processing information. The operator reasons about the attribute state changes necessary to eliminate any discrepancy between current and desired conditions of the process, that is, the set of related changes which will produce the required transformation of the process. That decision is expressed in the set of instructions issued to the computer through overt behaviour – making keystrokes, for example.

The user is conceptualised as having cognitive, conative and affective aspects. The cognitive aspects of the user are those of their knowing, reasoning and remembering, etc; the conative aspects are those of their acting, trying and persevering, etc; and the affective aspects are those of their being patient, caring, and assured, etc. Both mental and overt human behaviours are conceptualised as having these three aspects.

Human-computer interaction

Although the human and computer behaviours may be treated as separable sub-systems of the worksystem, those sub-systems extend a “mutual influence”, or interaction whose configuration principally determines the worksystem (Ashby, 1956).

Interaction is conceptualised as:

the mutual influence of the user (i.e., the human behaviours) and the computer behaviours associated within an interactive worksystem

Hence, the user {U} and computer behaviours {C} constituting a worksystem {S}, were expressed in the general design problem of HF (Section 2.1) as:

{U} interacting with {C} = {S}

Interaction of the human and computer behaviours is the fundamental determinant of the worksystem, rather than their individual behaviours per se. For example, the behaviours of an operator interact with the behaviours of a computer-controlled milling machine. The operator’s behaviours influence the behaviours of the machine, perhaps through the tool path program; the behaviours of the machine, perhaps the run-out of its tool path, influence the selection behaviour of the operator. The configuration of their interaction – the inspection that the machine allows the operator, the tool path control that the operator allows the machine – determines the worksystem that the operator and machine behaviours constitute in their planning and execution of the machining work.

The assignment of task goals then, to either the human or the computer delimits the user and therein configures the interaction. For example, replacement of a mis-spelled word required in a document is a product goal which can be expressed as a task goal structure of necessary and related attribute state changes. In particular, the text field for the correctly spelled word demands an attribute state change in the text spacing of the document. Specifying that state change may be a task goal assigned to the user, as in interaction with the behaviours of early text editor designs, or it may be a task goal assigned to the computer, as in interaction with the ‘wrap-round’ behaviours of contemporary wordprocessor designs. The assignment of the task goal of specification configures the interaction of the human and computer behaviours in each case; it delimits the user.
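
The wrap-round example can be reduced to a toy assignment table, as below (our own; the names are assumed): assigning a task goal to the human or to the computer determines which behaviours constitute the user.

# Assignment of task goals delimits the user and configures the interaction.
# Two illustrative designs for the task goal 'specify text spacing' arising
# from the product goal of replacing a mis-spelled word in a document.

def user_task_goals(assignment: dict) -> set:
    # the task goals assigned to the human: those motivating the user's behaviours
    return {goal for goal, agent in assignment.items() if agent == "human"}

early_text_editor = {"replace word": "human", "specify text spacing": "human"}
wordprocessor = {"replace word": "human", "specify text spacing": "computer"}

print(user_task_goals(early_text_editor))  # the user includes spacing specification
print(user_task_goals(wordprocessor))      # 'wrap-round' assigns it to the computer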

On-line and off-line human behaviours

The user may include both on-line and off-line human behaviours: on-line behaviours are associated with the computer’s representation of the domain; off-line behaviours are associated with non-computer representations of the domain, or with the domain itself.

As an illustration of the distinction, consider the example of an interactive worksystem consisting of the behaviours of a secretary and a wordprocessor and required to produce a paper-based copy of a dictated letter stored on audio tape. The product goal of the worksystem here requires the transformation of the physical representation of the letter from one medium to another, that is, from tape to paper. From the product goal derive the task goals relating to required attribute state changes of the letter. Certain of those task goals will be assigned to the secretary. The secretary’s off-line behaviours include listening to, and assimilating, the dictated letter, so acquiring a representation of the domain directly. By contrast, the secretary’s on-line behaviours include specifying the representation by the computer of the transposed content of the letter in a desired visual/verbal format of stored physical symbols.

On-line and off-line human behaviours are a particular case of the ‘internal’ interactions between a human’s behaviours as, for example, when the secretary’s typing interacts with memorisations of successive segments of the dictated letter.

Human structures and the user

Conceptualisation of the user as a system of human behaviours needs to be extended to the structures supporting behaviour.

Whereas human behaviours may be loosely understood as ‘what the human does’, the structures supporting them can be understood as ‘how they are able to do what they do’ (see Marr, 1982; Wilden, 1980). There is a one-to-many mapping between a human’s structures and the behaviours they might support: the same structures may support many different behaviours.

In co-extensively enabling behaviours at each level, structures must exist at commensurate levels. The human structural architecture is both physical and mental, providing the capability for a human’s overt and mental behaviours. It provides a representation of domain information as symbols (physical and abstract) and concepts, and the processes available for the transformation of those representations. It provides an abstract structure for expressing information as mental behaviour. It provides a physical structure for expressing information as physical behaviour.

Physical human structure is neural, bio-mechanical and physiological. Mental structure consists of representational schemes and processes. Corresponding with the behaviours it supports and enables, human structure has cognitive, conative and affective aspects. The cognitive aspects of human structures include information and knowledge – that is, symbolic and conceptual representations – of the domain, of the computer and of the person themselves, and they include the ability to reason. The conative aspects of human structures motivate the implementation of behaviour and its perseverance in pursuing task goals. The affective aspects of human structures include the personality and temperament which respond to and support behaviour.

To illustrate the conceptualisation of mental structure, consider the example of the structure supporting an operator’s behaviours in the foundry control room. Physical structure supports perceiving the steel rolling process and executing corrective control actions to the process through the computer input devices. Mental structures support the acquisition, memorisation and transformation of information about the steel rolling process. The knowledge which the operator has of the process and of the computer supports the collation, assessment and reasoning about corrective control actions to be executed.

The limits of human structure determine the limits of the behaviours they might support. Such structural limits include those of: intellectual ability; knowledge of the domain and the computer; memory and attentional capacities; patience; perseverance; dexterity; and visual acuity, etc. The structural limits on behaviour may become particularly apparent when one part of the structure (a channel capacity, perhaps) is required to support concurrent behaviours, perhaps simultaneous visual attending and reasoning behaviours. The user, then, is ‘resource’ limited by the co-extensive human structure.

The behavioural limits of the human determined by structure are not only difficult to define with any kind of completeness, they will also be variable because that structure can change, and in a number of respects. A person may have self-determined changes in response to the domain – as expressed in learning phenomena, acquiring new knowledge of the domain, of the computer, and indeed of themselves, to better support behaviour. Also, human structure degrades with the expenditure of resources in behaviour, as evidenced in the phenomena of mental and physical fatigue. It may also change in response to motivating or de-motivating influences of the organisation which maintains the worksystem.

It must be emphasised that the structure supporting the user is independent of the structure supporting the computer behaviours. Neither structure can make any incursion into the other, and neither can directly support the behaviours of the other. (Indeed, this separability of structures is a pre-condition for expressing the worksystem as two interacting behavioural sub-systems.) Although the structures may change in response to each other, they are not, unlike the behaviours they support, interactive; they are not included within the worksystem. The combination of structures of both human and computer supporting their interacting behaviours is conceptualised as the user interface.

Resource costs of the user

Work performed by interactive worksystems always incurs resource costs. Given the separability of the human and the computer behaviours, certain resource costs are associated directly with the user and distinguished as structural human costs and behavioural human costs.

Structural human costs are the costs of the human structures co-extensive with the user. Such costs are incurred in developing and maintaining human skills and knowledge. More specifically, structural human costs are incurred in training and educating people, so developing in them the structures which will enable their behaviours necessary for effective working. Training and educating may augment or modify existing structures, provide the person with entirely novel structures, or perhaps even reduce existing structures. Structural human costs will be incurred in each case and will frequently be borne by the organisation. An example of structural human costs might be the costs of training a secretary in the particular style of layout required for an organisation’s correspondence with its clients, and in the operation of the computer by which that layout style can be created.

Structural human costs may be differentiated as cognitive, conative and affective structural costs of the user. Cognitive structural costs express the costs of developing the knowledge and reasoning abilities of people, and their ability for formulating and expressing novel plans in their overt behaviour – as necessary for effective working. Conative structural costs express the costs of developing the activity, stamina and persistence of people as necessary for effective working. Affective structural costs express the costs of developing in people their patience, care and assurance as necessary for effective working.

Behavioural human costs are the resource costs incurred by the user (i.e., by human behaviours) in recruiting human structures to perform work. They are both physical and mental resource costs. Physical behavioural costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and of attending to a screen display; they may be expressed without differentiation as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be expressed without differentiation as mental workload. Mental behavioural costs are ultimately manifest as physical behavioural costs.

When differentiated, mental and physical behavioural costs are conceptualised as the cognitive, conative and affective behavioural costs of the user. Cognitive behavioural costs relate to both the mental representing and processing of information, and the demands made on the individual’s extant knowledge, as well as the physical expression thereof in the formulation and expression of a novel plan. Conative behavioural costs relate to the repeated mental and physical actions and effort required by the formulation and expression of the novel plan. Affective behavioural costs relate to the emotional aspects of the mental and physical behaviours required in the formulation and expression of the novel plan. Behavioural human costs are evidenced in human fatigue, stress and frustration; they are costs borne directly by the individual.

2.4. Conception of Performance of the Interactive Worksystem and the User.

In asserting the general design problem of HF (Section 2.1.), it was reasoned that:

“Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs incurred by it. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.”

This statement followed from the distinction between interactive worksystems performing work, and the work they perform. Subsequent elaboration upon this distinction enables reconsideration of the concept of performance, and examination of its central importance within the conception for HF.

Because the factors which constitute this engineering concept of performance (i.e., the quality and costs of work) are determined by behaviour, a concordance is assumed between the behaviours of worksystems and their performance: behaviour determines performance (see Ashby, 1956; Rouse, 1980). The quality of work performed by interactive worksystems is conceptualised as the actual transformation of objects with regard to their transformation demanded by product goals. The costs of work are conceptualised as the resource costs incurred by the worksystem, and are separately attributed to the human and computer. Specifically, the resource costs incurred by the human are differentiated as: structural human costs – the costs of establishing and maintaining the structure supporting behaviour; and behavioural human costs – the costs of the behaviour recruiting structure to its own support. Structural and behavioural human costs were further differentiated as cognitive, conative and affective costs.

A desired performance of an interactive worksystem may be conceptualised. Such a desired performance might either be absolute, or relative, as in a comparative performance to be matched or improved upon. Accordingly, criteria expressing desired performance may either specify categorical gross resource costs and quality, or they may specify critical instances of those factors to be matched or improved upon1.

Discriminating the user’s performance within the performance of the interactive worksystem would require the separate assimilation of human resource costs and the achievement of the desired attribute state changes demanded by the task goals assigned to the human. Further assertions concerning the user arise from the conceptualisation of worksystem performance. First, the conception of performance is able to distinguish the quality of transforms from the effectiveness of the worksystems which produce them. This distinction is essential, as two worksystems might be capable of producing the same transform, yet if one were to incur a greater resource cost than the other, its effectiveness would be the lesser of the two (a point given a toy rendering following this enumeration).

Second, given the concordance of behaviour with performance, optimal human (and equally, computer) behaviours may be conceived as those which incur a minimum of resource costs in producing a given transform. Optimal human behaviour would minimise the resource costs incurred in producing a transform of given quality (Q). However, that optimality may only be categorically determined with regard to worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it (PD). To be more specific, it is not sufficient for human behaviours simply to be error-free. Although the elimination of errorful human behaviours may contribute to the best performance possible of a given worksystem, that performance may still be

1See Section 1.4. where the possibility for expressing, by an absolute value, the desired performance of a system or artifact is associated with the hardness of the design problem.

less than desired performance. Conversely, although human behaviours may be errorful, a worksystem may still support a desired performance.

Third, the common measures of human ‘performance’ – errors and time – are related in this conceptualisation of performance. Errors are behaviours which increase the resource costs incurred in producing a given transform, or which reduce the quality of the transform, or both. The duration of human behaviours may (very generally) be associated with increases in behavioural user costs.

Fourth, structural and behavioural human costs may be traded-off in performance. More sophisticated human structures supporting the user, that is, the knowledge and skills of experienced and trained people, will incur high (structural) costs to develop, but enable more efficient behaviours – and therein, reduced behavioural costs.

Fifth, resource costs incurred by the human and the computer may be traded-off in performance. A user can sustain a level of performance of the worksystem by optimising behaviours to compensate for the poor behaviours of the computer (and vice versa), i.e., behavioural costs of the user and computer are traded-off. This is of particular concern for HF as the ability of humans to adapt their behaviours to compensate for poor computer-based systems often obscures the low effectiveness of worksystems.
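
The first of these assertions can be given the toy rendering promised above (our own; the numerical encoding is assumed): of two worksystems producing transforms of equal quality, the one incurring greater resource costs is the less effective.

# Two worksystems capable of producing the same transform (equal quality Q)
# may nonetheless differ in effectiveness through the resource costs incurred.

def more_effective(a: tuple, b: tuple) -> str:
    # each worksystem: (name, quality_variance, resource_costs); lower is better
    name_a, q_a, k_a = a
    name_b, q_b, k_b = b
    if q_a == q_b:                              # same transform quality
        return name_a if k_a < k_b else name_b  # lower costs: more effective
    return name_a if q_a < q_b else name_b      # otherwise lower variance wins

ws1 = ("worksystem 1", 0.05, 120.0)
ws2 = ("worksystem 2", 0.05, 80.0)
print(more_effective(ws1, ws2))  # worksystem 2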

This completes the conception for HF. From the initial assertion of the general design problem of HF, the concepts that were invoked in its formal expression have subsequently been defined and elaborated, and their coherence established.

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

Part I of this paper examined the possibility of HF becoming an engineering discipline and specifically, of formulating HF engineering principles. Engineering principles, by definition prescriptive, were seen to offer the opportunity for a significantly more effective discipline, ameliorating the problems which currently beset HF – problems of poor integration, low efficiency, efficacy without guarantee, and slow development.

A conception for HF is a pre-requisite for the formulation of HF engineering principles. It is the concepts and their relations which express the HF general design problem and which would be embodied in HF engineering principles. The form of a conception for HF was proposed in Part II. Originating in a conception for an engineering discipline of HCI (Dowell and Long, 1988a), the conception for HF is postulated as appropriate for supporting the formulation of HF engineering principles.

The conception for HF is a broad view of the HF general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem. In addition, the conception takes a broad view of the research and development activities necessary to solve the general design problem and its instantiations, respectively. HF engineering research practices would seek solutions, in the form of (methodological and substantive) engineering principles, to the general design problem. HF engineering practices in systems development programmes would seek to apply those principles to solve instances of the general design problem, that is, to the design of specific users within specific interactive worksystems. Collaboration of HF and SE specialists and the integration of their practices is assumed.

Notwithstanding the comprehensive view of determinacy developed in Part I, the intention of specification associated with people might be unwelcome to some. Yet, although the requirement for design and specification of the user is being unequivocally proposed, techniques for implementing those specifications are likely to be more familiar than perhaps expected – and possibly more welcome. Such techniques might include selection tests, aptitude tests, training programmes, manuals and help facilities, or the design of the computer.

A selection test would assess the conformity of a candidate’s behaviours with a specification for the user. An aptitude test would assess the potential for a candidate’s behaviours to conform with a specification for the user. Selection and aptitude tests might assess candidates either directly or indirectly. A direct test would observe candidates’ behaviours in ‘hands on’ trial periods with the ‘real’ computer and domain, or with simulations of the computer and domain. An indirect test would examine the knowledge and skills (i.e., the structures) of candidates, and might be in the form of written examinations. A training programme would develop the knowledge and skills of a candidate as necessary for enabling their behaviours to conform with a specification for the user. Such programmes might take the form of either classroom tuition or ‘hands on’ learning. A manual or on-line help facility would augment the knowledge possessed by a human, enabling their behaviours to conform with a specification for the user. Finally, the design of the computer itself, through the interactions of its behaviours with the user, would enable the implementation of a specification for the user.

To conclude, discussion of the status of the conception for HF must be briefly extended. The contemporary HF discipline was characterised as a craft discipline. Although it may alternatively be claimed as an applied science discipline, such claims must still admit the predominantly craft nature of systems development practices (Long and Dowell, 1989). No instantiations of the HF engineering discipline implied in this paper are visible, and examples of supposed engineering practices may be readily associated with craft or applied science disciplines. There are those, however, who would claim the craft nature of the HF discipline to be dictated by the nature of the problem it addresses. They may maintain that the indeterminism and complexity of the problem of designing human systems (the softness of the problem) precludes the application of formal and prescriptive knowledge. This claim was rejected in Part I on the grounds that it mistakes the current absence of formal discipline knowledge for an essential reflection of the softness of its general design problem. The claim fails to appreciate that this absence may rather be symptomatic of the early stage of the discipline’s development. The alternative position taken by this paper is that the softness of the problem needs to be independently established. The general design problem of HF is, to some extent, hard – human behaviour is clearly deterministic to some useful degree, and certainly sufficiently deterministic for the design of certain interactive worksystems. It may accordingly be presumed that HF engineering principles can be formulated to support product quality within a systems development ethos of ‘design for performance’.

The extent to which HF engineering principles might be realisable in practice remains to be seen. It is not supposed that the development of effective systems will never require craft skills in some form, and engineering principles are not seen to be incompatible with craft knowledge, particularly with respect to their instantiation (Long and Dowell, 1989). At a minimum, engineering principles might be expected to augment the craft knowledge of HF professionals. Yet the great potential of HF engineering principles for the effectiveness of the discipline demands serious consideration. However, their development would only be by intention, and would be certain to demand a significant research effort. This paper is intended to contribute towards establishing the conception required for the formulation of HF engineering principles.

References

Ashby W. Ross, (1956), An Introduction to Cybernetics. London: Methuen.

Bornat R. and Thimbleby H., (1989), The Life and Times of ded, Text Display Editor. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Card, S. K., Moran, T., and Newell, A., (1983), The Psychology of Human Computer Interaction, New Jersey: Lawrence Erlbaum Associates.

Carey, T., (1989), Position Paper: The Basic HCI Course For Software Engineers. SIGCHI Bulletin, Vol. 20, no. 3.

Carroll J.M., and Campbell R. L., (1986), Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, Vol. 2, pp. 227-249.

Checkland P., (1981), Systems Thinking, Systems Practice. Chichester: John Wiley and Sons.

Cooley M.J.E., (1980), Architect or Bee? The Human/Technology Relationship. Slough: Langley Technical Services.

Didner R.S., (1988), A Value Added Approach to Systems Design. Human Factors Society Bulletin, May 1988.

Dowell J., and Long J. B., (1988a), Human-Computer Interaction Engineering. In N. Heaton and M . Sinclair (ed.s), Designing End-User Interfaces. A State of the Art Report. 15:8. Oxford: Pergamon Infotech.

Dowell J., and Long J. B., (1988b), A Framework for the Specification of Collaborative Research in Human Computer Interaction. In UK IT 88 Conference Publication 1988, pub. IEE and BCS.

Gibson J.J., (1977), The Theory of Affordances. In R.E. Shaw and J. Bransford (ed.s), Perceiving, Acting and Knowing. New Jersey: Erlbaum.

Gries D., (1981), The Science of Programming, New York: Springer Verlag.

Hubka V., Andreason M.M. and Eder W.E., (1988), Practical Studies in Systematic Design, London: Butterworths.

Long J.B., Hammond N., Barnard P. and Morton J., (1983), Introducing the Interactive Computer at Work: the Users’ Views. Behaviour and Information Technology, 2, pp. 39-106.

Long, J., (1987), Cognitive Ergonomics and Human Computer Interaction. In P. Warr (ed.), Psychology at Work. England: Penguin.

Long J.B., (1989), Cognitive Ergonomics and Human Computer Interaction: an Introduction. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Long J.B. and Dowell J., (1989), Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering. In Sutcliffe A. and Macaulay L., Proceedings of the Fifth Conference of the BCS HCI SG. Cambridge: Cambridge University Press.

Marr D., (1982), Vision. New York: W.H. Freeman and Co.

Morgan D.G., Shorter D.N. and Tainsh M., (1988), Systems Engineering. Improved Design and Construction of Complex IT systems. Available from IED, Kingsgate House, 66-74 Victoria Street, London, SW1.

Norman D.A. and Draper S.W. (eds), (1986), User Centred System Design. Hillsdale, New Jersey: Lawrence Erlbaum.

Pirsig R., (1974), Zen and the Art of Motorcycle Maintenance. London: Bodley Head.

Rouse W. B., (1980), Systems Engineering Models of Human Machine Interaction. New York: Elsevier North Holland.

Shneiderman B., (1980), Software Psychology: Human Factors in Computer and Information Systems. Cambridge, Mass.: Winthrop.

Thimbleby H., (1984), Generative User Engineering Principles for User Interface Design. In B. Shackel (ed.), Proceedings of the First IFIP conference on Human-Computer Interaction. Human-Computer Interaction – INTERACT’84. Amsterdam: Elsevier Science. Vol.2, pp. 102-107.

van Gigch J. P. and Pipino L.L., (1986), In Search of a Paradigm for the Discipline of Information Systems. Future Computing Systems, 1 (1), pp. 71-89.

Walsh P., Lim K.Y., Long J.B., and Carver M.K., (1988), Integrating Human Factors with System Development. In: N. Heaton and M. Sinclair (eds): Designing End-User Interfaces. Oxford: Pergamon Infotech.

Wilden A., (1980), System and Structure; Second Edition. London: Tavistock Publications.

This paper has greatly benefited from discussion with others and from their criticisms. We would like to thank our colleagues at the Ergonomics Unit, University College London and, in particular, Andy Whitefield, Andrew Life and Martin Colbert. We would also like to thank the editors of the special issue for their support and two anonymous referees for their helpful comments. Any remaining infelicities – of specification and implementation – are our own.
