Ergonomics Unit Research

Papers and other publications, produced by the Ergonomics Unit at UCL


Towards a Conception for an Engineering Discipline of Human Factors

 

John Dowell and John Long

Ergonomics Unit, University College London, 26 Bedford Way, London WC1H 0AP

Abstract  This paper concerns one possible response of Human Factors to the need for better user-interactions of computer-based systems. The paper is in two parts. Part I examines the potential for Human Factors to formulate engineering principles. A basic pre-requisite for realising that potential is a conception of the general design problem addressed by Human Factors. The problem is expressed informally as: ‘to design human interactions with computers for effective working’. A conception would provide the set of related concepts which both expressed the general design problem more formally, and which might be embodied in engineering principles. Part II of the paper proposes such a conception and illustrates its concepts. It is offered as an initial and speculative step towards a conception for an engineering discipline of Human Factors. In P. Barber and J. Laws (eds.), Special Issue on Cognitive Ergonomics, Ergonomics, 1989, vol. 32, no. 11, pp. 1513-1535.

Part I. Requirement for Human Factors as an Engineering Discipline of Human-Computer Interaction

1.1 Introduction;

1.2 Characterization of the human factors discipline;

1.3 State of the human factors art;

1.4 Human factors engineering;

1.5 The requirement for an engineering conception of human factors.

 

1.1 Introduction

Advances in computer technology continue to raise expectations for the effectiveness of its applications. No longer is it sufficient for computer-based systems simply ‘to work’, but rather, their contribution to the success of the organisations utilising them is now under scrutiny (Didner, 1988). Consequently, views of organisational effectiveness must be extended to take account of the (often unacceptable) demands made on people interacting with computers to perform work, and the needs of those people. Any technical support for such views must be similarly extended (Cooley, 1980).

With recognition of the importance of ‘human-computer interactions’ as a determinant of effectiveness (Long, Hammond, Barnard, and Morton, 1983), Cognitive Ergonomics is emerging as a new and specialist activity of Ergonomics or Human Factors (HF). Throughout this paper, HF is to be understood as a discipline which includes Cognitive Ergonomics, but only as it addresses human-computer interactions. This usage is contrasted with HF as a discipline which more generally addresses human-machine interactions. HF seeks to support the development of more effective computer-based systems. However, it has yet to prove itself in this respect, and moreover, the adequacy of the HF response to the need for better human-computer interactions is of concern. For it continues to be the case that interactions result from relatively ad hoc design activities to which may be attributed, at least in part, the frequent ineffectiveness of systems (Thimbleby, 1984).

This paper is concerned to develop one possible response of HF to the need for better human-computer interactions. It is in two parts. Part I examines the potential for HF to formulate HF engineering principles for supporting its better response. Pre-requisite to the realisation of that potential, it concludes, is a conception of the general design problem it addresses. Part II of the paper is a proposal for such a conception.

The structure of the paper is as follows. Part I first presents a characterisation of HF (Section 1.2) with regard to: the general design problem it addresses; its practices providing solutions to that problem; and its knowledge supporting those practices. The characterisation identifies the relations of HF with Software Engineering (SE) and with the super-ordinate discipline of Human-Computer Interaction (HCI). The characterisation supports both the assessment of contemporary HF and the arguments for the requirement of an engineering HF discipline.

Assessment of contemporary HF (Section 1.3.) concludes that its practices are predominantly those of a craft. Shortcomings of those practices are exposed which indict the absence of support from appropriate formal discipline knowledge. This absence prompts the question as to what might be the formal knowledge which HF could develop, and what might be the process of its formulation. By comparing the HF general design problem with other, better understood, general design problems, and by identifying the formal knowledge possessed by the corresponding disciplines, the potential for HF engineering principles is suggested (Section 1.4.).

However, a pre-requisite for the formulation of any engineering principle is a conception. A conception is a unitary (and consensus) view of a general design problem; its power lies in the coherence and completeness of its definition of the concepts which can express that problem. Engineering principles are articulated in terms of those concepts. Hence, the requirement for a conception for the HF discipline is concluded (Section 1.5.).

If HF is to be a discipline of the superordinate discipline of HCI, then the origin of a ‘conception for HF’ needs to be in a conception for the discipline of HCI itself. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline of HCI presented in Part II.

 

1.2. Characterisation of the Human Factors Discipline

HF seeks to support systems development through the systematic and reasoned design of human-computer interactions. As an endeavour, however, HF is still in its infancy, seeking to establish its identity and its proper contribution to systems development. For example, there is little consensus on how the role of HF in systems development is, or should be, configured with the role of SE (Walsh, Lim, Long, and Carver, 1988). A characterisation of the HF discipline is needed to clarify our understanding of both its current form and any conceivable future form. A framework supporting such a characterisation is summarised below (following Long and Dowell, 1989).

Most definitions of disciplines assume three primary characteristics: a general problem; practices, providing solutions to that problem; and knowledge, supporting those practices. This characterisation presupposes classes of general problem corresponding with types of discipline. For example, one class of general problem is that of the general design problem, and includes the design of artefacts (of bridges, for example) and the design of ‘states of the world’ (of public administration, for example). Engineering and craft disciplines address general design problems.

Further consideration also suggests that any general problem has the necessary property of a scope, delimiting the province of concern of the associated discipline. Hence disciplines may also be distinguished from each other; for example, the engineering disciplines of Electrical and Mechanical Engineering are distinguished by their respective scopes of electrical and mechanical artefacts. So, knowledge possessed by Electrical Engineering supports its practices solving the general design problem of designing electrical artefacts (for example, Kirchhoff’s Laws would support the analysis of branch currents for a given network design for an amplifier’s power supply).
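
The Kirchhoff’s Laws example can be made concrete. The sketch below is illustrative only (the network topology and component values are invented here, not taken from the paper): it states Kirchhoff’s current and voltage laws for a one-source, three-resistor network and solves the resulting linear equations for the branch currents.

```python
def gauss_solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Invented network: source V feeds R1 in series with the parallel pair R2 || R3.
V, R1, R2, R3 = 10.0, 2.0, 3.0, 6.0

# Unknown branch currents: i1 (through R1), i2 (through R2), i3 (through R3).
# KCL at the junction:       i1 - i2 - i3 = 0
# KVL around loop V-R1-R2:   R1*i1 + R2*i2 = V
# KVL around loop R2-R3:     R2*i2 - R3*i3 = 0
a = [[1.0, -1.0, -1.0],
     [R1,   R2,   0.0],
     [0.0,  R2,  -R3]]
b = [0.0, V, 0.0]

i1, i2, i3 = gauss_solve(a, b)
print(i1, i2, i3)  # 2.5 A, then 5/3 A and 5/6 A in the parallel branches
```

The analysis direction shown here (from a given structure to its behaviour) is the one the text attributes to Kirchhoff’s Laws as discipline knowledge.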

Although rudimentary, this framework can be used to provide a characterisation of the HF discipline. It also allows a distinction to be made between the disciplines of HF and SE. First, however, it is required that the super-ordinate discipline of HCI be postulated. Thus, HCI is a discipline addressing a general design problem expressed informally as: ‘to design human-computer interactions for effective working’. The scope of the HCI general design problem includes: humans, both as individuals, as groups, and as social organisations; computers, both as programmable machines, stand-alone and networked, and as functionally embedded devices within machines; and work, both with regard to individuals and the organisations in which it occurs (Long, 1989). For example, the general design problem of HCI includes the problems of designing the effective use of navigation systems by aircrew on flight-decks, and the effective use of wordprocessors by secretaries in offices.

The general design problem of HCI can be decomposed into two general design problems, each having a particular scope. Whilst subsumed within the general design problem of HCI, these two general design problems are expressed informally as: ‘to design human interactions with computers for effective working’; and ‘to design computer interactions with humans for effective working’.

Each general design problem can be associated with a different discipline of the superordinate discipline of HCI. HF addresses the former, SE addresses the latter. With different – though complementary – aims, both disciplines address the design of human-computer interactions for effective working. The HF discipline concerns the physical and mental aspects of the human interacting with the computer. The SE discipline concerns the physical and software aspects of the computer interacting with the human.

The practices of HF and SE are the activities providing solutions to their respective general design problems and are supported by their respective discipline knowledge. Figure 1 shows schematically this characterisation of HF as a sub-discipline of HCI (following Long and Dowell, 1989). The following section employs the characterisation to evaluate contemporary HF.

 

1.3. State of the Human Factors Art

It would be difficult to reject the claim that the contemporary HF discipline has the character of a craft (at times even of a technocratic art). Its practices can justifiably be described as a highly refined form of design by ‘trial and error’ (Long and Dowell, 1989). Characteristic of a craft, the execution and success of its practices in systems development depends principally on the expertise, guided intuition and accumulated experience which the practitioner brings to bear on the design problem.

It is also claimed that HF will always be a craft: that ultimately only the mind itself has the capability for reasoning about mental states, and for solving the under-specified and complex problem of designing user-interactions (see Carey, 1989); that only the designer’s mind can usefully infer the motivations underlying purposeful human behaviour, or make subjective assessments of the elegance or aesthetics of a computer interface (Bornat and Thimbleby, 1989).

The dogma of HF as necessarily a craft, whose knowledge may only be the accrued experience of its practitioners, is nowhere presented rationally. Notions of the indeterminism, or the unpredictability, of human behaviour are raised simply as a gesture. Since the dogma has support, it needs to be challenged to establish the extent to which it is correct, or to which it compels a misguided and counter-productive doctrine (see also Carroll and Campbell, 1986).

Current HF practices exhibit four primary deficiencies which prompt the need to identify alternative forms for HF. First, HF practices are in general poorly integrated into systems development practices, nullifying the influence they might otherwise exert. Developers make implicit and explicit decisions with implications for user-interactions throughout the development process, typically without involving HF specialists. At an early stage of design, HF may offer only advice – advice which may all too easily be ignored and so not implemented. Its main contribution to the development of user-interactive systems is the evaluations it provides. Yet these are too often relegated to the closing stages of development programmes, where they can only suggest minor enhancements to completed designs because of the prohibitive costs of even modest re-implementations (Walsh et al., 1988).

Second, HF practices have a suspect efficacy. Their contribution to improving product quality in any instance remains highly variable. Because there is no guarantee that experience of one development programme is appropriate or complete in its recruitment to another, re-application of that experience cannot be assured of repeated success (Long and Dowell, 1989).

Third, HF practices are inefficient. Each development of a system requires the solving of new problems by implementation then testing. There is no formal structure within which experience accumulated in the successful development of previous systems can be recruited to support solutions to the new problems, except through the memory and intuitions of the designer. These may not be shared by others, except indirectly (for example, through the formulation of heuristics), and so experience may be lost and may have to be re-acquired (Long and Dowell, 1989).

Guidance from the science of psychology may be direct – through the designer’s familiarity with psychological theory and practice – or indirect, by means of guidelines derived from psychological findings. In both cases, the guidance can offer only advice which must be implemented then tested to assess its effectiveness. Since the general scientific problem is the explanation and prediction of phenomena, and not the design of artefacts, the guidance cannot be directly embodied in design specifications which offer a guarantee with respect to the effectiveness of the implemented design. It is not being claimed here that the application of psychology, directly or indirectly, cannot contribute to better practice or to better designs, only that a practice supported in such a manner remains a craft, because its practice is by implementation then test, that is, by trial and error (see also Long and Dowell, 1989).

Fourth, there are insufficient signs of systematic and intentional progress which will alleviate the three deficiencies of HF practices cited above. The lack of progress is particularly noticeable when HF is compared with the similarly nascent discipline of SE (Gries, 1981; Morgan, Shorter and Tainsh, 1988).

These four deficiencies are endemic to the craft nature of contemporary HF practice. They indict the tacit HF discipline knowledge consisting of accumulated experience embodied in procedures, even where that experience has been influenced by guidance offered by the science of psychology (as discussed above). Because the knowledge is tacit (i.e., implicit or informal), it cannot be operationalised, and hence the role of HF in systems development cannot be planned as would be necessary for the proper integration of the knowledge. Without being operationalised, its knowledge cannot be tested, and so the efficacy of the practices it supports cannot be guaranteed. Without being tested, its knowledge cannot be generalised for new applications, and so the practices it can support will be inefficient. Without being operationalised, testable, and general, the knowledge cannot be developed in any structured way, as required for supporting the systematic and intentional progress of the HF discipline.

It would be incorrect to assume the current absence of formality of HF knowledge to be a necessary response to the indeterminism of human behaviour. Both tacit discipline knowledge and ‘trial and error’ practices may simply be symptomatic of the early stage of development of the discipline. The extent to which human behaviour is deterministic for the purposes of designing interactive computer-based systems needs to be independently established. Only then might it be known if HF discipline knowledge could be formal. Section 1.4. considers what form that knowledge might take, and Section 1.5. considers what might be the process of its formulation.

 

1.4. Human Factors Engineering Principles

HF has been viewed earlier (Section 1.2.) as comparable to other disciplines which address general design problems: for example, Civil Engineering and Health Administration. The nature of the formal knowledge of a future HF discipline might, then, be suggested by examining such disciplines. The general design problems of different disciplines, however, must first be related to their characteristic practices, in order to relate the knowledge supporting those practices. The establishment of this relationship follows.

The ‘design’ disciplines may be ranged according to the ‘hardness’ or ‘softness’ of their respective general design problems. ‘Hard’ and ‘soft’ may have various meanings in this context. For example, hard design problems may be understood as those which include criteria for their ‘optimal’ solution (Checkland, 1981). In contrast, soft design problems are those which do not include such criteria: any solution is assessed as ‘better or worse’ relative to other solutions. Alternatively, the hardness of a problem may be distinguished by its level of description, or the formality of the knowledge available for its specification (Carroll and Campbell, 1986). Here, however, hard and soft problems will be generally distinguished by their determinism for the purposes of design, that is, by the need for design solutions to be determinate. Implicated in this distinction between problems are: the proliferation of variables expressed in a problem and their relations; the changes of those variables and their relations, both with regard to their values and their number; and, more generally, complexity, where it includes factors other than those identified. The variables implicated in the HF general design problem are principally those of human behaviours and structures.

A discipline’s practices construct solutions to its general design problem. Consideration of disciplines indicates much variation in their use of specification as a practice in constructing solutions. (Such was the history of many disciplines: the origin of modern-day Production Engineering, for example, was a nineteenth-century set of craft practices and tacit knowledge.) This variation, however, appears not to be dependent on variations in the hardness of the general design problems. Rather, disciplines appear to differ in the completeness with which they specify solutions to their respective general design problems before implementation occurs. At one extreme, some disciplines specify solutions completely before implementation: their practices may be described as ‘specify then implement’ (an example might be Electrical Engineering). At the other extreme, disciplines appear not to specify their solutions at all before implementing them: their practices may be described as ‘implement and test’ (an example might be Graphic Design). Other disciplines, such as SE, appear characteristically to specify solutions partially before implementing them: their practices may be described as ‘specify and implement’. ‘Specify then implement’ and ‘implement and test’, therefore, would appear to represent the extremes of a dimension by which disciplines may be distinguished by their practices. It is a dimension of the completeness with which they specify design solutions.

Taken together, the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines such as Electrical Engineering and Graphic Design. The space is shown in Figure 2, including for illustrative purposes, the speculative location of SE.

Two conclusions are prompted by Figure 2. First, a general relation may be apparent between the hardness of a general design problem and the realisable completeness with which its solutions might be specified. In particular, a boundary condition is likely to be present beyond which more complete solutions could not be specified for a problem of given hardness. The shaded area of Figure 2 is intended to indicate this condition, termed the ‘Boundary of Determinism’ – because it derives from the determinism of the phenomena implicated in the general design problem. It suggests that whilst very soft problems may only be solved by ‘implement and test’ practices, hard problems may be solved by ‘specify then implement’ practices.

Second, it is concluded from Figure 2 that the actual completeness with which solutions to a general design problem are specified, and the realisable completeness, might be at variance. Accordingly, there may be different possible forms of the same discipline – each form addressing the same problem, but with characteristically different practices. With reference to HF then, the contemporary discipline, a craft, will characteristically solve the HF general design problem mainly by ‘implementation and testing’. If solutions are specified at all, they will be incomplete before being implemented. Yet depending on the hardness of the HF general design problem, the realisable completeness of specified solutions may be greater, and a future form of the discipline, with practices more characteristically those of ‘specify then implement’, may be possible. For illustrative purposes, these different forms of the HF discipline are located speculatively in the figure.
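
The variance between actual and realisable completeness can be sketched as a toy model of the classification space. All coordinates and the boundary rule below are invented for illustration; the text itself commits to no numbers, and the discipline placements merely echo the examples given above.

```python
# Illustrative encoding of the classification space of Section 1.4.
# Coordinates on [0, 1] are invented: hardness of a discipline's general
# design problem, and the actual completeness with which its practices
# specify solutions before implementation.

def realisable_completeness(hardness):
    """Stand-in for the 'Boundary of Determinism': the completeness of
    specification cannot exceed what the problem's determinism permits."""
    return hardness

disciplines = {
    # name: (problem_hardness, actual_specification_completeness)
    "Electrical Engineering": (0.9, 0.9),  # 'specify then implement'
    "Software Engineering":   (0.6, 0.5),  # 'specify and implement'
    "Graphic Design":         (0.2, 0.1),  # 'implement and test'
    "HF (craft, today)":      (0.7, 0.1),  # hard problem, informal practice
}

# Headroom: how much more completely a discipline could specify solutions
# than it actually does, within the boundary of determinism.
headroom = {
    name: realisable_completeness(hardness) - actual
    for name, (hardness, actual) in disciplines.items()
}
for name, slack in headroom.items():
    print(f"{name}: headroom for more complete specification = {slack:+.1f}")
```

On these invented figures, the craft form of HF shows large headroom: its actual practice is far inside the boundary its (assumed) problem hardness would permit, which is the paper’s argument for a possible engineering form of the discipline.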

Whilst the realisable completeness with which a discipline may specify design solutions is governed by the hardness of the general design problem, the actual completeness with which it does so is governed by the formality of the knowledge it possesses. Consideration of the traditional engineering disciplines supports this assertion. Their modern-day practices are characteristically those of ‘specify then implement’, yet historically, their antecedents were ‘specify and implement’ practices, and earlier still, ‘implement and test’ practices. For example, the early steam engine preceded formal knowledge of thermodynamics and was constructed by ‘implementation and testing’. Yet designs of thermodynamic machines are now relatively completely specified before being implemented, a practice supported by formal knowledge. Such progress, then, has been marked by the increasing formality of knowledge. It has occurred in spite of the increasing complexity of new technology – an increase which might only have served to make the general design problem softer, and the boundary of determinism more constraining. The dimension of the formality of a discipline’s knowledge – ranging from experience to principles – is shown in Figure 2 and completes the classification space for design disciplines.

It should be clear from Figure 2 that there exists no pre-ordained relationship between the formality of a discipline’s knowledge and the hardness of its general design problem. In particular, the practices of a (craft) discipline supported by experience – that is, by informal knowledge – may address a hard problem. But also, within the boundary of determinism, that discipline could acquire formal knowledge to support specification as a design practice.

In Section 1.3, four deficiencies of the contemporary HF discipline were identified. The absence of formal discipline knowledge was proposed to account for these deficiencies. The present section has been concerned to examine the potential for HF to develop a more formal discipline knowledge. The potential would appear to be governed by the hardness of the HF general design problem, that is, by the determinism of the human behaviours which it implicates, at least with respect to any solution of that problem. And clearly, human behaviour is, in some respects and to some degree, deterministic. For example, drivers’ behaviour on the roads is determined, at least within the limits required by a particular design solution, by traffic system protocols. A training syllabus determines, within the limits required by a particular solution, the behaviour of the trainees – both in terms of learning strategies and the level of training required. Hence, formal HF knowledge is to some degree attainable. At the very least, it cannot be excluded that the model for that formal knowledge is the knowledge possessed by the established engineering disciplines.

Generally, the established engineering disciplines possess formal knowledge: a corpus of operationalised, tested, and generalised principles. Those principles are prescriptive, enabling the complete specification of design solutions before those designs are implemented (see Dowell and Long, 1988b). This theme of prescription in design is central to the thesis offered here.

Engineering principles can be substantive or methodological (see Checkland, 1981; Pirsig, 1974). Methodological Principles prescribe the methods for solving a general design problem optimally. For example, methodological principles might prescribe the representations of designs specified at a general level of description and procedures for systematically decomposing those representations until complete specification is possible at a level of description of immediate design implementation (Hubka, Andreason and Eder, 1988). Methodological principles would assure each lower level of specification as being a complete representation of an immediately higher level.

Substantive Principles prescribe the features and properties of artefacts, or systems, that will constitute an optimal solution to a general design problem. As a simple example, a substantive principle deriving from Kirchhoff’s Laws might be one which would specify the physical structure of a network design (sources, resistances and their nodes, etc.) whose behaviour (e.g., distribution of current) would constitute an optimal solution to a design problem concerning an amplifier’s power supply.
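
The prescriptive direction of a substantive principle can be illustrated by inverting the earlier analysis example: instead of deriving behaviour from a given structure, the sketch below derives the structure (resistor values) that would realise a desired behaviour (a target node voltage and branch currents). The topology and all numerical values are invented for illustration; this is a toy stand-in for a substantive principle, not one taken from the paper.

```python
# Design direction: from desired behaviour to prescribed structure, using
# Ohm's and Kirchhoff's Laws. Invented topology: source feeds R1 in series
# with the parallel pair R2 || R3.

def prescribe_resistors(v_supply, v_node, i2, i3):
    """Return (R1, R2, R3) realising the desired node voltage and branch currents."""
    i1 = i2 + i3                   # KCL fixes the series current
    r1 = (v_supply - v_node) / i1  # KVL across the series element
    r2 = v_node / i2               # Ohm's Law in each parallel branch
    r3 = v_node / i3
    return r1, r2, r3

# Desired behaviour (invented): 5 V at the junction, 1 A and 0.5 A branches.
r1, r2, r3 = prescribe_resistors(v_supply=10.0, v_node=5.0, i2=1.0, i3=0.5)
print(r1, r2, r3)  # about 3.33, 5.0 and 10.0 ohms
```

The contrast with the earlier analysis sketch is the point: analysis supports ‘implement and test’, whereas a prescriptive (substantive) principle supports ‘specify then implement’.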

 

1.5. The Requirement for an Engineering Conception for Human Factors

The contemporary HF discipline does not possess either methodological or substantive engineering principles. The heuristics it possesses are either ‘rules of thumb’ derived from experience or guidelines derived from psychological theories and findings. Neither guidelines nor rules of thumb offer assurance of their efficacy in any given instance, and particularly with regard to the effectiveness of a design. The methods and models of HF (as opposed to methodological and substantive principles) are similarly without such an assurance. Clearly, any evolution of HF as an engineering discipline in the manner proposed here has yet to begin. There is an immediate need then, for a view of how it might begin, and how formulation of engineering principles might be precipitated.

van Gigch and Pipino (1986) have suggested the process by which scientific (as opposed to engineering) disciplines acquire formal knowledge. They characterise the activities of scientific disciplines at a number of levels, the most general being an epistemological enquiry concerning the nature and origin of discipline knowledge. From such an enquiry a paradigm may evolve. Although a paradigm may be considered to subsume all discipline activities (Long, 1987), it must, at the very least, subsume a coherent and complete definition of the concepts which in this case describe the General (Scientific) Problem of a scientific discipline. Those concepts, and their derivatives, are embodied in the explanatory and predictive theories of science and enable the formulation of research problems. For example, Newton’s Principia commences with an epistemological enquiry, and a paradigm in which the concept of inertia first occurs. The concept of inertia is embodied in scientific theories of mechanics, as for example, in Newton’s Second Law.

Engineering disciplines may be supposed to require an equivalent epistemological enquiry. However, rather than that enquiry producing a paradigm, we may construe its product as a conception. Such a conception is a unitary (and consensus) view of the general design problem of a discipline. Its power lies in the coherence and completeness of its definition of concepts which express that problem. Hence, it enables the formulation of engineering principles which embody and instantiate those concepts. A conception (like a paradigm) is always open to rejection and replacement.

HF currently does not possess a conception of its general design problem. Current views of the issue are ill-formed, fragmentary, or implicit (Shneiderman, 1980; Card, Moran and Newell, 1983; Norman and Draper, 1986). The lack of such a shared view is particularly apparent within the HF research literature, in which concepts are ambiguous and lacking in coherence; those associated with the ‘interface’ (e.g., ‘virtual objects’, ‘human performance’, ‘task semantics’, ‘user error’, etc.) are particular examples of this failure. It is inconceivable that a formulation of HF engineering principles might occur whilst there is no consensus understanding of the concepts which they would embody. Articulation of a conception must then be a pre-requisite for the formulation of engineering principles for HF.

The origin of a conception for the HF discipline must be a conception for the HCI discipline itself, the superordinate discipline incorporating HF. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline presented in Part II.

In conclusion, Part I has presented the case for an engineering conception for HF. A proposal for such a conception follows in Part II. The status of the conception, however, should be emphasised. First, the conception at this point in time is speculative. Second, the conception continues to be developed in support of, and supported by, the research of the authors. Third, there is no validation in the conventional sense to be offered for the conception at this time. Validation of the conception for HF will come from its being able to describe the design problems of HF, and from the coherence of its concepts, that is, from the continuity of relations, and agreement, between concepts. Readers may assess these aspects of validity for themselves. Finally, the validity of the conception for HF will also rest in its being a consensus view held by the discipline as a whole and this is currently not the case.

Part II. Conception for an Engineering Discipline of Human Factors 

2.1 Conception of the human factors general design problem;

2.2 Conception of work and user; 2.2.1 Objects and their attributes; 2.2.2 Attributes and levels of complexity; 2.2.3 Relations between attributes; 2.2.4 Attribute states and affordance; 2.2.5 Organisations, domains (of application); 2.2.6 Goals; 2.2.7 Quality; 2.2.8 Work and the user, and the requirement for attribute state changes;

2.3 Conception of the interactive worksystem and the user; 2.3.1 Interactive worksystems; 2.3.2 The user as a system of mental and physical human behaviours; 2.3.3 Human-computer interaction; 2.3.4 On-line and off-line behaviours; 2.3.5 Human structures and the user; 2.3.6 Resource costs and the user;

2.4 Conception of performance of the interactive worksystem and the user;

2.5 Conclusions and the prospect for Human Factors engineering principles

The potential for HF to become an engineering discipline, and so better to respond to the problem of interactive systems design, was examined in Part I. The possibility of realising this potential through HF engineering principles was suggested – principles which might prescriptively support HF design expressed as ‘specify then implement’. It was concluded that a pre-requisite to the development of HF engineering principles is a conception of the general design problem of HF, which was informally expressed as: ‘to design human interactions with computers for effective working’.

Part II proposes a conception for HF. It attempts to establish the set of related concepts which can express the general design problem of HF more formally. Such concepts would be those embodied in HF engineering principles. As indicated in Section 1.1, the conception for HF is supported by a conception for an engineering discipline of HCI earlier proposed by Dowell and Long (1988a). Space precludes re-iteration of the conception for HCI here, other than as required for the derivation of the conception for HF. Part II first asserts a more formal expression of the HF general design problem which an engineering discipline would address. It then elaborates and illustrates the concepts and their relations embodied in that expression.

2.1. Conception of the Human Factors General Design Problem

The conception for the (super-ordinate) engineering discipline of HCI asserts a fundamental distinction between behavioural systems which perform work, and a world in which work originates, is performed and has its consequences. Specifically conceptualised are interactive worksystems consisting of human and computer behaviours together performing work. It is work evidenced in a world of physical and informational objects disclosed as domains of application. The distinction between worksystems and domains of application is represented schematically in Figure 3.

Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.

The concern of an engineering HCI discipline would be the design of interactive worksystems for performance. More precisely, its concern would be the design of behaviours constituting a worksystem {S} whose actual performance (Pa) conformed with some desired performance (Pd). And to design {S} would require the design of human behaviours {U} interacting with computer behaviours {C}. Hence, conception of the general design problem of an engineering discipline of HCI is expressed as:

Specify then implement {U} and {C}, such that {U} interacting with {C} = {S}, where Pa conforms with Pd, and Pd = fn. {Qd, Kd}

Qd expresses the desired quality of the products of work within the given domain of application; Kd expresses acceptable (i.e., desired) costs incurred by the worksystem, i.e., by both human and computer.
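The relation between desired and actual performance might be sketched in present-day code, purely as an illustration; all names and numerical values below are assumptions of the sketch, not part of the conception:

```python
from dataclasses import dataclass

@dataclass
class Performance:
    quality: float  # Q: variance of the actual transform from the product goal (0 = none)
    costs: float    # K: resource costs incurred by the worksystem (human and computer)

def conforms(actual: Performance, desired: Performance) -> bool:
    """Pa conforms with Pd when quality variance and resource costs
    are no worse than those desired."""
    return (actual.quality <= desired.quality
            and actual.costs <= desired.costs)

# Desired performance Pd = fn. {Qd, Kd}
pd = Performance(quality=0.1, costs=100.0)
# Actual performance Pa of a candidate worksystem {U} interacting with {C}
pa = Performance(quality=0.05, costs=80.0)
print(conforms(pa, pd))  # True: this worksystem meets the desired performance
```

The sketch treats Pd simply as a pair of bounds; the conception itself leaves the function of Qd and Kd unspecified.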

The problem, when expressed as one of ‘specify then implement’ designs of interactive worksystems, is equivalent to the general design problems characteristic of other engineering disciplines (see Section 1.4).

The interactive worksystem can be distinguished as two separate, but interacting sub-systems, that is, a system of human behaviours interacting with a system of computer behaviours. The human behaviours may be treated as a behavioural system in their own right, but one interacting with the system of computer behaviours to perform work. It follows that the general design problem of HCI may be decomposed with regard to its scope (with respect to the human and computer behavioural sub-systems), giving two related problems. Decomposition with regard to the human behaviours gives the general design problem of the HF discipline as: Specify then implement {U}, such that {U} interacting with {C} = {S}, where Pa conforms with Pd.

The general design problem of HF then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (Pd).

The following sections elaborate the conceptualisation of human behaviours (the user, or users) with regard to the work they perform, the interactive worksystem in which they are constituted, and performance.

 

2.2. Conception of Work and the User

The conception for HF identifies a world in which work originates, is performed and has its consequences. This section presents the concepts by which work and its relations with the user are expressed.

2.2.1 Objects and their attributes

Work occurs in a world consisting of objects and arises in the intersection of organisations and (computer) technology. Objects may be both abstract as well as physical, and are characterised by their attributes. Abstract attributes of objects are attributes of information and knowledge. Physical attributes are attributes of energy and matter. Letters (i.e., correspondence) are objects; their abstract attributes support the communication of messages etc; their physical attributes support the visual/verbal representation of information via language.

2.2.2 Attributes and levels of complexity

The different attributes of an object may emerge at different levels within a hierarchy of levels of complexity (see Checkland, 1981). For example, characters and their configuration on a page are physical attributes of the object ‘a letter’ which emerge at one level of complexity; the message of the letter is an abstract attribute which emerges at a higher level of complexity.

Objects are described at different levels of description commensurate with their levels of complexity. However, at a high level of description, separate objects may no longer be differentiated. For example, the object ‘income tax return’ and the object ‘personal letter’ are both ‘correspondence’ objects at a higher level of description. Lower levels of description distinguish their respective attributes of content, intended correspondent etc. In this way, attributes of an object described at one level of description completely re-represent those described at a lower level.

2.2.3 Relations between attributes

Attributes of objects are related, and in two ways. First, attributes at different levels of complexity are related. As indicated earlier, those at one level are completely subsumed in those at a higher level. In particular, abstract attributes will occur at higher levels of complexity than physical attributes and will subsume those lower level physical attributes. For example, the abstract attributes of an object ‘message’ concerning the representation of its content by language subsume the lower level physical attributes, such as the font of the characters expressing the language. As an alternative example, an industrial process, such as a steel rolling process in a foundry, is an object whose abstract attributes will include the process’s efficiency. Efficiency subsumes physical attributes of the process – its power consumption, rate of output, dimensions of the output (the rolled steel), etc. – emerging at a lower level of complexity.

Second, attributes of objects are related within levels of complexity. There is a dependency between the attributes of an object emerging within the same level of complexity. For example, the attributes of the industrial process of power consumption and rate of output emerge at the same level and are inter-dependent.

2.2.4 Attribute states and affordance

At any point or event in the history of an object, each of its attributes is conceptualised as having a state. Further, those states may change. For example, the content and characters (attributes) of a letter (object) may change state: the content with respect to meaning and grammar etc; its characters with respect to size and font etc. Objects exhibit an affordance for transformation, engendered by their attributes’ potential for state change (see Gibson, 1977). Affordance is generally pluralistic in the sense that there may be many, or even infinite, transformations of objects, according to the potential changes of state of their attributes.

Attributes’ relations are such that state changes of one attribute may also manifest state changes in related attributes, whether within the same level of complexity, or across different levels of complexity. For example, changing the rate of output of an industrial process (lower level attribute) will change both its power consumption (same level attribute) and its efficiency (higher level attribute).
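The propagation of state changes through related attributes might be sketched as follows. The class name and the numerical relations between rate of output, power consumption and efficiency are invented for the illustration; they are not claims about any actual process:

```python
# Hypothetical sketch: attributes of a steel rolling process, related
# within and across levels of complexity.
class RollingProcess:
    def __init__(self, rate_of_output: float):
        self.rate_of_output = rate_of_output  # lower-level physical attribute

    @property
    def power_consumption(self) -> float:
        # same-level attribute, inter-dependent with rate of output
        # (an assumed linear relation, for illustration only)
        return 50.0 + 2.0 * self.rate_of_output

    @property
    def efficiency(self) -> float:
        # higher-level abstract attribute, subsuming the physical attributes
        return self.rate_of_output / self.power_consumption

process = RollingProcess(rate_of_output=25.0)
before = process.efficiency
process.rate_of_output = 40.0   # a state change of one attribute...
after = process.efficiency      # ...manifests state changes in related attributes
print(before != after)  # True
```

Expressing the dependent attributes as derived properties mirrors the conception's claim that a change at one level manifests changes at the same and higher levels.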

2.2.5 Organisations, domains (of application), and the requirement for attribute state changes

A domain of application may be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’). The object ‘book’ may be associated with the domain of typesetting (state changes of its layout attributes) and with the domain of authorship (state changes of its textual content). In principle, a domain may have any level of generality, for example, the writing of letters and the writing of a particular sort of letter.

Organisations are conceptualised as having domains as their operational province and of requiring the realisation of the affordance of objects. It is a requirement satisfied through work. Work is evidenced in the state changes of attributes by which an object is intentionally transformed: it produces transforms, that is, objects whose attributes have an intended state. For example, ‘completing a tax return’ and ‘writing to an acquaintance’, each have a ‘letter’ as their transform, where those letters are objects whose attributes (their content, format and status, for example) have an intended state. Further editing of those letters would produce additional state changes, and therein, new transforms.

2.2.6 Goals

Organisations express their requirement for the transformation of objects through specifying goals. A product goal specifies a required transform – a required realisation of the affordance of an object. In expressing the required transformation of an object, a product goal will generally suppose necessary state changes of many attributes. The requirement of each attribute state change can be expressed as a task goal, deriving from the product goal. So for example, the product goal demanding transformation of a letter making its message more courteous, would be expressed by task goals possibly requiring state changes of semantic attributes of the propositional structure of the text, and of syntactic attributes of the grammatical structure. Hence, a product goal can be re-expressed as a task goal structure, a hierarchical structure expressing the relations between task goals, for example, their sequences.
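The re-expression of a product goal as a hierarchical task goal structure might be sketched as a simple tree; the goal descriptions are taken from the letter example above, but the data structure itself is an assumption of the sketch:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a product goal re-expressed as a task goal
# structure: a hierarchy whose leaves are required attribute state changes.
@dataclass
class TaskGoal:
    description: str
    subgoals: list["TaskGoal"] = field(default_factory=list)

product_goal = TaskGoal(
    "make the letter's message more courteous",
    subgoals=[
        TaskGoal("change semantic attributes of the propositional structure"),
        TaskGoal("change syntactic attributes of the grammatical structure"),
    ],
)

def leaf_goals(goal: TaskGoal) -> list[str]:
    """Flatten the structure into the task goals it demands, in sequence."""
    if not goal.subgoals:
        return [goal.description]
    return [d for sub in goal.subgoals for d in leaf_goals(sub)]

print(len(leaf_goals(product_goal)))  # 2
```

The ordering of the flattened list stands in for the sequencing relations which the conception says a task goal structure expresses.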

In the case of the computer-controlled steel rolling process, the process is an object whose transformation is required by a foundry organisation and expressed by a product goal. For example, the product goal may specify the elimination of deviations of the process from a desired efficiency. As indicated earlier, efficiency will at least subsume the process’s attributes of power consumption, rate of output, dimensions of the output (the rolled steel), etc. As also indicated earlier, those attributes will be inter-dependent such that state changes of one will produce state changes in the others – for example, changes in rate of output will also change the power consumption and the efficiency of the process. In this way, the product goal (of correcting deviations from the desired efficiency) supposes the related task goals (of setting power consumption, rate of output, dimensions of the output etc). Hence, the product goal can be expressed as a task goal structure and task goals within it will be assigned to the operator monitoring the process.

2.2.7 Quality

The transformation of an object demanded by a product goal will generally be of a multiplicity of attribute state changes – both within and across levels of complexity. Consequently, there may be alternative transforms which would satisfy a product goal – letters with different styles, for example – where those different transforms exhibit differing compromises between attribute state changes of the object. By the same measure, there may also be transforms which will be at variance with the product goal. The concept of quality (Q) describes the variance of an actual transform with that specified by a product goal. It enables all possible outcomes of work to be equated and evaluated.
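One minimal way of operationalising quality (Q) is as the proportion of attributes whose actual state is at variance with the intended state; this measure, and the attribute names, are assumptions of the sketch rather than part of the conception:

```python
# Hypothetical sketch: quality (Q) as the variance of an actual transform
# from the transform specified by a product goal.
def quality_variance(intended: dict, actual: dict) -> float:
    differing = sum(1 for attr, state in intended.items()
                    if actual.get(attr) != state)
    return differing / len(intended)

intended    = {"content": "courteous", "format": "house style", "status": "signed"}
transform_a = {"content": "courteous", "format": "house style", "status": "signed"}
transform_b = {"content": "courteous", "format": "draft", "status": "unsigned"}

print(quality_variance(intended, transform_a))  # 0.0: no variance
print(quality_variance(intended, transform_b))  # about 0.67: at variance
```

A common scale of this kind is what allows all possible outcomes of work to be equated and evaluated, as the conception requires.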

2.2.8 Work and the user

  Conception of the domain then, is of objects, characterised by their attributes, and exhibiting an affordance arising from the potential changes of state of those attributes. By specifying product goals, organisations express their requirement for transforms – objects with specific attribute states. Transforms are produced through work, which occurs only in the conjunction of objects affording transformation and systems capable of producing a transformation.

From product goals derives a structure of related task goals which can be assigned either to the human or to the computer (or both) within an associated worksystem. The task goals assigned to the human are those which motivate the human’s behaviours. The actual state changes (and therein transforms) which those behaviours produce may or may not be those specified by task and product goals, a difference expressed by the concept of quality.

Taken together, the concepts presented in this section support the HF conception’s expression of work as relating to the user. The following section presents the concepts expressing the interactive worksystem as relating to the user.

 

2.3. Conception of the Interactive Worksystem and the User.

The conception for HF identifies interactive worksystems consisting of human and computer behaviours together performing work. This section presents the concepts by which interactive worksystems and the user are expressed.

2.3.1 Interactive worksystems

Humans are able to conceptualise goals and their corresponding behaviours are said to be intentional (or purposeful). Computers, and machines more generally, are designed to achieve goals, and their corresponding behaviours are said to be intended (or purposive). An interactive worksystem (‘worksystem’) is a behavioural system distinguished by a boundary enclosing all human and computer behaviours whose purpose is to achieve and satisfy a common goal. For example, the behaviours of a secretary and wordprocessor whose purpose is to produce letters constitute a worksystem. Critically, it is only by identifying that common goal that the boundary of the worksystem can be established: entities, and humans especially, may exhibit a range of contiguous behaviours, and only by specifying the goals of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified.

Worksystems transform objects by producing state changes in the abstract and physical attributes of those objects (see Section 2.2). The secretary and wordprocessor may transform the object ‘correspondence’ by changing both the attributes of its meaning and the attributes of its layout. More generally, a worksystem may transform an object through state changes produced in related attributes. An operator monitoring a computer-controlled industrial process may change the efficiency of the process through changing its rate of output.

The behaviours of the human and computer are conceptualised as behavioural sub-systems of the worksystem – sub-systems which interact. The human behavioural sub-system is here more appropriately termed the user. Behaviour may be loosely understood as ‘what the human does’, in contrast with ‘what is done’ (i.e., attribute state changes in a domain). More precisely, the user is conceptualised as:

a system of distinct and related human behaviours, identifiable as the sequence of states of a person interacting with a computer to perform work, and corresponding with a purposeful (intentional) transformation of objects in a domain (see also Ashby, 1956).

Although possible at many levels, the user must at least be expressed at a level commensurate with the level of description of the transformation of objects in the domain. For example, a secretary interacting with an electronic mailing facility is a user whose behaviours include receiving and replying to messages. An operator interacting with a computer-controlled milling machine is a user whose behaviours include planning the tool path to produce a component of specified geometry and tolerance.

2.3.2 The user as a system of mental and physical behaviours

The behaviours constituting a worksystem are both physical and abstract. Abstract behaviours are generally the acquisition, storage, and transformation of information. They represent and process information at least concerning: domain objects and their attributes, attribute relations and attribute states, and the transformations required by goals. Physical behaviours are related to, and express, abstract behaviours.

Accordingly, the user is conceptualised as a system of both mental (abstract) and overt (physical) behaviours which extend a mutual influence – they are related. In particular, they are related within an assumed hierarchy of behaviour types (and their control) wherein mental behaviours generally determine, and are expressed by, overt behaviours. Mental behaviours may transform (abstract) domain objects represented in cognition, or express through overt behaviour plans for transforming domain objects.

So for example, the operator working in the control room of the foundry has the product goal of maintaining a desired condition of the computer-controlled steel rolling process. The operator attends to the computer (whose behaviours include the transmission of information about the process). Hence, the operator acquires a representation of the current condition of the process by collating the information displayed by the computer and assessing it by comparison with the condition specified by the product goal. The operator’s acquisition, collation and assessment are each distinct mental behaviours, conceptualised as representing and processing information. The operator reasons about the attribute state changes necessary to eliminate any discrepancy between current and desired conditions of the process, that is, the set of related changes which will produce the required transformation of the process. That decision is expressed in the set of instructions issued to the computer through overt behaviour – making keystrokes, for example.

The user is conceptualised as having cognitive, conative and affective aspects. The cognitive aspects of the user are those of their knowing, reasoning and remembering, etc; the conative aspects are those of their acting, trying and persevering, etc; and the affective aspects are those of their being patient, caring, and assured, etc. Both mental and overt human behaviours are conceptualised as having these three aspects.

2.3.3 Human-computer interaction

Although the human and computer behaviours may be treated as separable sub-systems of the worksystem, those sub-systems extend a “mutual influence”, or interaction whose configuration principally determines the worksystem (Ashby, 1956).

Interaction is conceptualised as: the mutual influence of the user (i.e., the human behaviours) and the computer behaviours associated within an interactive worksystem.

Hence, the user {U} and computer behaviours {C} constituting a worksystem {S}, were expressed in the general design problem of HF (Section 2.1) as: {U} interacting with {C} = {S}

Interaction of the human and computer behaviours is the fundamental determinant of the worksystem, rather than their individual behaviours per se. For example, the behaviours of an operator interact with the behaviours of a computer-controlled milling machine. The operator’s behaviours influence the behaviours of the machine, perhaps in the tool path program; the behaviours of the machine, perhaps the run-out of its tool path, influence the selection behaviour of the operator. The configuration of their interaction – the inspection that the machine allows the operator, the tool path control that the operator allows the machine – determines the worksystem that the operator and machine behaviours constitute in their planning and execution of the machining work.

The assignment of task goals then, to either the human or the computer delimits the user and therein configures the interaction. For example, replacement of a mis-spelled word required in a document is a product goal which can be expressed as a task goal structure of necessary and related attribute state changes. In particular, the text field for the correctly spelled word demands an attribute state change in the text spacing of the document. Specifying that state change may be a task goal assigned to the user, as in interaction with the behaviours of early text editor designs, or it may be a task goal assigned to the computer, as in interaction with the ‘wrap-round’ behaviours of contemporary wordprocessor designs. The assignment of the task goal of specification configures the interaction of the human and computer behaviours in each case; it delimits the user.
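The delimiting of the user by the assignment of task goals might be sketched as follows; the assignments and goal names are illustrative assumptions drawn from the wordprocessor example, not definitions from the conception:

```python
# Hypothetical sketch: the same task goals under two assignments.
# Assigning the 'specify text spacing' task goal to the computer (as with
# 'wrap-round' behaviours) rather than to the human delimits the user
# differently, and so configures a different interaction.
early_editor = {"human": {"replace mis-spelled word", "specify text spacing"},
                "computer": set()}
wordprocessor = {"human": {"replace mis-spelled word"},
                 "computer": {"specify text spacing"}}

def user_scope(assignment: dict) -> set:
    """The task goals assigned to the human, i.e. those motivating the user."""
    return assignment["human"]

print(user_scope(early_editor) != user_scope(wordprocessor))  # True
```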

2.3.4 On-line and off-line behaviours

The user may include both on-line and off-line human behaviours: on-line behaviours are associated with the computer’s representation of the domain; off-line behaviours are associated with non-computer representations of the domain, or the domain itself.

As an illustration of the distinction, consider the example of an interactive worksystem consisting of behaviours of a secretary and a wordprocessor and required to produce a paper-based copy of a dictated letter stored on audio tape. The product goal of the worksystem here requires the transformation of the physical representation of the letter from one medium to another, that is, from tape to paper. From the product goal derive the task goals relating to required attribute state changes of the letter. Certain of those task goals will be assigned to the secretary. The secretary’s off-line behaviours include listening to, and assimilating, the dictated letter, so acquiring a representation of the domain directly. By contrast, the secretary’s on-line behaviours include specifying the representation by the computer of the transposed content of the letter in a desired visual/verbal format of stored physical symbols.

On-line and off-line human behaviours are a particular case of the ‘internal’ interactions between a human’s behaviours as, for example, when the secretary’s typing interacts with memorisations of successive segments of the dictated letter.

2.3.5 Human structures and the user

  Conceptualisation of the user as a system of human behaviours needs to be extended to the structures supporting behaviour.

Whereas human behaviours may be loosely understood as ‘what the human does’, the structures supporting them can be understood as ‘how they are able to do what they do’ (see Marr, 1982; Wilden, 1980). There is a one-to-many mapping between a human’s structures and the behaviours they might support: the structures may support many different behaviours.

In co-extensively enabling behaviours at each level, structures must exist at commensurate levels. The human structural architecture is both physical and mental, providing the capability for a human’s overt and mental behaviours. It provides a representation of domain information as symbols (physical and abstract) and concepts, and the processes available for the transformation of those representations. It provides an abstract structure for expressing information as mental behaviour. It provides a physical structure for expressing information as physical behaviour.

Physical human structure is neural, bio-mechanical and physiological. Mental structure consists of representational schemes and processes. Corresponding with the behaviours it supports and enables, human structure has cognitive, conative and affective aspects. The cognitive aspects of human structures include information and knowledge – that is, symbolic and conceptual representations – of the domain, of the computer and of the person themselves, and they include the ability to reason. The conative aspects of human structures motivate the implementation of behaviour and its perseverance in pursuing task goals. The affective aspects of human structures include the personality and temperament which respond to and support behaviour.

To illustrate the conceptualisation of mental structure, consider the example of structure supporting an operator’s behaviours in the foundry control room. Physical structure supports perception of the steel rolling process and executing corrective control actions to the process through the computer input devices. Mental structures support the acquisition, memorisation and transformation of information about the steel rolling process. The knowledge which the operator has of the process and of the computer supports the collation, assessment and reasoning about corrective control actions to be executed.

The limits of human structure determine the limits of the behaviours it might support. Such structural limits include those of: intellectual ability; knowledge of the domain and the computer; memory and attentional capacities; patience; perseverance; dexterity; and visual acuity, etc. The structural limits on behaviour may become particularly apparent when one part of the structure (a channel capacity, perhaps) is required to support concurrent behaviours, perhaps simultaneous visual attending and reasoning behaviours. The user then, is ‘resource’ limited by the co-extensive human structure.

The behavioural limits of the human determined by structure are not only difficult to define with any kind of completeness, they will also be variable because that structure can change, and in a number of respects. A person may have self-determined changes in response to the domain – as expressed in learning phenomena, acquiring new knowledge of the domain, of the computer, and indeed of themselves, to better support behaviour. Also, human structure degrades with the expenditure of resources in behaviour, as evidenced in the phenomena of mental and physical fatigue. It may also change in response to motivating or de-motivating influences of the organisation which maintains the worksystem.

It must be emphasised that the structure supporting the user is independent of the structure supporting the computer behaviours. Neither structure can make any incursion into the other, and neither can directly support the behaviours of the other. (Indeed this separability of structures is a pre-condition for expressing the worksystem as two interacting behavioural sub-systems.) Although the structures may change in response to each other, they are not, unlike the behaviours they support, interactive; they are not included within the worksystem. The combination of structures of both human and computer supporting their interacting behaviours is conceptualised as the user interface.

2.3.6 Resource costs of the user

Work performed by interactive worksystems always incurs resource costs. Given the separability of the human and the computer behaviours, certain resource costs are associated directly with the user and distinguished as structural human costs and behavioural human costs.

Structural human costs are the costs of the human structures co-extensive with the user. Such costs are incurred in developing and maintaining human skills and knowledge. More specifically, structural human costs are incurred in training and educating people, so developing in them the structures which will enable their behaviours necessary for effective working. Training and educating may augment or modify existing structures, provide the person with entirely novel structures, or perhaps even reduce existing structures. Structural human costs will be incurred in each case and will frequently be borne by the organisation. An example of structural human costs might be the costs of training a secretary in the particular style of layout required for an organisation’s correspondence with its clients, and in the operation of the computer by which that layout style can be created.

Structural human costs may be differentiated as cognitive, conative and affective structural costs of the user. Cognitive structural costs express the costs of developing the knowledge and reasoning abilities of people and their ability for formulating and expressing novel plans in their overt behaviour – as necessary for effective working. Conative structural costs express the costs of developing the activity, stamina and persistence of people as necessary for effective working. Affective structural costs express the costs of developing in people their patience, care and assurance as necessary for effective working.

Behavioural human costs are the resource costs incurred by the user (i.e., by human behaviours) in recruiting human structures to perform work. They are both physical and mental resource costs. Physical behavioural costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and of attending to a screen display; they may be expressed without differentiation as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be expressed without differentiation as mental workload. Mental behavioural costs are ultimately manifest as physical behavioural costs.

When differentiated, mental and physical behavioural costs are conceptualised as the cognitive, conative and affective behavioural costs of the user. Cognitive behavioural costs relate to both the mental representing and processing of information, and the demands made on the individual’s extant knowledge, as well as the physical expression thereof in the formulation and expression of a novel plan. Conative behavioural costs relate to the repeated mental and physical actions and effort required by the formulation and expression of the novel plan. Affective behavioural costs relate to the emotional aspects of the mental and physical behaviours required in the formulation and expression of the novel plan. Behavioural human costs are evidenced in human fatigue, stress and frustration; they are costs borne directly by the individual.

 

2.4. Conception of Performance of the Interactive Worksystem and the User.

In asserting the general design problem of HF (Section 2.1.), it was reasoned that:

“Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.”

This statement followed from the distinction between interactive worksystems performing work, and the work they perform. Subsequent elaboration upon this distinction enables reconsideration of the concept of performance, and examination of its central importance within the conception for HF.

Because the factors which constitute this engineering concept of performance (i.e., the quality and costs of work) are determined by behaviour, a concordance is assumed between the behaviours of worksystems and their performance: behaviour determines performance (see Ashby, 1956; Rouse, 1980). The quality of work performed by interactive worksystems is conceptualised as the actual transformation of objects with regard to their transformation demanded by product goals. The costs of work are conceptualised as the resource costs incurred by the worksystem, and are separately attributed to the human and computer. Specifically, the resource costs incurred by the human are differentiated as: structural human costs – the costs of establishing and maintaining the structure supporting behaviour; and behavioural human costs – the costs of the behaviour recruiting structure to its own support. Structural and behavioural human costs were further differentiated as cognitive, conative and affective costs.

A desired performance of an interactive worksystem may be conceptualised. Such a desired performance might either be absolute, or relative as in a comparative performance to be matched or improved upon. Accordingly, criteria expressing desired performance may either specify categorical gross resource costs and quality, or they may specify critical instances of those factors to be matched or improved upon.

Discriminating the user’s performance within the performance of the interactive worksystem would require the separate assimilation of human resource costs and the achievement of the desired attribute state changes demanded by the task goals assigned to the user. Further assertions concerning the user arise from the conceptualisation of worksystem performance. First, the conception of performance is able to distinguish the quality of a transform from the effectiveness of the worksystem which produces it. This distinction is essential: two worksystems might be capable of producing the same transform, yet if one were to incur greater resource costs than the other, its effectiveness would be the lesser of the two.

Second, given the concordance of behaviour with performance, optimal human (and equally, computer) behaviours may be conceived as those which incur a minimum of resource costs in producing a given transform. Optimal human behaviour would minimise the resource costs incurred in producing a transform of given quality (Q). However, that optimality may only be categorically determined with regard to worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it (Pd). To be more specific, it is not sufficient for human behaviours simply to be error-free. Although the elimination of errorful human behaviours may contribute to the best performance possible of a given worksystem, that performance may still be less than desired performance. Conversely, although human behaviours may be errorful, a worksystem may still support a desired performance.

Third, the common measures of human ‘performance’ – errors and time – are related in this conceptualisation of performance. Errors are behaviours which increase resource costs incurred in producing a given transform, or which reduce the quality of transform, or both. The duration of human behaviours may (very generally) be associated with increases in behavioural user costs.

Fourth, structural and behavioural human costs may be traded-off in performance. More sophisticated human structures supporting the user, that is, the knowledge and skills of experienced and trained people, will incur high (structural) costs to develop, but enable more efficient behaviours – and therein, reduced behavioural costs.

Fifth, resource costs incurred by the human and the computer may be traded-off in performance. A user can sustain a level of performance of the worksystem by optimising behaviours to compensate for the poor behaviours of the computer (and vice versa), i.e., behavioural costs of the user and computer are traded-off. This is of particular concern for HF as the ability of humans to adapt their behaviours to compensate for poor computer-based systems often obscures the low effectiveness of worksystems.
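The trade-off of user and computer costs can be illustrated with a small worked example. The numbers below are invented for illustration; only the comparison between the two worksystems matters.

```python
def worksystem_costs(user_costs: float, computer_costs: float) -> float:
    """Total resource costs incurred by a worksystem in producing a transform."""
    return user_costs + computer_costs

# Worksystems A and B produce the same transform (equal quality of work).
# In A, well-designed computer behaviours keep the user's behavioural costs low.
a = worksystem_costs(user_costs=2.0, computer_costs=2.0)

# In B, the user adapts to poor computer behaviours, incurring higher
# behavioural costs to sustain the same quality of work.
b = worksystem_costs(user_costs=5.0, computer_costs=2.0)

# Same transform, greater costs: B is the less effective worksystem,
# although the user's adaptation would obscure this if costs went unassessed.
assert b > a
```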

This completes the conception for HF. From the initial assertion of the general design problem of HF, the concepts that were invoked in its formal expression have subsequently been defined and elaborated, and their coherence established.

 

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

Part I of this paper examined the possibility of HF becoming an engineering discipline and specifically, of formulating HF engineering principles. Engineering principles, by definition prescriptive, were seen to offer the opportunity for a significantly more effective discipline, ameliorating the problems which currently beset HF – problems of poor integration, low efficiency, efficacy without guarantee, and slow development.

A conception for HF is a pre-requisite for the formulation of HF engineering principles. It is the concepts and their relations which express the HF general design problem and which would be embodied in HF engineering principles. The form of a conception for HF was proposed in Part II. Originating in a conception for an engineering discipline of HCI (Dowell and Long, 1988a), the conception for HF is postulated as appropriate for supporting the formulation of HF engineering principles.

The conception for HF is a broad view of the HF general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem. In addition, the conception takes the broad view on the research and development activities necessary to solve the general design problem and its instantiations, respectively. HF engineering research practices would seek solutions, in the form of (methodological and substantive) engineering principles, to the general design problem. HF engineering practices in systems development programmes would seek to apply those principles to solve instances of the general design problem, that is, to design specific users within specific interactive worksystems. Collaboration of HF and SE specialists and the integration of their practices is assumed.

Notwithstanding the comprehensive view of determinacy developed in Part I, the intention of specifying people might be unwelcome to some. Yet, although the requirement for design and specification of the user is being unequivocally proposed, techniques for implementing those specifications are likely to be more familiar than perhaps expected – and possibly more welcome. Such techniques might include selection tests, aptitude tests, training programmes, manuals and help facilities, or the design of the computer.

A selection test would assess the conformity of a candidate’s behaviours with a specification for the user. An aptitude test would assess the potential for a candidate’s behaviours to conform with a specification for the user. Selection and aptitude tests might assess candidates either directly or indirectly. A direct test would observe candidates’ behaviours in ‘hands on’ trial periods with the ‘real’ computer and domain, or with simulations of the computer and domain. An indirect test would examine the knowledge and skills (i.e., the structures) of candidates, and might be in the form of written examinations. A training programme would develop the knowledge and skills of a candidate as necessary for enabling their behaviours to conform with a specification for the user. Such programmes might take the form of either classroom tuition or ‘hands on’ learning. A manual or on-line help facility would augment the knowledge possessed by a human, enabling their behaviours to conform with a specification for the user. Finally, the design of the computer itself, through the interactions of its behaviours with the user, would enable the implementation of a specification for the user.

To conclude, discussion of the status of the conception for HF must be briefly extended. The contemporary HF discipline was characterised as a craft discipline. Although it may alternatively be claimed as an applied science discipline, such claims must still admit the predominantly craft nature of systems development practices (Long and Dowell, 1989). No instantiations of the HF engineering discipline implied in this paper are visible, and examples of supposed engineering practices may be readily associated with craft or applied science disciplines. There are those, however, who would claim the craft nature of the HF discipline to be dictated by the nature of the problem it addresses. They may maintain that the indeterminism and complexity of the problem of designing human systems (the softness of the problem) precludes the application of formal and prescriptive knowledge. This claim was rejected in Part I on the grounds that it mistakes the current absence of formal discipline knowledge for an essential reflection of the softness of its general design problem. The claim fails to appreciate that this absence may rather be symptomatic of the early stage of the discipline’s development. The alternative position taken by this paper is that the softness of the problem needs to be independently established. The general design problem of HF is, to some extent, hard – human behaviour is clearly deterministic to some useful degree – and certainly sufficiently deterministic for the design of certain interactive worksystems. It may accordingly be presumed that HF engineering principles can be formulated to support product quality within a systems development ethos of ‘design for performance’.

The extent to which HF engineering principles might be realiseable in practice remains to be seen. It is not supposed that the development of effective systems will never require craft skills in some form, and engineering principles are not seen to be incompatible with craft knowledge, particularly with respect to their instantiation (Long and Dowell, 1989). At a minimum, engineering principles might be expected to augment the craft knowledge of HF professionals. Yet the great potential of HF engineering principles for the effectiveness of the discipline demands serious consideration. However, their development would only be by intention, and would be certain to demand a significant research effort. This paper is intended to contribute towards establishing the conception required for the formulation of HF engineering principles.

References Ashby W. Ross, (1956), An Introduction to Cybernetics. London: Methuen.

Bornat R. and Thimbleby H., (1989), The Life and Times of ded, Text Display Editor. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Card, S. K., Moran, T., and Newell, A., (1983), The Psychology of Human Computer Interaction, New Jersey: Lawrence Erlbaum Associates.

Carey, T., (1989), Position Paper: The Basic HCI Course For Software Engineers. SIGCHI Bulletin, Vol. 20, no. 3.

Carroll J.M., and Campbell R. L., (1986), Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, Vol. 2, pp. 227-249.

Checkland P., (1981), Systems Thinking, Systems Practice. Chichester: John Wiley and Sons.

Cooley M.J.E., (1980), Architect or Bee? The Human/Technology Relationship. Slough: Langley Technical Services.

Didner R.S., (1988), A Value Added Approach to Systems Design. Human Factors Society Bulletin, May 1988.

Dowell J. and Long J.B., (1988a), Human-Computer Interaction Engineering. In N. Heaton and M. Sinclair (ed.s), Designing End-User Interfaces. A State of the Art Report. 15:8. Oxford: Pergamon Infotech.

Dowell, J., and Long, J. B., 1988b, A Framework for the Specification of Collaborative Research in Human Computer Interaction, in UK IT 88 Conference Publication 1988, pub. IEE and BCS.

Gibson J.J., (1977), The Theory of Affordances. In R.E. Shaw and J. Bransford (ed.s), Perceiving, Acting and Knowing. New Jersey: Erlbaum.

Gries D., (1981), The Science of Programming, New York: Springer Verlag.

Hubka V., Andreason M.M. and Eder W.E., (1988), Practical Studies in Systematic Design, London: Butterworths.

Long J.B., Hammond N., Barnard P. and Morton J., (1983), Introducing the Interactive Computer at Work: the Users’ Views. Behaviour and Information Technology, 2, pp. 39-106.

Long, J., (1987), Cognitive Ergonomics and Human Computer Interaction. In P. Warr (ed.), Psychology at Work. England: Penguin.

Long J.B., (1989), Cognitive Ergonomics and Human Computer Interaction: an Introduction. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Long J.B. and Dowell J., (1989), Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering. In Sutcliffe A. and Macaulay L., Proceedings of the Fifth Conference of the BCS HCI SG. Cambridge: Cambridge University Press.

Marr D., (1982), Vision. New York: W.H. Freeman and Co.

Morgan D.G., Shorter D.N. and Tainsh M., (1988), Systems Engineering. Improved Design and Construction of Complex IT systems. Available from IED, Kingsgate House, 66-74 Victoria Street, London, SW1.

Norman D.A. and Draper S.W. (eds), (1986), User Centred System Design. Hillsdale, New Jersey: Lawrence Erlbaum.

Pirsig R., 1974, Zen and the Art of Motorcycle Maintenance. London: Bodley Head.

Rouse W. B., (1980), Systems Engineering Models of Human Machine Interaction. New York: Elsevier North Holland.

Shneiderman B. (1980): Software Psychology: Human Factors in Computer and Information Systems. Cambridge, Mass.: Winthrop.

Thimbleby H., (1984), Generative User Engineering Principles for User Interface Design. In B. Shackel (ed.), Proceedings of the First IFIP conference on Human-Computer Interaction. Human-Computer Interaction – INTERACT’84. Amsterdam: Elsevier Science. Vol.2, pp. 102-107.

van Gigch J.P. and Pipino L.L., (1986), In Search of a Paradigm for the Discipline of Information Systems, Future Computing Systems, 1 (1), pp. 71-89.

Walsh P., Lim K.Y., Long J.B., and Carver M.K., (1988), Integrating Human Factors with System Development. In: N. Heaton and M. Sinclair (eds): Designing End-User Interfaces. Oxford: Pergamon Infotech.

Wilden A., 1980, System and Structure; Second Edition. London: Tavistock Publications.

This paper has greatly benefited from discussion with others and from their criticisms. We would like to thank our colleagues at the Ergonomics Unit, University College London and in particular, Andy Whitefield, Andrew Life and Martin Colbert. We would also like to thank the editors of the special issue for their support and two anonymous referees for their helpful comments. Any remaining infelicities – of specification and implementation – are our own.


4.4 Dowell and Long (1989) – HCI Engineering Practice – Short Version

Ergonomics Unit, University College London, 

26, Bedford Way, London. WC1H 0AP. 

Abstract  ……..a conception of the general design problem addressed by Human Factors. The problem is expressed informally as: ‘to design human interactions with computers for effective working’.

In P. Barber and J. Laws (ed.s) Special Issue on Cognitive Ergonomics, Ergonomics, 1989, vol. 32, no. 11, pp. 1513-1535.

Part I. Requirement for Human Factors as an Engineering Discipline of Human-Computer Interaction

1.1 Introduction;

1.2 Characterization of the human factors discipline;

1.3 State of the human factors art;

1.4 Human factors engineering;

1.5 The requirement for an engineering conception of human factors.

 

1.1 Introduction

 

…….. Assessment of contemporary HF (Section 1.3.) concludes that its practices are predominantly those of a craft. Shortcomings of those practices are exposed which indict the absence of support from appropriate formal discipline knowledge.

 

 

1.2. Characterisation of the Human Factors Discipline

HF seeks to support systems development through the systematic and reasoned design of human-computer interactions. As an endeavour, however, HF is still in its infancy, seeking to establish its identity and its proper contribution to systems development. For example, there is little consensus on how the role of HF in systems development is, or should be, configured……..

Most definitions of disciplines assume three primary characteristics: a general problem; practices, providing solutions to that problem; and knowledge, supporting those practices………

 

…….. Thus, HCI is a discipline addressing a general design problem expressed informally as: ‘to design human-computer interactions for effective working’. ……..

The general design problem of HCI can be decomposed into two general design problems, each having a particular scope. Whilst subsumed within the general design problem of HCI, these two general design problems are expressed informally as: ‘to design human interactions with computers for effective working’; and ‘to design computer interactions with humans for effective working’.

 

The practices of HF and SE are the activities providing solutions to their respective general design problems and are supported by their respective discipline knowledge. Figure 1 shows schematically this characterisation ……..

1.3. State of the Human Factors Art

It would be difficult to reject the claim that the contemporary HF discipline has the character of a craft (at times even of a technocratic art). Its practices can justifiably be described as a highly refined form of design by ‘trial and error’ (Long and Dowell, 1989). Characteristic of a craft, the execution and success of its practices in systems development depends principally on the expertise, guided intuition and accumulated experience which the practitioner brings to bear on the design problem.


 

Current HF practices exhibit four primary deficiencies which prompt the need to identify alternative forms for HF. First, HF practices are in general poorly integrated into systems development practices, nullifying the influence they might otherwise exert. Developers make implicit and explicit decisions with implications for user-interactions throughout the development process, typically without involving HF specialists. At an early stage of design, HF may offer only advice – advice which may all too easily be ignored and so not implemented. Its main contribution to the development of user-interactive systems is the evaluations it provides. Yet these are too often relegated to the closing stages of development programmes, where they can only suggest minor enhancements to completed designs because of the prohibitive costs of even modest re-implementations (Walsh et al., 1988).

Second, HF practices have a suspect efficacy. Their contribution to improving product quality in any instance remains highly variable. Because there is no guarantee that experience of one development programme is appropriate or complete in its recruitment to another, re-application of that experience cannot be assured of repeated success (Long and Dowell, 1989).

Third, HF practices are inefficient. Each development of a system requires the solving of new problems by implementation then testing. There is no formal structure within which experience accumulated in the successful development of previous systems can be recruited to support solutions to the new problems, except through the memory and intuitions of the designer. These may not be shared by others, except indirectly (for example, through the formulation of heuristics), and so experience may be lost and may have to be re-acquired (Long and Dowell, 1989).

Footnote: The guidance may be direct – by the designer’s familiarity with psychological theory and practice, or may be indirect by means of guidelines derived from psychological findings. In both cases, the guidance can offer only advice which must be implemented then tested to assess its effectiveness. Since the general scientific problem is the explanation and prediction of phenomena, and not the design of artifacts, the guidance cannot be directly embodied in design specifications which offer a guarantee with respect to the effectiveness of the implemented design. It is not being claimed here that the application of psychology directly or indirectly cannot contribute to better practice or to better designs, only that a practice supported in such a manner remains a craft, because its practice is by implementation then test, that is, by trial and error (see also Long and Dowell, 1989).

Fourth, there are insufficient signs of systematic and intentional progress which will alleviate the three deficiencies of HF practices cited above. The lack of progress is particularly noticeable when HF is compared with the similarly nascent discipline of SE (Gries, 1981; Morgan, Shorter and Tainsh, 1988).

These four deficiencies are endemic to the craft nature of contemporary HF practice. They indict the tacit HF discipline knowledge consisting of accumulated experience embodied in procedures, even where that experience has been influenced by guidance offered by the science of psychology (see earlier footnote). Because the knowledge is tacit (i.e., implicit or informal), it cannot be operationalised, and hence the role of HF in systems development cannot be planned as would be necessary for the proper integration of the knowledge. Without being operationalised, its knowledge cannot be tested, and so the efficacy of the practices it supports cannot be guaranteed. Without being tested, its knowledge cannot be generalised for new applications and so the practices it can support will be inefficient. Without being operationalised, testable, and general, the knowledge cannot be developed in any structured way as required for supporting the systematic and intentional progress of the HF discipline.

It would be incorrect to assume the current absence of formality of HF knowledge to be a necessary response to the indeterminism of human behaviour. Both tacit discipline knowledge and ‘trial and error’ practices may simply be symptomatic of the early stage of development of the discipline (see footnote 1). The extent to which human behaviour is deterministic for the purposes of designing interactive computer-based systems needs to be independently established. ……..

 

1.4. Human Factors Engineering Principles

 

A discipline’s practices construct solutions to its general design problem. Consideration of disciplines indicates much variation in their use of specification as a practice in constructing solutions. This variation, however, appears not to be dependent on variations in the hardness of the general design problems. Rather, disciplines appear to differ in the completeness with which they specify solutions to their respective general design problems before implementation occurs. At one extreme, some disciplines specify solutions completely before implementation: their practices may be described as ‘specify then implement’ (an example might be Electrical Engineering). At the other extreme, disciplines appear not to specify their solutions at all before implementing them: their practices may be described as ‘implement and test’ (an example might be Graphic Design). Other disciplines, such as SE, appear characteristically to specify solutions partially before implementing them: their practices may be described as ‘specify and implement’. ‘Specify then implement’, therefore, and ‘implement and test’, would appear to represent the extremes of a dimension by which disciplines may be distinguished by their practices. It is a dimension of the completeness with which they specify design solutions.

Footnote 1: Such was the history of many disciplines: the origin of modern day Production Engineering, for example, was a nineteenth century set of craft practices and tacit knowledge.

Taken together, the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines such as Electrical Engineering and Graphic Design. The space is shown in Figure 2, including for illustrative purposes, the speculative location of SE.

Two conclusions are prompted by Figure 2. First, a general relation may be apparent between the hardness of a general design problem and the realiseable completeness with which its solutions might be specified. In particular, a boundary condition is likely to be present beyond which more complete solutions could not be specified for a problem of given hardness. The shaded area of Figure 2 is intended to indicate this condition, termed the ‘Boundary of Determinism’ – because it derives from the determinism of the phenomena implicated in the general design problem. It suggests that whilst very soft problems may only be solved by ‘implement and test’ practices, hard problems may be solved by ‘specify then implement’ practices.

Second, it is concluded from Figure 2 that the actual completeness with which solutions to a general design problem are specified, and the realiseable completeness, might be at variance. Accordingly, there may be different possible forms of the same discipline – each form addressing the same problem but with characteristically different practices. With reference to HF then, the contemporary discipline, a craft, will characteristically solve the HF general design problem mainly by ‘implementation and testing’. If solutions are specified at all, they will be incomplete before being implemented. Yet depending on the hardness of the HF general design problem, the realiseable completeness of specified solutions may be greater and a future form of the discipline, with practices more characteristically those of ‘specify then implement’, may be possible. For illustrative purposes, those different forms of the HF discipline are located speculatively in the figure.

Whilst the realiseable completeness with which a discipline may specify design solutions is governed by the hardness of the general design problem, the actual completeness with which it does so is governed by the formality of the knowledge it possesses. Consideration of the traditional engineering disciplines supports this assertion. Their modern-day practices are characteristically those of ‘specify then implement’, yet historically, their antecedents were ‘specify and implement’ practices, and earlier still – ‘implement and test’ practices. For example, the early steam engine preceded formal knowledge of thermodynamics and was constructed by ‘implementation and testing’. Yet designs of thermodynamic machines are now relatively completely specified before being implemented, a practice supported by formal knowledge. Such progress then, has been marked by the increasing formality of knowledge. It is also in spite of the increasing complexity of new technology – an increase which might only have served to make the general design problem more soft, and the boundary of determinism more constraining. The dimension of the formality of a discipline’s knowledge – ranging from experience to principles, is shown in Figure 2 and completes the classification space for design disciplines.

It should be clear from Figure 2 that there exists no pre-ordained relationship between the formality of a discipline’s knowledge and the hardness of its general design problem. In particular, the practices of a (craft) discipline supported by experience – that is, by informal knowledge – may address a hard problem. But also, within the boundary of determinism, that discipline could acquire formal knowledge to support specification as a design practice.

 

Generally, the established engineering disciplines possess formal knowledge: a corpus of operationalised, tested, and generalised principles. Those principles are prescriptive, enabling the complete specification of design solutions before those designs are implemented (see Dowell and Long, 1988b). This theme of prescription in design is central to the thesis offered here.

Engineering principles can be substantive or methodological (see Checkland, 1981; Pirsig, 1974). Methodological Principles prescribe the methods for solving a general design problem optimally. For example, methodological principles might prescribe the representations of designs specified at a general level of description and procedures for systematically decomposing those representations until complete specification is possible at a level of description of immediate design implementation (Hubka, Andreason and Eder, 1988). Methodological principles would assure each lower level of specification as being a complete representation of an immediately higher level. ……..

 

1.5. The Requirement for an Engineering Conception for Human Factors

The contemporary HF discipline does not possess either methodological or substantive engineering principles. The heuristics it possesses are either ‘rules of thumb’ derived from experience or guidelines derived from psychological theories and findings. Neither guidelines nor rules of thumb offer assurance of their efficacy in any given instance, and particularly with regard to the effectiveness of a design. The methods and models of HF (as opposed to methodological and substantive principles) are similarly without such an assurance. Clearly, any evolution of HF as an engineering discipline in the manner proposed here has yet to begin. There is an immediate need then, for a view of how it might begin, and how formulation of engineering principles might be precipitated………

Part II. Conception for an Engineering Discipline of Human Factors 

2.1 Conception of the human factors general design problem;

……..

……..The general design problem of HF then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (Pd)………

2.2 Conception of work and user; 2.2.1 Objects and their attributes; 2.2.2 Attributes and levels of complexity; 2.2.3 Relations between attributes; 2.2.4 Attribute states and affordance; 2.2.5 Organisations, domains (of application) and the requirement for attribute state changes; 2.2.6 Goals; 2.2.7 Quality; 2.2.8 Work and the user;

2.3 Conception of the interactive worksystem and the user; 2.3.1 Interactive worksystems; 2.3.2 The user as a system of mental and physical human behaviours; 2.3.3 Human-computer interaction; 2.3.4 On-line and off-line behaviours; 2.3.5 Human structures and the user; 2.3.6 Resource costs and the user;

2.4 Conception of performance of the interactive worksystem and the user;

2.5 Conclusions and the prospect for Human Factors engineering principles

The potential for HF to become an engineering discipline, and so better to respond to the problem of interactive systems design, was examined in Part I. The possibility of realising this potential through HF engineering principles was suggested – principles which might prescriptively support HF design expressed as ‘specify then implement’. It was concluded that a pre-requisite to the development of HF engineering principles is a conception of the general design problem of HF, which was informally expressed as: ‘to design human interactions with computers for effective working’.
Part II proposes a conception for HF. It attempts to establish the set of related concepts which can express the general design problem of HF more formally. Such concepts would be those embodied in HF engineering principles. As indicated in Section 1.1, the conception for HF is supported by a conception for an engineering discipline of HCI earlier proposed by Dowell and Long (1988a). Space precludes re-iteration of the conception for HCI here, other than as required for the derivation of the conception for HF. Part II first asserts a more formal expression of the HF general design problem which an engineering discipline would address. Part II then continues by elaborating and illustrating the concepts and their relations embodied in that expression.
2.1. Conception of the Human Factors General Design Problem.
The conception for the (super-ordinate) engineering discipline of HCI asserts a fundamental distinction between behavioural systems which perform work, and a world in which work originates, is performed and has its consequences. Specifically conceptualised are interactive worksystems consisting of human and computer behaviours together performing work. It is work evidenced in a world of physical and informational objects disclosed as domains of application. The distinction between worksystems and domains of application is represented schematically in Figure 3.

Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.

The concern of an engineering HCI discipline would be the design of interactive worksystems for performance. More precisely, its concern would be the design of behaviours constituting a worksystem {S} whose actual performance (Pa) conformed with some desired performance (Pd). And to design {S} would require the design of human behaviours {U} interacting with computer behaviours {C}. Hence, conception of the general design problem of an engineering discipline of HCI is expressed as:

Specify then implement {U} and {C}, such that {U} interacting with {C} = {S}, where Pa conforms with Pd, and Pd = fn. {Qd, Kd}.

Qd expresses the desired quality of the products of work within the given domain of application; Kd expresses the acceptable (i.e., desired) costs incurred by the worksystem, i.e., by both human and computer.

The problem, when expressed as one of ‘specify then implement’ designs of interactive worksystems, is equivalent to the general design problems characteristic of other engineering disciplines (see Section 1.4).

The interactive worksystem can be distinguished as two separate, but interacting, sub-systems, that is, a system of human behaviours interacting with a system of computer behaviours. The human behaviours may be treated as a behavioural system in their own right, but one interacting with the system of computer behaviours to perform work. It follows that the general design problem of HCI may be decomposed with regard to its scope (with respect to the human and computer behavioural sub-systems), giving two related problems. Decomposition with regard to the human behaviours gives the general design problem of the HF discipline as: Specify then implement {U}, such that {U} interacting with {C} = {S}, where Pa conforms with Pd.

The general design problem of HF then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (Pd).
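The formal expression above can be caricatured as a short sketch. This is a speculative illustration, not part of the paper's formalism: the field names (quality, human_costs, computer_costs) and the numeric values are assumed solely for the example, and conformance is read here as "quality no worse and costs no greater than desired".

```python
# Speculative sketch of Pd = fn.{Qd, Kd}: desired performance combines
# desired quality and acceptable (desired) resource costs, and a design
# succeeds when actual performance Pa conforms with Pd.
from dataclasses import dataclass

@dataclass(frozen=True)
class Performance:
    quality: float         # Q: variance of the actual transform from the product goal (0 = none)
    human_costs: float     # K: resource costs incurred by the user
    computer_costs: float  # K: resource costs incurred by the computer

def conforms(actual: Performance, desired: Performance) -> bool:
    """Pa conforms with Pd when quality is no worse and no cost is exceeded."""
    return (actual.quality <= desired.quality
            and actual.human_costs <= desired.human_costs
            and actual.computer_costs <= desired.computer_costs)

desired = Performance(quality=0.1, human_costs=5.0, computer_costs=3.0)
actual = Performance(quality=0.05, human_costs=4.0, computer_costs=3.0)
print(conforms(actual, desired))  # True: the worksystem meets desired performance
```

The sketch makes the decomposition visible: {U} and {C} each contribute a cost term, while quality is a property of the worksystem's work as a whole.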

The following sections elaborate the conceptualisation of human behaviours (the user, or users) with regard to the work they perform, the interactive worksystem in which they are constituted, and performance.

 

2.2. Conception of Work and the User

The conception for HF identifies a world in which work originates, is performed and has its consequences. This section presents the concepts by which work and its relations with the user are expressed.

2.2.1 Objects and their attributes

Work occurs in a world consisting of objects and arises in the intersection of organisations and (computer) technology. Objects may be both abstract as well as physical, and are characterised by their attributes. Abstract attributes of objects are attributes of information and knowledge. Physical attributes are attributes of energy and matter. Letters (i.e., correspondence) are objects; their abstract attributes support the communication of messages etc; their physical attributes support the visual/verbal representation of information via language.

2.2.2 Attributes and levels of complexity

The different attributes of an object may emerge at different levels within a hierarchy of levels of complexity (see Checkland, 1981). For example, characters and their configuration on a page are physical attributes of the object ‘a letter’ which emerge at one level of complexity; the message of the letter is an abstract attribute which emerges at a higher level of complexity.

Objects are described at different levels of description commensurate with their levels of complexity. However, at a high level of description, separate objects may no longer be differentiated. For example, the object ‘income tax return’ and the object ‘personal letter’ are both ‘correspondence’ objects at a higher level of description. Lower levels of description distinguish their respective attributes of content, intended correspondent etc. In this way, attributes of an object described at one level of description completely re-represent those described at a lower level.
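The paper's own example of levels of description can be rendered as a small sketch. It is speculative and illustrative only: the two-level scheme and the attribute values are assumptions introduced here, not the paper's notation.

```python
# Speculative sketch: objects described at levels commensurate with their
# levels of complexity. At a higher level of description, separate objects
# are no longer differentiated (both are 'correspondence' objects), while
# lower levels distinguish their respective attributes.
descriptions = {
    "income tax return": {"higher": "correspondence",
                          "lower": {"content": "tax declaration",
                                    "correspondent": "tax office"}},
    "personal letter":   {"higher": "correspondence",
                          "lower": {"content": "personal news",
                                    "correspondent": "acquaintance"}},
}

def describe(obj, level):
    """Return the object's description at the given level of description."""
    return descriptions[obj][level]

# At the higher level the two objects are indistinguishable...
print(describe("income tax return", "higher") == describe("personal letter", "higher"))
# ...while lower levels distinguish content, intended correspondent, etc.
print(describe("income tax return", "lower") == describe("personal letter", "lower"))
```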

2.2.3 Relations between attributes

Attributes of objects are related, and in two ways. First, attributes at different levels of complexity are related. As indicated earlier, those at one level are completely subsumed in those at a higher level. In particular, abstract attributes will occur at higher levels of complexity than physical attributes and will subsume those lower level physical attributes. For example, the abstract attributes of an object ‘message’ concerning the representation of its content by language subsume the lower level physical attributes, such as the font of the characters expressing the language. As an alternative example, an industrial process, such as a steel rolling process in a foundry, is an object whose abstract attributes will include the process’s efficiency. Efficiency subsumes physical attributes of the process – its power consumption, rate of output, dimensions of the output (the rolled steel), etc. – emerging at a lower level of complexity.

Second, attributes of objects are related within levels of complexity. There is a dependency between the attributes of an object emerging within the same level of complexity. For example, the attributes of the industrial process of power consumption and rate of output emerge at the same level and are inter-dependent.

2.2.4 Attribute states and affordance

At any point or event in the history of an object, each of its attributes is conceptualised as having a state. Further, those states may change. For example, the content and characters (attributes) of a letter (object) may change state: the content with respect to meaning and grammar etc; its characters with respect to size and font etc. Objects exhibit an affordance for transformation, engendered by their attributes’ potential for state change (see Gibson, 1977). Affordance is generally pluralistic in the sense that there may be many, or even infinite, transformations of objects, according to the potential changes of state of their attributes.

Attributes’ relations are such that state changes of one attribute may also manifest state changes in related attributes, whether within the same level of complexity, or across different levels of complexity. For example, changing the rate of output of an industrial process (lower level attribute) will change both its power consumption (same level attribute) and its efficiency (higher level attribute).
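The propagation of state changes through related attributes can be sketched using the paper's steel rolling example. The linear dependency between rate of output and power consumption is an assumption made purely for illustration; the paper asserts only that the attributes are inter-dependent.

```python
# Speculative sketch: a state change of one attribute (rate of output,
# lower level) manifests state changes in a related same-level attribute
# (power consumption) and in a higher-level attribute (efficiency) which
# subsumes them both. The dependency functions are assumed, not given.
def set_rate_of_output(state, rate):
    """Propagate a state change of one attribute to its related attributes."""
    state = dict(state)
    state["rate_of_output"] = rate
    # same-level dependency (assumed linear for illustration)
    state["power_consumption"] = 10.0 + 2.0 * rate
    # higher-level attribute subsuming the lower-level physical attributes
    state["efficiency"] = rate / state["power_consumption"]
    return state

process = set_rate_of_output({}, 10.0)
process = set_rate_of_output(process, 20.0)
print(process["power_consumption"], round(process["efficiency"], 2))  # 50.0 0.4
```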

2.2.5 Organisations, domains (of application), and the requirement for attribute state changes

A domain of application may be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’). The object ‘book’ may be associated with the domain of typesetting (state changes of its layout attributes) and with the domain of authorship (state changes of its textual content). In principle, a domain may have any level of generality, for example, the writing of letters and the writing of a particular sort of letter.

Organisations are conceptualised as having domains as their operational province and of requiring the realisation of the affordance of objects. It is a requirement satisfied through work. Work is evidenced in the state changes of attributes by which an object is intentionally transformed: it produces transforms, that is, objects whose attributes have an intended state. For example, ‘completing a tax return’ and ‘writing to an acquaintance’, each have a ‘letter’ as their transform, where those letters are objects whose attributes (their content, format and status, for example) have an intended state. Further editing of those letters would produce additional state changes, and therein, new transforms.

2.2.6 Goals

Organisations express their requirement for the transformation of objects through specifying goals. A product goal specifies a required transform – a required realisation of the affordance of an object. In expressing the required transformation of an object, a product goal will generally suppose necessary state changes of many attributes. The requirement of each attribute state change can be expressed as a task goal, deriving from the product goal. So for example, the product goal demanding transformation of a letter making its message more courteous, would be expressed by task goals possibly requiring state changes of semantic attributes of the propositional structure of the text, and of syntactic attributes of the grammatical structure. Hence, a product goal can be re-expressed as a task goal structure, a hierarchical structure expressing the relations between task goals, for example, their sequences.

In the case of the computer-controlled steel rolling process, the process is an object whose transformation is required by a foundry organisation and expressed by a product goal. For example, the product goal may specify the elimination of deviations of the process from a desired efficiency. As indicated earlier, efficiency will at least subsume the process’s attributes of power consumption, rate of output, dimensions of the output (the rolled steel), etc. As also indicated earlier, those attributes will be inter-dependent such that state changes of one will produce state changes in the others – for example, changes in rate of output will also change the power consumption and the efficiency of the process. In this way, the product goal (of correcting deviations from the desired efficiency) supposes the related task goals (of setting power consumption, rate of output, dimensions of the output etc). Hence, the product goal can be expressed as a task goal structure and task goals within it will be assigned to the operator monitoring the process.
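The re-expression of a product goal as a task goal structure can be sketched for the steel rolling example. This is a speculative illustration, not the paper's notation: the goal names and the flat (one-level) structure are assumptions for the example, whereas in general a task goal structure is hierarchical and expresses relations such as sequences.

```python
# Speculative sketch: a product goal re-expressed as a structure of related
# task goals, each demanding an attribute state change (names illustrative).
product_goal = {
    "goal": "eliminate deviation from desired efficiency",
    "task_goals": [
        {"goal": "set power consumption"},
        {"goal": "set rate of output"},
        {"goal": "set dimensions of output"},
    ],
}

def leaf_task_goals(node):
    """Flatten the task goal structure into the task goals it relates."""
    children = node.get("task_goals", [])
    if not children:
        return [node["goal"]]
    leaves = []
    for child in children:
        leaves.extend(leaf_task_goals(child))
    return leaves

print(leaf_task_goals(product_goal))
# ['set power consumption', 'set rate of output', 'set dimensions of output']
```

Task goals within such a structure could then be assigned to the operator monitoring the process, as the text describes.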

2.2.7 Quality

The transformation of an object demanded by a product goal will generally be of a multiplicity of attribute state changes – both within and across levels of complexity. Consequently, there may be alternative transforms which would satisfy a product goal – letters with different styles, for example – where those different transforms exhibit differing compromises between attribute state changes of the object. By the same measure, there may also be transforms which will be at variance with the product goal. The concept of quality (Q) describes the variance of an actual transform with that specified by a product goal. It enables all possible outcomes of work to be equated and evaluated.
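Quality (Q), as the variance of an actual transform with the transform specified by a product goal, can be given a minimal operationalisation. The metric used here (a count of attribute states that differ) and the attribute names are assumptions for illustration; the paper does not commit to any particular measure.

```python
# Speculative sketch: quality (Q) as the variance of an actual transform
# from that specified by a product goal, here simply the number of
# attribute states at variance (0 = exact satisfaction of the goal).
def quality(actual, specified):
    """Count the attribute states of the actual transform that differ from the goal."""
    return sum(1 for attr in specified if actual.get(attr) != specified[attr])

specified = {"content": "final", "format": "house style", "status": "signed"}
actual = {"content": "final", "format": "draft layout", "status": "signed"}
print(quality(actual, specified))  # 1: one attribute state at variance with the goal
```

A measure of this kind is what lets all possible outcomes of work, including transforms at variance with the product goal, be equated and evaluated on one scale.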

2.2.8 Work and the user

  Conception of the domain then, is of objects, characterised by their attributes, and exhibiting an affordance arising from the potential changes of state of those attributes. By specifying product goals, organisations express their requirement for transforms – objects with specific attribute states. Transforms are produced through work, which occurs only in the conjunction of objects affording transformation and systems capable of producing a transformation.

From product goals derives a structure of related task goals which can be assigned either to the human or to the computer (or both) within an associated worksystem. The task goals assigned to the human are those which motivate the human’s behaviours. The actual state changes (and therein transforms) which those behaviours produce may or may not be those specified by task and product goals, a difference expressed by the concept of quality.

Taken together, the concepts presented in this section support the HF conception’s expression of work as relating to the user. The following section presents the concepts expressing the interactive worksystem as relating to the user.

 

2.3. Conception of the Interactive Worksystem and the User.

The conception for HF identifies interactive worksystems consisting of human and computer behaviours together performing work. This section presents the concepts by which interactive worksystems and the user are expressed.

2.3.1 Interactive worksystems

Humans are able to conceptualise goals and their corresponding behaviours are said to be intentional (or purposeful). Computers, and machines more generally, are designed to achieve goals, and their corresponding behaviours are said to be intended (or purposive). An interactive worksystem (‘worksystem’) is a behavioural system distinguished by a boundary enclosing all human and computer behaviours whose purpose is to achieve a common goal. For example, the behaviours of a secretary and wordprocessor whose purpose is to produce letters constitute a worksystem. Critically, it is only by identifying that common goal that the boundary of the worksystem can be established: entities, and humans especially, may exhibit a range of contiguous behaviours, and only by specifying the goals of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified.

Worksystems transform objects by producing state changes in the abstract and physical attributes of those objects (see Section 2.2). The secretary and wordprocessor may transform the object ‘correspondence’ by changing both the attributes of its meaning and the attributes of its layout. More generally, a worksystem may transform an object through state changes produced in related attributes. An operator monitoring a computer-controlled industrial process may change the efficiency of the process through changing its rate of output.

The behaviours of the human and computer are conceptualised as behavioural sub-systems of the worksystem – sub-systems which interact. The human behavioural sub-system is here more appropriately termed the user. Behaviour may be loosely understood as ‘what the human does’, in contrast with ‘what is done’ (i.e., attribute state changes in a domain). More precisely, the user is conceptualised as:

a system of distinct and related human behaviours, identifiable as the sequence of states of a person interacting with a computer to perform work, and corresponding with a purposeful (intentional) transformation of objects in a domain (see also Ashby, 1956).

Although possible at many levels, the user must at least be expressed at a level commensurate with the level of description of the transformation of objects in the domain. For example, a secretary interacting with an electronic mailing facility is a user whose behaviours include receiving and replying to messages. An operator interacting with a computer-controlled milling machine is a user whose behaviours include planning the tool path to produce a component of specified geometry and tolerance.

2.3.2 The user as a system of mental and physical behaviours

The behaviours constituting a worksystem are both physical and abstract. Abstract behaviours are generally the acquisition, storage, and transformation of information. They represent and process information at least concerning: domain objects and their attributes, attribute relations and attribute states, and the transformations required by goals. Physical behaviours are related to, and express, abstract behaviours.

Accordingly, the user is conceptualised as a system of both mental (abstract) and overt (physical) behaviours which extend a mutual influence – they are related. In particular, they are related within an assumed hierarchy of behaviour types (and their control) wherein mental behaviours generally determine, and are expressed by, overt behaviours. Mental behaviours may transform (abstract) domain objects represented in cognition, or express through overt behaviour plans for transforming domain objects.

So for example, the operator working in the control room of the foundry has the product goal of maintaining a desired condition of the computer-controlled steel rolling process. The operator attends to the computer (whose behaviours include the transmission of information about the process). Hence, the operator acquires a representation of the current condition of the process by collating the information displayed by the computer and assessing it by comparison with the condition specified by the product goal. The operator’s acquisition, collation and assessment are each distinct mental behaviours, conceptualised as representing and processing information. The operator reasons about the attribute state changes necessary to eliminate any discrepancy between current and desired conditions of the process, that is, the set of related changes which will produce the required transformation of the process. That decision is expressed in the set of instructions issued to the computer through overt behaviour – making keystrokes, for example.

The user is conceptualised as having cognitive, conative and affective aspects. The cognitive aspects of the user are those of their knowing, reasoning and remembering, etc; the conative aspects are those of their acting, trying and persevering, etc; and the affective aspects are those of their being patient, caring, and assured, etc. Both mental and overt human behaviours are conceptualised as having these three aspects.

2.3.3 Human-computer interaction

Although the human and computer behaviours may be treated as separable sub-systems of the worksystem, those sub-systems extend a “mutual influence”, or interaction whose configuration principally determines the worksystem (Ashby, 1956).

Interaction is conceptualised as: the mutual influence of the user (i.e., the human behaviours) and the computer behaviours associated within an interactive worksystem.

Hence, the user {U} and computer behaviours {C} constituting a worksystem {S} were expressed in the general design problem of HF (Section 2.1) as: {U} interacting with {C} = {S}.

Interaction of the human and computer behaviours is the fundamental determinant of the worksystem, rather than their individual behaviours per se. For example, the behaviours of an operator interact with the behaviours of a computer-controlled milling machine. The operator’s behaviours influence the behaviours of the machine, perhaps in its tool path program; the behaviours of the machine, perhaps the run-out of its tool path, influence the selection behaviours of the operator. The configuration of their interaction – the inspection that the machine allows the operator, the tool path control that the operator allows the machine – determines the worksystem that the operator and machine behaviours constitute in their planning and execution of the machining work.

The assignment of task goals then, to either the human or the computer delimits the user and therein configures the interaction. For example, replacement of a mis-spelled word required in a document is a product goal which can be expressed as a task goal structure of necessary and related attribute state changes. In particular, the text field for the correctly spelled word demands an attribute state change in the text spacing of the document. Specifying that state change may be a task goal assigned to the user, as in interaction with the behaviours of early text editor designs, or it may be a task goal assigned to the computer, as in interaction with the ‘wrap-round’ behaviours of contemporary wordprocessor designs. The assignment of the task goal of specification configures the interaction of the human and computer behaviours in each case; it delimits the user.
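The way that task goal assignment delimits the user can be sketched using the mis-spelled word example. The goal names and the two-agent assignment table are illustrative assumptions, not the paper's notation.

```python
# Speculative sketch: assigning task goals to human or computer configures
# the interaction and delimits the user. The respacing ('wrap-round') task
# goal is assigned to the computer in contemporary wordprocessor designs,
# but was assigned to the user in early text editor designs.
assignment = {
    "replace mis-spelled word": "human",
    "respace text around replacement": "computer",  # 'wrap-round' behaviour
}

def user_task_goals(assignment):
    """The task goals delimiting the user (the human behavioural sub-system)."""
    return [goal for goal, agent in assignment.items() if agent == "human"]

print(user_task_goals(assignment))  # ['replace mis-spelled word']
```

Re-assigning the respacing goal to "human" would yield a different user: the same product goal, but a differently configured interaction.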

2.3.4 On-line and off-line behaviours

The user may include both on-line and off-line human behaviours: on-line behaviours are associated with the computer’s representation of the domain; off-line behaviours are associated with non-computer representations of the domain, or with the domain itself.

As an illustration of the distinction, consider the example of an interactive worksystem consisting of behaviours of a secretary and a wordprocessor and required to produce a paper-based copy of a dictated letter stored on audio tape. The product goal of the worksystem here requires the transformation of the physical representation of the letter from one medium to another, that is, from tape to paper. From the product goal derive the task goals relating to required attribute state changes of the letter. Certain of those task goals will be assigned to the secretary. The secretary’s off-line behaviours include listening to, and assimilating, the dictated letter, so acquiring a representation of the domain directly. By contrast, the secretary’s on-line behaviours include specifying the representation by the computer of the transposed content of the letter in a desired visual/verbal format of stored physical symbols.

On-line and off-line human behaviours are a particular case of the ‘internal’ interactions between a human’s behaviours as, for example, when the secretary’s typing interacts with memorisations of successive segments of the dictated letter.

2.3.5 Human structures and the user

  Conceptualisation of the user as a system of human behaviours needs to be extended to the structures supporting behaviour.

Whereas human behaviours may be loosely understood as ‘what the human does’, the structures supporting them can be understood as ‘how they are able to do what they do’ (see Marr, 1982; Wilden, 1980). There is a one-to-many mapping between a human’s structures and the behaviours they might support: the same structures may support many different behaviours.

In co-extensively enabling behaviours at each level, structures must exist at commensurate levels. The human structural architecture is both physical and mental, providing the capability for a human’s overt and mental behaviours. It provides a representation of domain information as symbols (physical and abstract) and concepts, and the processes available for the transformation of those representations. It provides an abstract structure for expressing information as mental behaviour. It provides a physical structure for expressing information as physical behaviour.

Physical human structure is neural, bio-mechanical and physiological. Mental structure consists of representational schemes and processes. Corresponding with the behaviours it supports and enables, human structure has cognitive, conative and affective aspects. The cognitive aspects of human structures include information and knowledge – that is, symbolic and conceptual representations – of the domain, of the computer and of the person themselves, and include the ability to reason. The conative aspects of human structures motivate the implementation of behaviour and its perseverance in pursuing task goals. The affective aspects of human structures include the personality and temperament which respond to and support behaviour.

To illustrate the conceptualisation of mental structure, consider the example of the structures supporting an operator’s behaviours in the foundry control room. Physical structure supports perceiving the steel rolling process and executing corrective control actions through the computer input devices. Mental structures support the acquisition, memorisation and transformation of information about the steel rolling process. The knowledge which the operator has of the process and of the computer supports the collation and assessment of, and the reasoning about, corrective control actions to be executed.

The limits of human structure determine the limits of the behaviours it might support. Such structural limits include those of: intellectual ability; knowledge of the domain and the computer; memory and attentional capacities; patience; perseverance; dexterity; and visual acuity etc. The structural limits on behaviour may become particularly apparent when one part of the structure (a channel capacity, perhaps) is required to support concurrent behaviours, perhaps simultaneous visual attending and reasoning behaviours. The user then, is ‘resource’ limited by the co-extensive human structure.

The behavioural limits of the human determined by structure are not only difficult to define with any kind of completeness, they will also be variable because that structure can change, and in a number of respects. A person may have self-determined changes in response to the domain – as expressed in learning phenomena, acquiring new knowledge of the domain, of the computer, and indeed of themselves, to better support behaviour. Also, human structure degrades with the expenditure of resources in behaviour, as evidenced in the phenomena of mental and physical fatigue. It may also change in response to motivating or de-motivating influences of the organisation which maintains the worksystem.

It must be emphasised that the structure supporting the user is independent of the structure supporting the computer behaviours. Neither structure can make any incursion into the other, and neither can directly support the behaviours of the other. (Indeed this separability of structures is a pre-condition for expressing the worksystem as two interacting behavioural sub-systems.) Although the structures may change in response to each other, they are not, unlike the behaviours they support, interactive; they are not included within the worksystem. The combination of structures of both human and computer supporting their interacting behaviours is conceptualised as the user interface.

2.3.6 Resource costs of the user

Work performed by interactive worksystems always incurs resource costs. Given the separability of the human and the computer behaviours, certain resource costs are associated directly with the user and distinguished as structural human costs and behavioural human costs.

Structural human costs are the costs of the human structures co-extensive with the user. Such costs are incurred in developing and maintaining human skills and knowledge. More specifically, structural human costs are incurred in training and educating people, so developing in them the structures which will enable their behaviours necessary for effective working. Training and educating may augment or modify existing structures, provide the person with entirely novel structures, or perhaps even reduce existing structures. Structural human costs will be incurred in each case and will frequently be borne by the organisation. An example of structural human costs might be the costs of training a secretary in the particular style of layout required for an organisation’s correspondence with its clients, and in the operation of the computer by which that layout style can be created.

Structural human costs may be differentiated as cognitive, conative and affective structural costs of the user. Cognitive structural costs express the costs of developing the knowledge and reasoning abilities of people and their ability for formulating and expressing novel plans in their overt behaviour – as necessary for effective working. Conative structural costs express the costs of developing the activity, stamina and persistence of people as necessary for effective working. Affective structural costs express the costs of developing in people their patience, care and assurance as necessary for effective working.

Behavioural human costs are the resource costs incurred by the user (i.e., by human behaviours) in recruiting human structures to perform work. They are both physical and mental resource costs. Physical behavioural costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and of attending to a screen display; they may be expressed without differentiation as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be expressed without differentiation as mental workload. Mental behavioural costs are ultimately manifest as physical behavioural costs.

When differentiated, mental and physical behavioural costs are conceptualised as the cognitive, conative and affective behavioural costs of the user. Cognitive behavioural costs relate to both the mental representing and processing of information, and the demands made on the individual’s extant knowledge, as well as the physical expression thereof in the formulation and expression of a novel plan. Conative behavioural costs relate to the repeated mental and physical actions and effort required by the formulation and expression of the novel plan. Affective behavioural costs relate to the emotional aspects of the mental and physical behaviours required in the formulation and expression of the novel plan. Behavioural human costs are evidenced in human fatigue, stress and frustration; they are costs borne directly by the individual.

 

2.4. Conception of Performance of the Interactive Worksystem and the User.

In asserting the general design problem of HF (Section 2.1.), it was reasoned that:

“Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs incurred by it. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.”

This statement followed from the distinction between interactive worksystems performing work, and the work they perform. Subsequent elaboration upon this distinction enables reconsideration of the concept of performance, and examination of its central importance within the conception for HF.

Because the factors which constitute this engineering concept of performance (i.e., the quality and costs of work) are determined by behaviour, a concordance is assumed between the behaviours of worksystems and their performance: behaviour determines performance (see Ashby, 1956; Rouse, 1980). The quality of work performed by interactive worksystems is conceptualised as the actual transformation of objects with regard to their transformation demanded by product goals. The costs of work are conceptualised as the resource costs incurred by the worksystem, and are separately attributed to the human and computer. Specifically, the resource costs incurred by the human are differentiated as: structural human costs – the costs of establishing and maintaining the structure supporting behaviour; and behavioural human costs – the costs of the behaviour recruiting structure to its own support. Structural and behavioural human costs were further differentiated as cognitive, conative and affective costs.

A desired performance of an interactive worksystem may be conceptualised. Such a desired performance might either be absolute, or relative as in a comparative performance to be matched or improved upon. Accordingly, criteria expressing desired performance, may either specify categorical gross resource costs and quality, or they may specify critical instances of those factors to be matched or improved upon.

Discriminating the user’s performance within the performance of the interactive worksystem would require the separate assimilation of human resource costs and the achievement of the desired attribute state changes demanded by the user’s assigned task goals. Further assertions concerning the user arise from the conceptualisation of worksystem performance. First, the conception of performance is able to distinguish the quality of transforms from the effectiveness of the worksystems which produce them. This distinction is essential, as two worksystems might be capable of producing the same transform, yet if one were to incur a greater resource cost than the other, its effectiveness would be the lesser of the two.

Second, given the concordance of behaviour with performance, optimal human (and equally, computer) behaviours may be conceived as those which incur a minimum of resource costs in producing a given transform. Optimal human behaviour would minimise the resource costs incurred in producing a transform of given quality (Q). However, that optimality may only be categorically determined with regard to worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it (Pd). To be more specific, it is not sufficient for human behaviours simply to be error-free. Although the elimination of errorful human behaviours may contribute to the best performance possible of a given worksystem, that performance may still be less than desired performance. Conversely, although human behaviours may be errorful, a worksystem may still support a desired performance.

Third, the common measures of human ‘performance’ – errors and time – are related in this conceptualisation of performance. Errors are behaviours which increase the resource costs incurred in producing a given transform, or which reduce the quality of the transform, or both. The duration of human behaviours may (very generally) be associated with increases in behavioural user costs.

Fourth, structural and behavioural human costs may be traded-off in performance. More sophisticated human structures supporting the user, that is, the knowledge and skills of experienced and trained people, will incur high (structural) costs to develop, but enable more efficient behaviours – and therein, reduced behavioural costs.

Fifth, resource costs incurred by the human and the computer may be traded-off in performance. A user can sustain a level of performance of the worksystem by optimising behaviours to compensate for the poor behaviours of the computer (and vice versa), i.e., behavioural costs of the user and computer are traded-off. This is of particular concern for HF as the ability of humans to adapt their behaviours to compensate for poor computer-based systems often obscures the low effectiveness of worksystems.
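The relations among quality, resource costs and effectiveness set out above can be illustrated by a small sketch. It is our illustration only, with invented figures and function names; the paper itself offers no such formula:

```python
# Illustrative sketch (not from the paper): performance conceived as the
# quality of the transform achieved against the resource costs incurred
# by the worksystem, separately attributed to human and computer.

def effectiveness(quality, human_costs, computer_costs):
    """Toy index: quality achieved per unit of total resource cost."""
    return quality / (human_costs + computer_costs)

# Two worksystems producing the same transform (same quality)...
ws_a = effectiveness(quality=0.9, human_costs=3.0, computer_costs=2.0)
ws_b = effectiveness(quality=0.9, human_costs=6.0, computer_costs=2.0)

# ...the one incurring the greater resource costs is the less effective.
assert ws_a > ws_b

# Trading off human against computer costs: total cost is unchanged, so
# worksystem effectiveness is sustained while the human's share falls.
ws_c = effectiveness(quality=0.9, human_costs=2.0, computer_costs=3.0)
assert abs(ws_c - ws_a) < 1e-12
```

The sketch also shows why adaptation can obscure low effectiveness: the index is indifferent to which component bears the cost, so a user absorbing extra behavioural costs can mask a poorly behaving computer.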

This completes the conception for HF. From the initial assertion of the general design problem of HF, the concepts that were invoked in its formal expression have subsequently been defined and elaborated, and their coherence established.

 

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

The conception for HF is a broad view of the HF general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem. In addition, the conception takes the broad view on the research and development activities necessary to solve the general design problem and its instantiations, respectively. HF engineering research practices would seek solutions, in the form of (methodological and substantive) engineering principles, to the general design problem. HF engineering practices in systems development programmes would seek to apply those principles to solve instances of the general design problem, that is, to the design of specific users within specific interactive worksystems. Collaboration of HF and SE specialists and the integration of their practices is assumed.

Notwithstanding the comprehensive view of determinacy developed in Part I, the intention of specification associated with people might be unwelcome to some. Yet, although the requirement for design and specification of the user is being unequivocally proposed, techniques for implementing those specifications are likely to be more familiar than perhaps expected – and possibly more welcome. Such techniques might include selection tests, aptitude tests, training programmes, manuals and help facilities, or the design of the computer.

A selection test would assess the conformity of a candidate’s behaviours with a specification for the user. An aptitude test would assess the potential for a candidate’s behaviours to conform with a specification for the user. Selection and aptitude tests might assess candidates either directly or indirectly. A direct test would observe candidates’ behaviours in ‘hands on’ trial periods with the ‘real’ computer and domain, or with simulations of the computer and domain. An indirect test would examine the knowledge and skills (i.e., the structures) of candidates, and might be in the form of written examinations. A training programme would develop the knowledge and skills of a candidate as necessary for enabling their behaviours to conform with a specification for the user. Such programmes might take the form of either classroom tuition or ‘hands on’ learning. A manual or on-line help facility would augment the knowledge possessed by a human, enabling their behaviours to conform with a specification for the user. Finally, the design of the computer itself, through the interactions of its behaviours with the user, would enable the implementation of a specification for the user.

References

Ashby W. Ross, (1956), An Introduction to Cybernetics. London: Methuen.

Bornat R. and Thimbleby H., (1989), The Life and Times of ded, Text Display Editor. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Card, S. K., Moran, T., and Newell, A., (1983), The Psychology of Human Computer Interaction, New Jersey: Lawrence Erlbaum Associates.

Carey, T., (1989), Position Paper: The Basic HCI Course For Software Engineers. SIGCHI Bulletin, Vol. 20, no. 3.

Carroll J.M., and Campbell R. L., (1986), Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, Vol. 2, pp. 227-249.

Checkland P., (1981), Systems Thinking, Systems Practice. Chichester: John Wiley and Sons.

Cooley M.J.E., (1980), Architect or Bee? The Human/Technology Relationship. Slough: Langley Technical Services.

Didner R.S., (1988), A Value Added Approach to Systems Design. Human Factors Society Bulletin, May 1988.

Dowell J. and Long J. B., (1988a), Human-Computer Interaction Engineering. In N. Heaton and M. Sinclair (ed.s), Designing End-User Interfaces. A State of the Art Report. 15:8. Oxford: Pergamon Infotech.

Dowell, J., and Long, J. B., 1988b, A Framework for the Specification of Collaborative Research in Human Computer Interaction, in UK IT 88 Conference Publication 1988, pub. IEE and BCS.

Gibson J.J., (1977), The Theory of Affordances. In R.E. Shaw and J. Bransford (ed.s), Perceiving, Acting and Knowing. New Jersey: Erlbaum.

Gries D., (1981), The Science of Programming, New York: Springer Verlag.

Hubka V., Andreason M.M. and Eder W.E., (1988), Practical Studies in Systematic Design, London: Butterworths.

Long J.B., Hammond N., Barnard P. and Morton J., (1983), Introducing the Interactive Computer at Work: the Users’ Views. Behaviour And Information Technology, 2, pp. 39-106.

Long, J., (1987), Cognitive Ergonomics and Human Computer Interaction. In P. Warr (ed.), Psychology at Work. England: Penguin.

Long J.B., (1989), Cognitive Ergonomics and Human Computer Interaction: an Introduction. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Long J.B. and Dowell J., (1989), Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering. In Sutcliffe A. and Macaulay L., Proceedings of the Fifth Conference of the BCS HCI SG. Cambridge: Cambridge University Press.

Marr D., (1982), Vision. New York: W.H. Freeman and Co.

Morgan D.G., Shorter D.N. and Tainsh M., (1988), Systems Engineering. Improved Design and Construction of Complex IT Systems. Available from IED, Kingsgate House, 66-74 Victoria Street, London, SW1.

Norman D.A. and Draper S.W. (ed.s), (1986), User Centred System Design. Hillsdale, New Jersey: Lawrence Erlbaum.

Pirsig R., 1974, Zen and the Art of Motorcycle Maintenance. London: Bodley Head.

Rouse W. B., (1980), Systems Engineering Models of Human Machine Interaction. New York: Elsevier North Holland.

Shneiderman B., (1980), Software Psychology: Human Factors in Computer and Information Systems. Cambridge, Mass.: Winthrop.

Thimbleby H., (1984), Generative User Engineering Principles for User Interface Design. In B. Shackel (ed.), Proceedings of the First IFIP conference on Human-Computer Interaction. Human-Computer Interaction – INTERACT’84. Amsterdam: Elsevier Science. Vol.2, pp. 102-107.

van Gigch J. P. and Pipino L.L., (1986), In Search of a Paradigm for the Discipline of Information Systems, Future Computing Systems, 1 (1), pp. 71-89.

Walsh P., Lim K.Y., Long J.B., and Carver M.K., (1988), Integrating Human Factors with System Development. In: N. Heaton and M. Sinclair (eds): Designing End-User Interfaces. Oxford: Pergamon Infotech.

Wilden A., 1980, System and Structure; Second Edition. London: Tavistock Publications.

This paper has greatly benefited from discussion with others and from their criticisms. We would like to thank our colleagues at the Ergonomics Unit, University College London and in particular, Andy Whitefield, Andrew Life and Martin Colbert. We would also like to thank the editors of the special issue for their support and two anonymous referees for their helpful comments. Any remaining infelicities – of specification and implementation – are our own.


MUSE(SE) – MUSE for Software Engineers


Introduction to the Paper

MUSE is a Method for USability Engineering. It seeks to integrate usability into the development of interactive systems, and provides an environment in which human factors contributions can realise their full potential (Lim and Long, 1994, Cambridge: Cambridge University Press). MUSE comprises three phases: 1. Elicitation and Analysis; 2. Synthesis; and 3. Design Specification. MUSE is intended for application by human factors engineers. MUSE(SE), the version presented here, is intended for application by software engineers. It contains guidance, for example, concerning why and how to perform task analysis, as well as how to apply heuristics, with both of which human factors engineers would be assumed already familiar. The version of MUSE(SE) presented here was used to evaluate the method against target users; hence its specific, testing-oriented format.

James Middlemass and John Long, Ergonomics and HCI Unit, University College London

Introduction to James Middlemass

James Middlemass was an MSc student at UCL in the class of 1992/3 and a Research Fellow on European Systems and Software Initiative project 10290, ‘Benefits of Integrating Usability and Software Engineering Methods’. His subsequent work on integrating design knowledge into the MUSE (SE) method led to the version presented here.

Thank you for taking part in the trial application of MUSE(SE).

 

As the trial is part of a research project, it is important that you follow the procedures as closely as possible.

Please feel free to write on the procedures. Write a note next to any procedures that you find problematic; any comments you want to make whilst following the method, whether positive or negative, will be particularly valuable.

 

When the application is complete, your comments will be a valuable aspect of the evaluation, and will be used as an input towards future improvements to the method.

If you require help or advice on the method at any point during the test application, please feel free to contact me:

 

Phone: 0171 504 5316

Fax: 0171 580 1100

Email: j.middlemass@ucl.ac.uk


Contents

Introduction to MUSE(SE)
Notations used in MUSE(SE)
MUSE(SE) Procedures
Introduction
Phase 1
Extant Systems Analysis Stage
Examine Documents
Examine systems
Familiarise investigator with the system
Interview user representatives
Record findings
Construct ‘typical’ tasks
Study the systems
Decompose tasks
Identify usability requirements
OMT Cross-Checking Point
GTM stage
Generifying tasks
GTM Heuristics
Generification
Preparing GTM(y)
Preparing GTM(x)
Verify models
Phase 2
SUN stage
Document user problems
OMT Cross-Checking Point
DoDD(y) stage
Production of the DoDD(y)
Production of the user object model
OMT Cross-Checking Point
CTM(y) stage
Decompose task
Task Synthesis
CTM(y) supporting table
Allocation of function
Verify model
CTM Heuristics
OMT Cross-Checking Point
System and User Task Model
Decomposition of the CTM(y)
Assessing the design
Referring back to SUN and DoDD(y)
Completing the STM table
Document functionality
Handshake with SE
Phase 3
ITM(y) stage
Reviewing the STM(y)
H-C leaves
Referring to the DoDD(y)
H leaves
ITM diagram and table
Iterating the design
Locating screen boundaries
OMT Cross-Checking Point
ITM heuristics
Display Design stage
Defining screen layouts
Specifying IM(y)s
Dictionary of Screen Objects
Window management and errors
The DITaSAD
Display Design Stage Heuristics
Design Evaluation stage
Analytic evaluation
Empirical evaluation
Deciding where to redesign
Finalise documentation
Iteration Heuristics
Example
Extant Systems Analysis Stage
Statement of requirements
Examining the systems
Observational studies
Interviewing user representatives
‘Mind maps’ from interviews
TD(ext) products
TD supporting table
Tasks for test subjects
Usability Testing
Extract from the Ravden and Johnson Checklist
Choosing related systems
TD(ext) example: Finder
Identifying usability requirements
GTM stage
GTM(ext) for Finder
GTM(ext) for ResEdit
GTM(ext) for Microsoft Internet Explorer
GTM(ext) for NetScape Navigator
GTM(y)
GTM(x)
SUN stage
Statement of User Needs
DoDD(y) stage
DoDD(y)
User object model
Action – Object Matrix
CTM(y) stage
Composite Task Model
CTM Table
SUTaM stage
Extract from the STM
STM table
ITM(y) stage
Extract from the ITM
Decomposing the STM
ITM Table
Determining screen boundaries
Display Design stage
Pictorial screen layouts
Dictionary of Screen Objects
Dialog and Error Message Table
Extract from the DITaSAD
Design Evaluation stage
Analytic evaluation
Empirical evaluation
Paper prototyping
Impact analysis
Rank ordering problems
Using iteration heuristics
Reviewing PLUME categories
The Ravden & Johnson Evaluation Checklist
Blank Tables
Task Description Table
Generalised Task Model Supporting Table
Statement of User Needs
DoDD(y) Supporting Table
Composite Task Model Supporting Table
System and User Task Model Supporting Table
Interaction Task Model Supporting Table
Dialog and Error Message Table
Dictionary of Screen Objects Table


Introduction to MUSE(SE)

MUSE is a structured method for usability engineering. The method was developed to address the problem of Human Factors inputs to software design being ‘too-little-too-late’, where the input is mainly advice instead of specifications, and arrives too late in the process to be implemented. MUSE(SE) is an enhanced version of MUSE, intended for use by software engineers. Not only does it contain most of the knowledge needed to design effective user interfaces, it also contains procedures for checking the evolving design against the software engineering specifications. Although a certain amount of time must be devoted to MUSE(SE) during the early stages of a project, the benefits should justify the investment; the system should require fewer design iterations due to the user requirements being more clearly understood and the user interface having a better relationship to the requirements.

Many current Human Factors (HF) contributions to design are limited to a stage of design where the product developed by Software Engineers is available for usability assessment. Unhappily, this stage of design is one at which changes to the product may be prohibitively expensive. MUSE addresses this problem by specifying the user interface design process and the points at which HF and SE designs should be checked against each other.

The design of the user interface is approached ‘top-down’, based on information derived ‘bottom-up’. Design progresses in defined stages, from specification of general features of the tasks to be performed (derived from analysis of the User Requirements and any existing systems) to specification of the specific details of the user interface to be implemented. The user of the method is provided with the techniques to apply at each stage, and any checklists or guidelines required by the method. Points are specified at which certain features of the MUSE and SE design products should be cross-checked, to ensure that the functionality specified in the software engineering design is compatible with that required by the user interface design. Thus, the likelihood is maximised that the user interface under development will be implementable and will provide the appropriate functionality to support the user’s task.

The diagram on the following page shows a schematic view of the MUSE method. A brief description of the method follows, outlining the three main phases of the method and the main products produced.

[Schematic diagram of the MUSE method – not reproduced]

The first phase of the method is called the Information Elicitation and Analysis Phase. It involves collecting and analysing information intended to inform later design activities, and consists of two stages, the Extant Systems Analysis stage and the Generalised Task Model stage. During the Extant Systems Analysis stage background design information is collected that relates both to the system currently in use and to other systems that are related in some way, for example by having a similar task domain. The information concerns the users of the systems, the devices used and the tasks performed. The objective is to identify those features of the systems that are problematic for users, or that may provide good ideas suitable for re-use in the target system. During the Generalised Task Model stage, a device independent task model of the existing systems (GTM(x)) is generated using the task descriptions from the previous stage, and this is used in conjunction with the Statement of Requirements to produce a Generalised Task Model for the system to be designed (GTM(y)).

The second phase of MUSE, the Design Synthesis phase, begins by establishing the human factors requirements of the design, in terms of performance criteria, likely user problems or required task support; these are recorded in the Statement of User Needs (SUN(y)). The semantics of the application domain as it relates to the worksystem are also analysed in this stage, and are recorded as a semantic network called the Domain of Design Discourse, or DoDD(y). The Composite Task Model (CTM) stage expresses the conceptual design of the target system, and is produced using the GTM(x) and the GTM(y). The process is informed by the SUN(y) and the DoDD(y) produced in the previous stage. The resulting design is checked against that of the software engineering stream, to ensure that the correct functionality will be provided. The conceptual design addresses error-free task performance only, in order to avoid obscuring the overall structure of the task.

During the System and User Task Model stage, the Composite Task Model is decomposed to separate the subtasks that are to be performed using the system under development from those that are performed using other devices. The subtasks performed using the ‘target’ system are represented in the System Task Model, while the remaining (‘off-line’) tasks are represented in the User Task Model. Within the STM, allocation of function between user and computer is performed, and represented by designating actions as belonging to either ‘H’ (the user) or ‘C’ (the computer).

The final phase of MUSE is termed the Design Specification phase, and develops the conceptual design further to arrive at a device-specific implementable specification which includes error-recovery procedures. In the Interaction Task Model stage, the leaves of the STM representing user (‘H’) actions are decomposed further to produce a device-level specification of the interaction. This specification is mainly informed by the selected User Interface Environment, but the SUN(y) and DoDD(y) may also be used to further inform design decisions. The ITM(y) is annotated to indicate the locations of intended major screen transitions, which in practice are generally the boundaries of individual sub-tasks. During the Interface Model stage, the leaves of the STM(y) representing computer (‘C’) actions are decomposed to produce a set of Interface Models. These are detailed descriptions of the behaviours exhibited by screen objects, and the conditions that trigger them. In the Display Design stage, a set of Pictorial Screen Layouts (PSL(y)) are defined to correspond with the screen boundaries identified in the ITM(y). The interface objects that make up the screens are described in the Dictionary of Screen Objects (DSO(y)). A further product called the Display and Inter-Task Screen Actuation Diagram is produced, and details the conditions under which screen transitions may occur together with the conditions that would trigger the presentation of an error message. The error messages and dialogues are listed in the Dialogue and Error Message Table (DET).
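As a reading aid, the phases, stages and principal products described above can be arranged in a small data structure. This summary is ours, not part of the method; abbreviations are as in the text:

```python
# Our summary (not part of MUSE) of the phase/stage/product structure
# described in the preceding paragraphs.
MUSE_PHASES = {
    "Information Elicitation and Analysis": {
        "Extant Systems Analysis": ["TD(ext)", "GTM(ext)"],
        "Generalised Task Model": ["GTM(x)", "GTM(y)"],
    },
    "Design Synthesis": {
        "Statement of User Needs / Domain of Design Discourse": ["SUN(y)", "DoDD(y)"],
        "Composite Task Model": ["CTM(y)"],
        "System and User Task Model": ["STM(y)", "UTM(y)"],
    },
    "Design Specification": {
        "Interaction Task Model": ["ITM(y)"],
        "Interface Model": ["IM(y)"],
        "Display Design": ["PSL(y)", "DSO(y)", "DITaSAD", "DET"],
    },
}

# Flatten to check that each product appears in exactly one stage.
products = [p for stages in MUSE_PHASES.values()
            for prods in stages.values()
            for p in prods]
assert len(products) == len(set(products))
```

Laying the products out this way also makes it easy to see where the OMT cross-checking points fall: each sits between the completion of one stage's products and the start of the next.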

 

Notations used in MUSE(SE)

The main notation used by MUSE(SE) is Jackson Structure Diagram Notation (SDN). Some other notations are used during domain modelling, but these will be described in the course of the procedures.

SDN is a hierarchical notation used in MUSE(SE) for representing the structure of tasks and the behaviour of user interfaces. A supporting table is usually generated for each SDN diagram to provide additional detail; the recommended format of the table for each product will be given at the appropriate point in the procedures.

 

2. Sequence

[SDN diagram not reproduced]

Task 1 consists of a sequence of A, B, and C. C consists of a sequence D, E. Task 1 is therefore a sequence A, B, D, E.

 

3. Selection

[SDN diagram not reproduced]

Task 2 also consists of a sequence A, B, C. However, C consists of a selection over D and E (indicated by the ‘o’); here D and E describe actions, but the same notation can be used to describe conditions. Task 2 therefore consists of either A, B, D, or A, B, E.

 

4. Iteration

[SDN diagram not reproduced]

Once again, the task consists of a sequence A, B, C. C consists of an iteration of D and E (indicated by the ‘*’), which is repeated until the user is ready to stop. Task 3 consists of a sequence such as A, B, D, E, D, E, D, E.

Finally, combinations of constructs can be used to represent more complicated behaviours. The most useful of these is lax ordering, where parts of a task can be completed in any order.

5. Lax ordering

[SDN diagram not reproduced]

Task 4 consists of a sequence A, B, C, as before. This time, C consists of an iteration over a selection between D and E. Depending on the conditions applicable to the iteration and selection, this construct can represent an instance where neither D nor E is performed, where either D or E is performed one or more times, or where a sequence D, E or E, D is performed one or more times. In the case of Task 4, the sequence of events could be any of A B E D, A B E E D, or A B D E, because the condition on the iteration is ‘until both done’.
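The four constructs can be mimicked in code. The sketch below is our illustration, not part of MUSE(SE); each helper enumerates the action sequences a small task allows (iteration is bounded, and lax ordering is shown in its minimal each-part-once form):

```python
from itertools import permutations, product

# Our illustration of the four SDN constructs (not part of MUSE(SE)).
# Each helper returns a set of legal action sequences, each a tuple.

def leaf(name):
    return {(name,)}

def sequence(*parts):
    """Sequence: one legal run of each part, concatenated in order."""
    return {sum(run, ()) for run in product(*parts)}

def selection(*parts):
    """Selection ('o'): exactly one of the alternatives."""
    return set().union(*parts)

def iteration(part, repeats):
    """Iteration ('*'): the body repeated; bounded here for enumeration."""
    return sequence(*([part] * repeats))

def lax(*names):
    """Lax ordering, minimal case: each part once, in any order."""
    return {tuple(p) for p in permutations(names)}

# Task 1: A, B, then C = sequence(D, E)
task1 = sequence(leaf("A"), leaf("B"), sequence(leaf("D"), leaf("E")))
assert task1 == {("A", "B", "D", "E")}

# Task 2: A, B, then C = selection over D and E
task2 = sequence(leaf("A"), leaf("B"), selection(leaf("D"), leaf("E")))
assert task2 == {("A", "B", "D"), ("A", "B", "E")}

# Task 3: A, B, then C = iteration of (D, E), here repeated three times
task3 = sequence(leaf("A"), leaf("B"),
                 iteration(sequence(leaf("D"), leaf("E")), 3))
assert ("A", "B", "D", "E", "D", "E", "D", "E") in task3

# Task 4: A, B, then C = lax ordering of D and E (repeats such as
# A B E E D are also legal in SDN but omitted from this minimal form)
task4 = sequence(leaf("A"), leaf("B"), lax("D", "E"))
assert task4 == {("A", "B", "D", "E"), ("A", "B", "E", "D")}
```

Enumerating sequences this way makes the ambiguity warning below concrete: two diagrams that look alike can permit different sequence sets, which is why the supporting tables matter.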

Note: MUSE(SE) uses SDN in a fairly informal manner to describe behaviours of the user. As a result, diagrams can sometimes contain ambiguities, and this is one reason why it is important that supporting tables are used to provide additional information about the diagrams.

MUSE(SE) Procedures

Introduction

The next section of this document describes the procedures for MUSE(SE). Before you start, you should understand how to draw the SDN diagrams used in MUSE(SE), and you should have a basic understanding of the purpose of each of the MUSE products. Refer to the example after the procedures if you need to see what a product should look like.

Each section of the document contains a summary of the procedures for a phase or stage of MUSE(SE), followed by the detailed procedures. Some stages are provided with a set of heuristics, or ‘rules of thumb’ after the procedures; these have been selected because they offer guidance that may be relevant at that point in the method. Several of the heuristics are included more than once; this is because they are relevant at more than one point in the method.

Within the detailed procedures, procedures in bold are described in more detail afterwards; where this involves several steps to be followed, they are listed either as bullet points or as sub-procedures, e.g. 1a, 1b, etc. Procedures in plain text are not described further, but may be followed by commentary.

Every so often there is an ‘OMT cross-checking point’. If you are working in a team, then you should arrange to meet with the person responsible for the OMT products at these points to compare designs. If you are working on your own, then you should update your OMT products at these points, using the cross-checking procedures to indicate the MUSE(SE) products that should be used to inform the development of the OMT products[1]. If it turns out that it isn’t possible to make the OMT products agree with the MUSE(SE) products, the cross-checking procedures can be used to determine which MUSE(SE) products will need to be amended.
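At its core the cross-check is a consistency comparison between two evolving models. The sketch below is hypothetical (the product contents are invented for illustration; MUSE(SE) prescribes procedures, not code): it compares the operations the interface design requires of the computer against those the OMT object model provides.

```python
# Hypothetical sketch of an OMT cross-checking point (contents invented):
# operations required by the interface design's computer ('C') actions
# are compared with operations provided by the OMT object model.

required_by_ui = {"open_document", "save_document", "list_versions"}
provided_by_omt = {"open_document", "save_document"}

missing = required_by_ui - provided_by_omt
if missing:
    # Either the OMT products gain these operations, or the cross-checking
    # procedures identify which MUSE(SE) products to amend instead.
    print("Resolve mismatch for:", sorted(missing))
```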

Where you see a note like this, in square brackets:

[Refer to xxx]

…it means you have to refer to another document, which will be included at the back of the procedures. Note that proformas for all of the tables required by the method are also included at the back of the procedures so that they can be photocopied and used for making handwritten notes during the design process. Do not be tempted to omit completion of the tables supporting each product.  The tables are at least as important to the design process as the diagrams, because they contain the design rationale.

Every so often there is a table like the one below for you to rate the procedures you have just followed. If a particular procedure causes difficulty, please make a note of it so that you remember to record it in the comments section of the table. (Documents referred to in the square bracketed comments should be treated as part of the procedures).

The table asks you to rate each section of the method according to how ‘coherent’ and ‘complete’ you found the procedures, and to rate the extent to which the procedures ‘concerned what was desired’. You are also asked to record how long each stage took (in person hours, or days). Coherent refers to how understandable the procedures were; if they made little sense, then you would disagree with the statement that they were coherent, whereas if they were perfectly clear then you would agree. The completeness of the procedures refers to whether or not they seemed to miss anything out; you would disagree with the statement that they were complete if you had to work out what to do yourself because the procedures were insufficiently detailed, or if you had to refer to guidelines that weren’t mentioned in the method. The extent to which the procedures ‘concern what is desired’ refers to how relevant you felt they were to the MUSE design process; if the procedures were clear and detailed, but still didn’t enable you to produce the appropriate design product, then you would disagree that they concerned what was desired. The space at the bottom of the table is provided for your comments on your answers, or on other aspects of the stage.

 

Example Rating table

Please rate the above procedures according to the extent they fit the descriptions in the left hand column

 

Rating scale: Agree strongly | Agree | Neutral | Disagree* | Disagree strongly*

Coherent (i.e. understandable)

Complete (i.e. there was nothing missing)

Concerned what was desired (i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:     Diagrams ___    Tables ___    Revision ___    Other (specify) ___

Further comments:

* Please describe what the problem was


Phase 1

Information Elicitation and Analysis

1. MUSE Overview

 

MUSE(SE) Phase 1 Procedures: Extant Systems Analysis Stage

[Diagram: Extant Systems Analysis stage – not reproduced]

 

These steps involve applying some techniques to elicit the information, which are summarised below.

The detailed procedures on the following pages will describe how to carry out each of these steps:

1. Examine Documents:
   Obtain the statement of requirements
   Establish the requirements

2. Examine the systems:
   Identify Users
   Identify Systems
   Identify Tasks
   Identify circumstances of use

   2.1 Familiarise investigator with the system to find out how it works, by:
       Observational studies
       Task execution

   2.2 Interview user representatives to obtain problems and task objects, using:
       Card sorting
       Structured interviews

   2.3 Record findings of 2.1 as preliminary TD products, and separate those of 2.2 into problems and domain information

   2.4 Construct ‘typical’ tasks for use during testing

   2.5 Study the systems using:
       Informal / observational studies / usability tests
       Concurrent verbal protocol
       Task execution
       PLUME, guidelines and heuristics
       Checklist

3. Decompose tasks to:
   Produce TD(ext)
   Process TD(ext) into GTM(ext)

4. Identify usability requirements

Detailed procedures

The following paragraphs provide detailed procedures describing the information to be gathered during each of the steps in the analysis stage, and also describe how to record the information in the appropriate MUSE(SE) product for later reference.

It is recommended that you read the procedures through before performing them, so that you can plan each stage. It is assumed that a project plan has been produced; consult it for details of how quality control is to be addressed, and for the number and scope of any planned design iterations. Note the effort allocated to each stage of the method so that it can be reflected in the detailed plans for that stage. Access to users should be arranged as early as possible in the project, and a file should be opened to store the products of each stage of the method.

The procedures for each of these steps will now be discussed in detail.

1. Examine Documents:
   Obtain the statement of requirements
   Establish the requirements

The statement of requirements should be obtained, and reviewed in order to gain an understanding of what the target system will be required to do, in terms of the functionality that the system will have, and the types of tasks it will support. The requirements document will need to be consulted during the course of design, so it should be filed with the MUSE(SE) design documents for reference.

2. Examine the systems:
   Identify Users
   Identify Systems
   Identify Tasks
   Identify circumstances of use

Identifying the users

The following information concerning the users of the system should be obtained, by asking the ‘client’, by consulting user representatives, or by conducting a straw poll of users. If there are a number of different groups who will use the system, then the information should be collected for each group. If the user group is expected to contain a lot of variation within any or all of the categories, then you should make a note of this and attempt to estimate the most likely range of variation.

  • Number of users
  • Type of users
  • Experience level
  • Computer skills
  • Other systems used (now)
  • Education level
  • Tasks performed using the system
  • Age
  • Sex

Any other information that may be relevant should also be noted.
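Where a project keeps these notes electronically, the checklist above maps naturally onto a simple record per user group. A minimal sketch in Python (the class name, field names and example values are illustrative, not MUSE(SE) notation):

```python
from dataclasses import dataclass, field

# Hypothetical record for one user group; one instance should be
# completed per distinct group of users of the system.
@dataclass
class UserGroup:
    name: str
    number_of_users: int
    user_type: str
    experience_level: str                 # e.g. "novice" .. "expert"
    computer_skills: str
    other_systems_used: list = field(default_factory=list)
    education_level: str = ""
    tasks_performed: list = field(default_factory=list)
    age_range: tuple = (None, None)       # estimated range of variation
    notes: str = ""                       # any other relevant information

# Invented example entry.
clerks = UserGroup(
    name="Stock control clerks",
    number_of_users=12,
    user_type="clerical",
    experience_level="intermediate",
    computer_skills="basic office software",
    other_systems_used=["legacy stock card index"],
    tasks_performed=["Check stock levels", "Generate orders"],
    age_range=(25, 60),
)
```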

 

 

Identifying the tasks

The following aspects of the task the system is intended to support should be noted:

  • Who does the task
  • Task goals
  • Frequency
  • Duration
  • How often errors occur, and how critical this is
  • What subtasks there are

Identifying the circumstances in which the system is used

An understanding should be gained of the circumstances surrounding use of the system; whether it is used once a week or every five minutes; whether using the system is considered enjoyable or a chore, and whether the users can choose whether or not to use the system. Any other observations of this kind should also be noted.

  • Use pattern
  • Frequency of use
  • Motivation for use: what the system means to the users
  • Whether use is mandatory or discretionary

These preliminary notes should be treated as forming part of the statement of user needs, which will be constructed later in the method following detailed analysis.

 

2.1 Familiarise the investigator with the system by:

  • Observational studies
  • Task execution

Select systems to examine based on the task that the target system is required to support. The current system is always selected, and similar systems can be selected as well if they appear likely to prove informative. (You might want to pick the related systems after examining the current system. Only the relevant parts of related systems are analysed, and only to the level of detail that is likely to be informative).

To determine which related systems should be examined, the Statement of Requirements should be examined. By considering the key characteristics of the system (i.e. what general type of system it is), together with any relevant constraints, it should be possible to produce a list of systems which have something in common with it from the user’s point of view. Systems that involve doing a similar sort of task, or which impose similar constraints on the user are the most likely to provide good design ideas.

Once you have a list of candidate systems, select which ones to examine, bearing in mind the time available and the ease with which access can be arranged. It is suggested that at least three systems are chosen: the current system; the ‘next best’ system or closest available alternative; and a system where users do a similar task but which either works well from the users’ point of view or presents similar problems (the latter might provide insight into the cause of the problems).

Following selection of systems, informally observe users performing the tasks to obtain the following information:

  • The main tasks the users have to achieve
  • Whether these tasks have any subtasks
  • The main behaviours of the user and the computer when performing the tasks
  • How the behaviours are decomposed in relation to the tasks and over time
  • The work domain objects, their attributes, values, and properties (methods)

The investigator performs the task, to assess likely levels of:

User costs:
  • How difficult the system is to learn, i.e. training requirements
  • How much physical effort is needed to use the system, i.e. fatigue and physical effort involved
  • How much mental effort is needed to use the system, i.e. costs of mental fatigue, correcting errors, and time taken to perform the task

Device costs:
  • Structural, i.e. wear and tear on the device, such as repetitive key operations
  • Resource, i.e. processor use

(This evaluation of costs should be used to flag areas for later investigation, following greater familiarisation with the device. Resource costs incurred by the device are of relevance only in circumstances where they are likely to constrain the solution, for example where a very slow processor is being used or memory is severely limited).

Whilst performing the task and experimenting with the device, you should seek to understand the functionality and structure of the device. This is not necessarily equivalent to gaining knowledge of the structure of the task or the subtasks, because the device may not support the user’s task very well at all, and will frequently have surplus or inappropriate functionality. Whilst examining the user interface, try to identify the main objects that are presented to the user and what their properties appear to be. You will need these before you interview the users, so now would be a good time to read procedures for the following step (2.2).

Don’t attempt to construct TD products based solely on experimentation with the device, as this can lead to replicating the problems of the existing system in the new design. Information about the structure of the task obtained by this means must be regarded as unreliable until validated by observation of real users, but gathering it is nonetheless a very useful preliminary activity.

To continue the process of familiarising the investigator with the system before user testing commences, a small number of users should be interviewed:

2.2 Interview user representatives to obtain problems and task objects, using:

  • Card sorting
  • Structured interviews

The investigator interviews a small number of representative users (about 2 or 3 should be sufficient, or enough to get a cross section of the users if the user group is very varied). The objective of the interview is to obtain more information on the main tasks that are carried out using the system, and what the semantics of these tasks are (i.e. what the task involves, at a fairly high level – without going into the details of using the device, because this will be studied by direct observation). The investigator should also find out whether the users think that there are any problems with the task as it is currently performed. The investigator should then discuss the task with the users to discover the main objects that are transformed during the task, and any other entities involved; as well as finding out the attributes that get transformed, the properties of the objects and the rules concerning them should be elicited.

Cards are prepared for each of the objects identified during the initial familiarisation of the investigator with the system. Each card will contain the name of the object together with the attributes, values and properties (i.e. methods) previously identified; spare blank cards should be provided for new objects or relationships uncovered during the interview. The objects should have abstract attributes as well as physical ones (e.g. ‘safe’, ‘unsafe’, ‘urgent’, ‘done’ or ‘ready’). These cards are used during the interview to help elicit further information about the objects by correcting the descriptions, sorting the cards into groups, and naming or relating the groups with the extra cards provided; this is described in more detail on the next page. A whiteboard and some Post-It notes should be obtained before the interview starts.

The users are interviewed (with the system present) to obtain information on:

  • The goals of the task in terms of the objects and attributes transformed
  • The main IWS behaviours performed (i.e. task and semantic level behaviours)
  • The user’s mental processes and representations
  • Particular problems experienced
  • The work domain objects, and their attributes, etc.

Arrange access to a number of users (ensure enough are interviewed to represent a good cross-section of the user group for the target system) so that you can interview them with the system present. Video or audio recording the interviews may help with later analysis, and it would be useful to have a whiteboard available.

  • Begin by introducing yourself and telling them the purpose of the discussion. Let them know that they’re the expert on their job, and you’re designing a system to help them do it better, so you need their input. It’s important that they don’t feel you’re there to evaluate them and they realise it’s the system that’s under scrutiny. Say you’re interested in what their job involves (i.e. the main tasks), the entities that get modified by the task or that have a bearing on the tasks, the way they actually do their job, and where and how the current system supports the job; the idea is for them to help you to determine whether the new system would benefit from any modifications, or whether it should be like the old one. Explain that you’re going to draw a diagram to show the structure of their task, a list of good and bad features of the system, and a ‘mind-map’ diagram to illustrate the rules that they need to know to do the task and how they think about the properties of the ‘objects’ involved.
  • Get them to describe briefly and in general terms the tasks that they do, which of them they use the system to support, and what the general goals of the tasks are. Make a list of the tasks, and note any potential objects they mention whilst they are speaking. Check if the tasks must be performed in any set order, and make a note of this. List the goals of the tasks.
  • Sketch an initial task description diagram. The top node should describe the overall diagram, e.g. ‘Widget stock controller’s tasks’. The top row of the task model should consist of the main tasks that they mentioned, e.g. ‘Check stock levels’, ‘Establish widget requirements’, ‘Generate orders’, ‘Process a delivery’, ‘Notify accounts department’, ‘Update stock levels’. Make sure that the diagram reflects any constraints on the ordering of the tasks. Lead them through the diagram, explaining the notation, and ask them if it’s correct. If it isn’t, change it so it is. Now mark the tasks that they use the system to support, and ask them to show you how they would perform each task.
  • Start a new diagram for each task, labelling it to agree with the corresponding node on the main diagram. Ask them to demonstrate the task bit by bit, so that you can start to decompose the task description, carrying the decomposition down to a level where the diagram would be sufficient to enable someone else to perform the task. As they go, ask them to point out where they find the task problematic; note the problems so that you can record them in the tables later on. Make a note of any new objects or attributes that are revealed whilst they demonstrate the task. Show them the task description, and ask them whether it describes the way they would normally do the task, and if it’s incomplete or incorrect in any way. Continue until the whole task is documented as a task description diagram.
  • Write the name of each object and entity on the cards onto a Post-It, and stick the Post-Its to the whiteboard. With the user’s help, arrange them on the whiteboard so that the relationships between them can be indicated by connecting lines, and annotate the diagram to indicate what the relationships are, as in an entity-relationship diagram. Continue until the user is happy that the diagram is complete and reflects their view of the task. (Remember that you’re trying to elicit the user’s view of the task domain at this point; you’re not trying to construct the software engineering object model (or even necessarily a ‘correct’ entity-relationship diagram), so it doesn’t matter if there are some objects that you won’t be implementing, some that will need to be decomposed further when the system design progresses, or if the relationships in the model are more like the methods of some of the objects. The attributes of the objects will probably inform the SE model, even if the objects themselves are differently organised, as will the ‘methods’).
  • Copy the completed model onto paper so that you can refer to it later when the MUSE(SE) DoDD(y) is produced. Any additional attributes or methods discovered should be added to the appropriate card, and any new objects discovered should be recorded.

The interviewer should aim to find out whether the categories of information above are completely represented, perhaps by getting the users to think of exceptions that aren’t covered.
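The task description diagrams sketched during the interview are labelled hierarchies, so they can also be kept in electronic form as a small tree. A minimal sketch in Python, reusing the hypothetical widget-stock task names from the steps above (the node class and subtask names are illustrative only):

```python
# A minimal tree for recording a hierarchical task description (TD).
class TDNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def outline(self, depth=0):
        """Return the decomposition as a list of indented outline lines."""
        lines = ["  " * depth + self.name]
        for child in self.children:
            lines.extend(child.outline(depth + 1))
        return lines

# Invented example: two tasks, one decomposed into subtasks.
td = TDNode("Widget stock controller's tasks", [
    TDNode("Check stock levels"),
    TDNode("Generate orders", [
        TDNode("Identify items below reorder level"),
        TDNode("Raise purchase order"),
    ]),
])
print("\n".join(td.outline()))
```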

2.3 Record findings of 2.1 as preliminary TD(ext) products, and separate those of 2.2 into behaviours and domain information

The goals of the task are used with the information about behaviours gathered from the interview to form the top level of a preliminary TD(ext). The IWS behaviours and decomposition information from the observation and interview is added to complete the initial structured diagrams.

Use the task descriptions from the interviews to derive a single model for each system studied that describes the behaviour of the users observed, showing where choices exist or alternative orders can be used for performing the task. It may be possible to base this on the most complete model from the interviews conducted about each system; alternatively, you will need to build the model up based on several interviews.

A table like the one shown below should be prepared for each diagram, and any notes about the diagram entered into the cells. The tables can be referred to later in the design process, to avoid losing ideas or observations.

 

Column               Entry
Name                 Which cell is referred to
Description          Further description as necessary
Observation          Any notes
Design Implication   Any implications, based on ESA work
Speculation          Any design speculations occurring at this stage

The information from the interview concerning the users’ mental behaviours is used to elaborate the appropriate points in the diagram, and the information on mental representations should be filed for later inclusion in the DoDD(y). The information concerning costs from the investigator’s own task performance can be used to prime the collection of information during usability tests by suggesting particular things to look out for, as should the user problems discussed during the interview. Where the order of task performance differed between individuals, the task is lax ordered; note this in the table and record it in the SUN when it is produced later in the method. Using the TD(ext), it should be possible to follow the sequence of the contributing TDs; where it is not, this must also be noted in the table and recorded in the SUN, so that the Composite Task Model can be checked to ensure that the problem has not been ported along with the high-level structure of a TD(ext).

2.4     Construct ‘typical’ tasks to be used during testing.

Information from the preliminary TD(ext) and the other procedures above is used to construct realistic examples of tasks for the users to perform whilst the investigator records them. The tasks can be used to obtain more information about potential user problems noted earlier, by designing them in such a way that the user is likely to encounter the problem as they do the task. The descriptions of the tasks should not dictate the manner of task execution, only the task to be achieved by the users and sufficient contextual information to give the task meaning. (For example: ‘You need to email a Word document to x, who works at y; you know they use a PC, but you’ve no idea what word processor they have’). Before using the tasks for testing, they should be checked with a user representative to ensure that they are realistic. As well as constructing sufficient tasks for current testing needs, some should be prepared ready for testing the design at the end of the method (if possible, use different tasks for testing now and at the end of the method; this will provide greater confidence that the design supports the full range of tasks, not just the instances that were studied in detail).

2.5 Study the systems using:

  • Informal / observational studies / usability tests
  • Concurrent verbal protocol
  • Task execution
  • PLUME, guidelines and heuristics
  • Checklist

More than one user should be studied for each system that is to be examined, whether related or current. You should make sure you have your preliminary task description for the relevant system available, and that a notepad is handy to write down any additional observations.

Recruit some typical users to use the system whilst you observe them. If possible, the session should be recorded on video (or at least audio tape, if a video camera is not available). Make sure the user understands that it is the system that is being evaluated and not them.

Provide each user with one of the descriptions of typical tasks that were generated in the previous step. Ask them to perform the task described as they usually would, but tell them that it’s not a test and you’ll help them if they get into difficulties; whilst they are doing the task, ask them to provide a running commentary describing what they are thinking about and any assumptions they are making about the task or the system. You may find you need to remind the user to keep their commentary going from time to time, particularly if they start getting into difficulty. If they get into severe difficulties, it may be necessary to give them a hint, or even to stop the trial and discuss the problem.

Observe the users performing the task to uncover any mistakes or incompleteness in the TD(ext); where found, these should be noted. Video (or at least audio) recordings of the subjects should be made wherever possible, to support later analysis of interesting events or things that happened too quickly to be noted in real-time. New domain objects or attributes that are observed are also noted for the DoDD(y). User problems or errors noted during the test are noted, so that they can be investigated further in later trials, and recorded in the Statement of User Needs when it is constructed.

The verbal protocol is used to annotate the TD(ext) product with the mental processes of the user, as are the user problems, errors, and performance shortfalls. The notes made during observation of users should be written up in the tables for the TD(ext) product so that they will not be forgotten later in the design.

The notes gathered in this stage also form an input to the Statement of User Needs. As much as possible, group the problems according to which of the following categories they appear to concern most directly:

  • Productivity
  • Learnability
  • User satisfaction
  • Memorability
  • Errors

These categories are known as the PLUME categories, and will be revisited later in the method when the Statement of User Needs is produced.
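If the problem notes are kept electronically, grouping them under the PLUME categories is a one-pass sort. A minimal sketch in Python (the example problem descriptions are invented for illustration):

```python
from collections import defaultdict

# The five PLUME categories, in a stable report-ready order.
PLUME = ("Productivity", "Learnability", "User satisfaction",
         "Memorability", "Errors")

# Invented observations: (category, note) pairs from testing.
observations = [
    ("Errors", "Mistyped part codes accepted without warning"),
    ("Learnability", "Order screen needed a demonstration before use"),
    ("Productivity", "Stock check requires re-entering the same data"),
]

# Group the notes under their categories.
by_category = defaultdict(list)
for category, note in observations:
    assert category in PLUME, f"unknown category: {category}"
    by_category[category].append(note)

# Print a grouped summary in PLUME order.
for category in PLUME:
    for note in by_category.get(category, []):
        print(f"{category}: {note}")
```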

Users’ mental representations (i.e. the notions they have about objects, their properties and the rules for manipulating them) should be noted for use during construction of the Domain of Design Discourse product (DoDD(y)).

[Obtain a copy of the Ravden and Johnson checklist, which is reproduced at the back of these procedures]

Finally, the investigator uses the system once again, this time employing the Ravden and Johnson checklist. In addition, a styleguide and any relevant guidelines or heuristics may be used to assess the device, paying particular attention to areas where errors were noted under PLUME categories, with the goal of diagnosing the source of the problem. The information resulting from this is used to annotate the TD(ext), and filed ready for inclusion in SUN(y). If the user’s workstation is to be redesigned, it should be assessed against an appropriate set of guidelines such as those found in the US MIL-STD or the EC Directive; relevant findings from this assessment may be used to annotate the TD, and should be filed for inclusion in the SUN along with an assessment of any relevant user physical limitations, also derived from guidelines or standards.

Repeat procedures 2.1, 2.3, and 2.5 for any related systems identified.

3. Decompose tasks to:
   Produce TD(ext)
   Process TD(ext) into GTM(ext)

The information from the second set of observational studies (step 2.5) is used to complete the TD(ext), which should be constructed following the above procedures for the preliminary TD(ext) given in steps 2.2 and 2.3.

The TD(ext) table should now be completed further with the evaluation information on behaviours from the observational studies, and the information on mental processes gained in the interviews and from the card sorting and protocol activities. The tables are also annotated with information on the quality of task performance (i.e. how well the users were able to achieve the task) from the usability testing and domain objects from observation, interviews, and card sorting. The TD(ext) is then summarised and abstracted to a device independent level to form the GTM(ext); GTM(ext) production will be discussed as part of the GTM stage.

4. Identify usability requirements

At this point, identification of the usability requirements can be performed, and acceptable levels for productivity, learnability, user satisfaction, memorability and errors should be decided. A means of determining the acceptability of these properties should be decided, and they should be prioritised and recorded. The styleguide that the target design will be expected to follow should be selected at this stage, and this should be noted as one of the usability requirements.
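The decisions made in this step (an acceptable level, a means of determining acceptability, and a priority for each PLUME property) can be recorded as simple structured entries. A minimal sketch in Python, with hypothetical field names and invented example targets:

```python
from dataclasses import dataclass

# Hypothetical record of one usability requirement; the fields mirror
# the decisions named above, but are not MUSE(SE) notation.
@dataclass
class UsabilityRequirement:
    plume_property: str   # Productivity, Learnability, User satisfaction, Memorability, Errors
    target: str           # the acceptable level decided for this property
    measure: str          # how acceptability will be determined
    priority: int         # 1 = highest priority

# Record the requirements in priority order.
requirements = sorted(
    [
        UsabilityRequirement("Learnability", "productive within one hour", "training trial", 2),
        UsabilityRequirement("Errors", "under 2% of transactions", "usability test log", 1),
    ],
    key=lambda r: r.priority,
)
```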

 

OMT Cross-Checking Point:

Refer to the Use Cases and scenarios generated as part of the OMT process, and carry out the following checks, considering the models as a whole in both cases.

  • Make sure that user and device actions (and device semantics) documented in the TD products are described correctly in the use cases and scenarios (to the extent that these are likely to remain unchanged in the new system; it’s more important that the models do not contradict each other rather than that they are identical).
  • Make sure that domain objects and their attributes documented in the task descriptions are correctly described in the use cases and scenarios (to the extent that they are likely to remain unchanged in the new system), particularly where user inputs are concerned.

ESA Rating table

Please rate the above procedures according to the extent to which they fit the descriptions in the left-hand column.

                                                    Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*
Coherent (i.e. understandable)
Complete (i.e. there was nothing missing)
Concerned what was desired (i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:    Diagrams: ____    Tables: ____    Revision: ____    Other (specify): ____

Further comments:

* Please describe what the problem was.
MUSE(SE) Phase 1 Procedures: GTM stage

Following Extant Systems analysis, the next stage of the method involves abstracting from the task models generated from each system studied (the TD(ext)s) to produce a device independent view of each system called a Generalised Task Model, or GTM(ext). These models are then combined to result in one that describes all the features of interest of the current systems, called the GTM(x). A similar model (GTM(y)) will be produced of the target system, based on the statement of requirements for the purposes of comparison. The following diagram summarises the stage.

[Diagram: MUSE(SE) Phase 1 procedures, GTM stage]

Generifying tasks to produce GTM(ext)s

Generification involves raising the level of description of the tasks so that they are device independent and can be compared with each other more easily. A GTM(ext) represents the manner in which tasks are currently performed, so one GTM(ext) is required for each type of task studied (i.e. if related tasks were examined, each requires a GTM(ext)). Frequently, much of the work of producing a GTM(ext) involves summarising the lowest levels of description and making sure that terms are used consistently both within and between diagrams. Where this is made difficult by a large or complicated task description, the following procedures can be used:

  • List out the objects and actions
  • Eliminate redundant items (so each item is listed once)
  • Group the terms that appear similar
  • Name each group (the group names can be validated by showing them to users, or the users could help with the grouping process if this is convenient)
  • Reconstruct the model, using the generic terms
  • Validate the model by asking users if it is a description of the original task
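The ‘list the objects and actions’ and ‘eliminate redundant items’ steps above amount to an order-preserving de-duplication of the terms extracted from the task description. A minimal sketch in Python (the term list is invented; treating case and spacing variants as one item is an assumption that should be checked with users):

```python
# Eliminate redundant terms while keeping the order of first appearance.
def unique_terms(terms):
    seen = set()
    result = []
    for term in terms:
        key = term.strip().lower()   # assume case/space variants are one item
        if key not in seen:
            seen.add(key)
            result.append(term.strip())
    return result

# Invented terms as they might appear scattered through a TD(ext).
raw = ["Order form", "order form", "Stock card", "Order Form", "Stock card"]
print(unique_terms(raw))   # ['Order form', 'Stock card']
```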

Some rules of thumb to be borne in mind when preparing the GTM(x) and GTM(y) are presented on the next two pages, followed by the procedures for producing them.

GTM Heuristics

 Consistency:

The GTMs need to be internally consistent:

 

  • Use terminology consistently; make sure that descriptions of objects or actions don’t change within, or between, the GTMs.
  • Comparable operations should be activated in the same way, and should work in the same way everywhere.

 

…but also need to be consistent with the user’s knowledge of the task, so that users will be able to see what they can do and what state the machine is in at any point…

 

  • Object names mentioned in the GTM should be concrete and recognisable
  • Use the same word to describe actions (functions) that seem similar to the user
  • When using metaphors, ensure properties of objects are appropriate

 

The target system should also be consistent with other applications…

 

  • Follow conventions for the environment, so users can reuse knowledge from elsewhere
  • Use terminology that is consistent with the styleguide; be careful about using words which are the names of system objects (or menus), unless you are really referring to them.
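A terminology check of this kind can be partially mechanised: compare the object and action names used in a GTM against an agreed glossary and flag anything unrecognised. A minimal sketch in Python (the glossary and diagram terms are invented for illustration):

```python
# Flag terms used in a GTM that do not appear in the agreed glossary.
def inconsistent_terms(gtm_terms, glossary):
    known = {t.lower() for t in glossary}   # compare case-insensitively
    return sorted(t for t in set(gtm_terms) if t.lower() not in known)

# Invented glossary and diagram contents.
glossary = ["order", "stock level", "delivery"]
gtm_x_terms = ["Order", "Stock level", "Delivery"]
gtm_y_terms = ["Order", "Stock figure", "Delivery note"]

print(inconsistent_terms(gtm_x_terms, glossary))  # []
print(inconsistent_terms(gtm_y_terms, glossary))  # ['Delivery note', 'Stock figure']
```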

Simplicity:

Remember that the aim is to design an interface that will be simple, easy to learn, and easy to use; users shouldn’t be surprised by the behaviour of the system.

 

Promote simplicity by using the following rules of thumb:

 

  • Remember that maximising functionality works against maintaining simplicity.
  • Reduce the number and complexity of necessary actions to a minimum.
  • Reduce presentation of information to the minimum needed to communicate adequately.
  • Disclose information to the user progressively, so that they only see it at the appropriate time.
  • Use natural mappings and semantics in the design.
  • Use verbs in the GTM to describe actions (e.g. ‘sort items’ instead of ‘sorter’); avoid describing components of the system when it would be more appropriate to describe the task.

The heuristics shown on the previous two pages should be borne in mind whilst preparing the GTMs.

  1. Generify (scope system at task level)

This involves the following steps, which are described in more detail afterwards.

Prepare GTM(y):
  • Obtain the SoR; note temporal and conditional aspects
  • Summarise the task in device independent terms
  • Summarise the subtasks in device independent terms
  • Prepare documents

Prepare GTM(x):
  • Obtain the GTM(ext)s
  • Compare to the GTM(y)
  • Identify elements of (ext) relevant to (y)
  • Identify compatible GTM(ext) components
  • Synthesise parts into the GTM(x)

Preparing GTM(y)

GTM(y) is based on the Statement of Requirements (SoR). The SoR should be reviewed and the main tasks identified. Any requirements concerning the ordering of the tasks or conditions under which they should be performed should be noted, and a diagram similar to those generated for the GTM(ext)s should be produced, summarising the requirements in device independent terms.

If the GTM(y) is unexpectedly simple, this should not necessarily be regarded as indicating an error of production; it may indicate that aspects of the requirements specification will need subsequent enhancement.

A supporting table should be prepared for the GTM(y), which should follow the structure shown below.

 

Column               Entry
Name                 Which cell is referred to
Description          Further description as necessary
Observation          Any notes
Design Implication   Any implications, based on ESA work
Speculation          Any design speculations occurring at this stage

 

Preparing GTM(x)

GTM(x) is a device independent model of the aspects of the existing systems that might be suitable for incorporation in the target system. The model is based on the GTM(ext) products that were prepared for each system studied during the extant systems analysis. The information in the supporting tables for the Task Descriptions (TD(ext)) may be useful when deciding which parts of the GTM(ext)s to include, particularly any comments in the implications or observations columns. The comments from the TD tables can be copied into the supporting tables for the GTM(x), but care should be taken to update the names of the nodes where necessary. If appropriate, the GTM table can be cross-referenced to the original task description to provide additional information. Information about the problems experienced by users gathered during the interviews should be reviewed in case it contains relevant information not in the TD tables.

A supporting table should be prepared for the GTM(x), which should follow the same structure as the GTM(y) table.

Once the GTM(x) has been produced, it can be compared to the GTM(y). If the two models look very different, it may indicate that the new system will seem unfamiliar to the users, who will either require additional training or extra support from the design of the interface, perhaps through descriptions printed beside buttons, on-line help, or a wizard or agent. If the GTM(x) is not very extensive, it probably indicates that the systems studied during analysis did not provide many promising ideas; it may be worth revisiting the analysis stage unless the GTM(y) is particularly complete and the system is well understood.
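The GTM(x)/GTM(y) comparison can likewise be primed mechanically by listing target-system tasks that have no counterpart in the extant model; these are candidates for extra training or interface support. A minimal sketch in Python with invented task names:

```python
# Tasks in the target model (GTM(y)) with no counterpart in the extant
# model (GTM(x)) are the ones that may seem unfamiliar to users.
def unfamiliar_tasks(gtm_x_tasks, gtm_y_tasks):
    known = {t.lower() for t in gtm_x_tasks}
    return [t for t in gtm_y_tasks if t.lower() not in known]

# Invented task lists for illustration.
gtm_x = ["Check stock levels", "Generate orders", "Process a delivery"]
gtm_y = ["Check stock levels", "Generate orders", "Forecast demand"]

print(unfamiliar_tasks(gtm_x, gtm_y))  # ['Forecast demand']
```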

2. Verify models

Partial verification of the models has already been performed, when the users were interviewed and shown the partly completed task descriptions. The completed TD(ext)s and GTMs may be checked with user representatives to provide additional confidence concerning their completeness and accuracy before further work is based upon them.

GTM Rating table

Please rate the above procedures according to the extent they fit the descriptions in the left hand column

 

                                Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*
Coherent (i.e. understandable)
Complete (i.e. there was nothing missing)
Concerned what was desired
(i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further
Comments:

 

 

 

 

 

 

* Please describe what the problem was

Phase 2

Design Synthesis

1. MUSE Overview

MUSE(SE) Phase 2 Procedures: SUN stage

The purpose of the SUN is to summarise the ESA findings so that they can easily be referred to during the remainder of the design process; in effect, the SUN presents a human factors perspective on the Statement of Requirements. The process of producing the SUN mostly involves summarising the findings from previous stages, and is quite straightforward as the following diagram shows.

 

SUN

  1. Document user problems

The information gathered during the ESA stage, particularly that marked for inclusion in the Statement of User Needs, is now collated to form SUN(y). It is important that the SUN lists both good and bad aspects of the systems studied, so that the good features are preserved in the target system and the bad features do not reappear. Insights into problems or benefits caused by relationships between aspects of the worksystem (such as mismatches between the users' mental model of the task and the way it is represented by the system, or the association between actions and the objects that perform or suffer them) should have been uncovered both during assessment with the styleguide, guidelines and related heuristics and during the observational studies; these are recorded in the various sections of the SUN. The information collected concerning the characteristics of the target user groups is also incorporated into SUN(y), as are the 'usability requirements' (the PLUME categories and the styleguide chosen) that define the acceptable properties for the target system.

The SUN is divided into six sections, which are listed below; each section contains guidance about which of the activities carried out during examination of the existing systems is most likely to provide the relevant information.

Each section of the finished SUN should contain a pair of tables. The tables describe the good and bad features of the existing system and how these are to be reflected in the target system; their format is shown after the list of sections.

 

The SUN is divided into the sections shown in the following table:

 

Statement of User Needs Sections

User and Device Actions
  (from checklist sections 1-8, observational studies, interviews, and the task models)
User mental processes and mental model
  (from interviews, card sorting, verbal protocol, and task models)
Task (Domain) Objects
  – Goals (from interviews and card sorting)
  – Domain objects (from observation, interviews and card sorting)
  – Task quality (from usability tests) (performance from PLUME; record target level from usability requirements)
User and device costs
  (from observations, task execution, usability tests, informal tests, as well as sections 1, 3, 5, 6 and 10 of the checklist)
  – Learnability (also record target level from usability requirements)
  – User satisfaction (also record target level from usability requirements)
  – Memorability (also record target level from usability requirements)
  – Errors (and time on task) (also record target level from usability requirements)
Physical aspects: device construction, appearance and layout
  (from physical guidelines, and sections 1, 5, and 10 of the checklist)
Miscellaneous
  (from sections 3-10 of the checklist)

 

 

 

Each section of the SUN should follow the format shown below:

 

 

Problem:       What problem the users suffer (complete now)
Caused by:     Feature of the existing system that causes the problem (complete now)
Consequences:  Impact on the target system; what will have to be done to avoid recurrence (complete either now or later)
Addressed by:  How the target system has addressed the problem (complete later)

 

Feature:       Desirable aspect of the existing system that the target system should keep (complete now)
Caused by:     Feature of the existing system that gives rise to it (complete now)
Consequences:  Potential impact on the target system; what will have to be done to preserve the feature (complete either now or later)
Addressed by:  How the target system has preserved the feature (complete later)
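The pair of tables above can be carried through the design process as simple records. The sketch below (in Python, with hypothetical field and entry names not prescribed by MUSE) shows one way to do so, leaving the 'Addressed by' column empty until it is completed later in the design process.

```python
# Illustrative sketch of the two SUN tables as records; field names are
# assumptions mirroring the table columns, not MUSE-defined notation.
from dataclasses import dataclass

@dataclass
class SunProblem:
    problem: str            # what problem the users suffer (complete now)
    caused_by: str          # feature of the existing system that causes it (complete now)
    consequences: str = ""  # impact on the target system (complete now or later)
    addressed_by: str = ""  # how the target system addressed it (complete later)

@dataclass
class SunFeature:
    feature: str            # desirable aspect the target system should keep (complete now)
    caused_by: str          # feature of the existing system that gives rise to it (complete now)
    consequences: str = ""  # what must be done to preserve it (complete now or later)
    addressed_by: str = ""  # how the target system preserved it (complete later)

# Hypothetical entry for the 'User and Device Actions' section
entry = SunProblem(problem="Users lose unsaved edits on exit",
                   caused_by="No warning before the application closes",
                   consequences="Target system must prompt before discarding changes")
```

Keeping one list of such records per SUN section makes it straightforward to check, at the end of design, that every entry has its 'Addressed by' column filled in.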

 

OMT Cross-Checking Point:

Refer to the object model, event flow (or object message) diagram and event (or message) trace generated as part of the OMT process, and carry out the following checks. (It may be more convenient to perform this check at the same time as the DoDD(y) check in the next stage).

 

Review the SUN for any difficulties users reported in communicating with the system (i.e. with the language or the semantics of the old user interface). Consider whether these are likely to recur in the new system by looking at the event flow and event trace, and assess whether the good points of the old system have been reused as appropriate.

Check that any objects from the domain and their attributes mentioned in the SUN are treated appropriately in the Object model.

Ensure that associations between actions and objects noted in the SUN are treated appropriately in the Object model, by considering whether each object has appropriate attributes and methods. (Check that there is a ‘User Interface’ class, as well as the interface-related classes in the DoDD(y); it won’t be very detailed yet, but it will be required later on).

SUN Rating table

Please rate the above procedures according to the extent they fit the descriptions in the left hand column

 

                                Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*
Coherent (i.e. understandable)
Complete (i.e. there was nothing missing)
Concerned what was desired
(i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further
Comments:

 

 

 

 

 

 

* Please describe what the problem was

 


MUSE(SE) Phase 2 Procedures: DoDD(y) stage

The user’s view of the task domain is modelled to provide insight into their mental model of the task and allow the user interface to be specified in such a way that it will be easily understood by the user and be easy to learn and use. Two models of the task domain are produced, a semantic net called the DoDD(y), and the user object model. The user object model resembles a software engineering model more closely than the DoDD(y), and in fact uses software engineering notations. The main difference between the two models is that the user object model describes the objects together with their attributes and actions performed and suffered (i.e. the operational relationships between objects), whereas the semantic net describes the objects and the semantics of the relationships between them.

 

The following diagram summarises the production of the DoDD(y) and user object models:

 

DoDDy

Production of the DoDD(y) semantic net and the user object model is based on the information derived during the ESA stage. The object of constructing the DoDD(y) is to represent the aspects of the task domain that are important from the user's point of view. The DoDD(y) uses a fairly informal notation, and its content is determined more by what is useful in a particular instance than by a set recipe. The DoDD(y) is used as an informal 'mind-map' to help the designer understand and reason about the problem.

The DoDD(y) should not merely reproduce material in the software engineering specifications (e.g. the object model), because whereas software engineering specifications concern how the system will actually work, the DoDD(y) should reflect how the user thinks it works. The DoDD(y) is used to help the designer reason about the design at later stages, and the process of creating the DoDD(y) can suggest questions to ask the users that might not otherwise occur. For example, when constructing a DoDD(y) to describe the domain of an email client, the password would probably appear as an entity with an association concerning ‘security’. Questioning users further might reveal that they consider that ‘security’ has to do with all their mailboxes rather than just the new messages on the server, which might prompt consideration of whether the design should reflect this in its treatment of the password.

The following information may be included in the DoDD(y):

  • the main (high-level) task behaviours derived from observation and interviews
  • mental representations derived from interviews, verbal protocols, and card sorting,
  • information on domain objects and attributes derived from observations, interviews and card sorting.

In addition, the following relationships uncovered during assessment using the guidelines should be recorded: the associations between actions and the main task objects, task goals, and work domain objects; and the relationships between abstract IWS structures and task goals, work domain objects, and physical IWS structures, derived from the relevant parts of the checklist and the interviews. The relationships between physical IWS structures, domain objects and performance may also be of relevance to the DoDD(y).

Production of the DoDD(y) should be largely a matter of consolidating the semantic nets produced during the interviews. The DoDD(y) should be device independent, in that it should refer to the objects manipulated by users to perform the work rather than the specifics of how the task is done using any of the devices studied. The level of description should be sufficient to explain the tasks from the user’s point of view, but need not go into technical detail.

To produce the DoDD(y) semantic net, the following procedures should be employed:

  • Check for multiple models

The first activity in defining the user object model is to assess whether multiple models are required, by considering the user groups identified at the start of the extant systems analysis stage. In a large system there may be two or more user classes for whom the ‘objects in the system’ are almost completely different. Although it is sometimes necessary to define two or more user object models to form the basis of different subsystems, it is not always necessary to have a separate user object model for every user class. An object model should be broad enough to cover the requirements of several user classes concerned with the same objects.

  • Obtain the Statement of Requirements, the GTMs, and the products generated during extant systems analysis (particularly the semantic nets produced when the users were interviewed).
  • Review the documents listed above to extract a list of domain objects, concepts, events, and processes.
  • Arrange the domain objects on the page and insert arrows to show their relationships with one another. Number the arrows, and describe each relationship in a table like the one below.

 

Node:         the name of the object as shown in the diagram
Description:  description of the object, sufficient to identify it in the task
Number:       the number on the arrow
Relation:     the relationship between the object and the one pointed to
  • Add the concepts, events, and processes, and draw lines connecting them to their associated object, documenting them in the table as shown above; it doesn’t matter if they look the same as the objects, as long as the diagram makes sense to the users and is understood by the interface designer.
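The numbered-arrow convention above can be mirrored in a small data structure. The following sketch is illustrative only (MUSE prescribes a diagram and table, not a program), and the email-domain names are hypothetical; it maintains the supporting table automatically as arrows are added.

```python
# Illustrative sketch of a DoDD(y) semantic net's supporting table;
# relate() numbers each arrow and records its row, as in the table format above.

relations = []  # rows of the supporting table

def relate(node, description, target, relation):
    """Add a numbered arrow from node to target, recording its table row."""
    number = len(relations) + 1
    relations.append({"Node": node, "Description": description,
                      "Number": number, "Relation": f"{relation} {target}"})
    return number

# Hypothetical email-client domain objects and relationships
relate("password", "secret phrase typed at login", "mailbox", "secures")
relate("message", "an item of mail", "mailbox", "is stored in")

for row in relations:
    print(row["Node"], "-", row["Number"], "-", row["Relation"])
```

Because the arrow numbers are assigned as rows are added, the diagram and the table cannot drift out of step, which is the main clerical risk when the net is drawn by hand.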

Once the DoDD(y) is complete, prepare the user object model [1]. The notation for the user object model is based on that of OMT (Rumbaugh, 1991), although any notation that includes object types or classes, subtypes, association relationships and aggregation or composition relationships could be used. Note that attributes and actions are part of the user object model but are not usually shown on the diagram.

The notational constructs used in the user object model are shown in the following diagram.

[1] The user object model is taken from Redmond-Pyle, D. and Moore, A. (1995), 'Graphical User Interface Design and Evaluation (GUIDE): A Practical Process', Prentice Hall, London; the user object model procedures reproduced here are based on those by Redmond-Pyle.

User Object Model

To produce the user object model, the following procedures should be employed:

  • Identify objects

Refer to the objects in the DoDD(y). For each object consider the following questions:

  • Does the user need to see and interact with the object to perform their tasks?
  • Does the object group together related information in a way that helps the user to perform a specific task?
  • Does the object exist in the business world, and will it continue to exist with the new system?
  • Is the object a useful system object, which the user needs to see and interact with (e.g. printer, fax machine) or should it be invisible to the user (e.g. modem)?
  • Is the object just an artifact of the old system, which will be made redundant by the new system? (If so it is probably not required in the user object model, unless it is still a helpful illusion for the end-user.)

If the object is merely a source or recipient of information in the task and the user does not need to see or manipulate the object, then the object may not be required as a user object. An alternative is to interact with the object via some standard communication mechanism such as an electronic mail mailbox.

  • Create user object model diagram

Take care to give each object the name that the user wants to call it in the interface. Analyze the relationships between the objects. For each user object, consider which other types of user object it is directly related to. For example, a Person object may 'own' a Car object. Define the cardinality of the relationships (one-to-many, many-to-many, etc). For example, one Person may own many Cars, but each Car is owned by one Person. Use a user object model diagram to show all the user objects and the relationships between them. There will often be 'contains' relationships, showing container objects (such as lists) related to the objects they contain. Many-to-many relationships are common and one-to-one relationships are quite acceptable. Note the number of occurrences of each user object (e.g. there is only one System object, but there are 1000 Customers and 9000 Orders.)
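The Person/Car example above can be sketched in code. The sketch below is illustrative only (MUSE and GUIDE prescribe a diagram, not a program); the class names are taken from the example in the text, and the attribute and cardinality fields simply mirror what the diagram would show.

```python
# Illustrative sketch of user objects and a relationship with its cardinality,
# mirroring the Person 'owns' Car example from the text.

class UserObject:
    def __init__(self, name):
        self.name = name        # the name the user wants to call it
        self.attributes = {}    # pieces of information the user knows about it
        self.actions = []       # actions the user performs on (or using) it

class Relationship:
    def __init__(self, source, verb, target, cardinality):
        self.source, self.verb, self.target = source, verb, target
        self.cardinality = cardinality  # e.g. "one-to-many"

person, car = UserObject("Person"), UserObject("Car")
person.attributes["Name"] = "text"
owns = Relationship(person, "owns", car, "one-to-many")

# one Person may own many Cars; each Car is owned by one Person
print(f"{owns.source.name} {owns.verb} {owns.target.name} ({owns.cardinality})")
```

Noting the cardinality on the relationship object keeps the information available for the later dialog and window design, where it determines whether a list or a single field is needed.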

  • Define user object attributes

Define the attributes of each object, i.e. the pieces of information the user knows about the object. For example, a Person object might have a Name, an Address, an Employer, a Date of Birth, a Photograph, a Signature and a List of Leisure Activities. Note that Photograph and (handwritten) Signature are perfectly sensible attributes, even though they are not conventional database fields.

The criteria to use in deciding whether a piece of information should be an attribute of a particular user object are whether it is useful to support a task, and whether it seems sensible to the user. (Avoidance of redundancy, extent of normalization, etc., are not appropriate quality criteria for user object models.)

  • Define user object actions

Identify the actions the user will need to perform on (or using) the object, such as Print, Calculate, Authorize, Send to, Allocate to, Add.

User object actions are identified from user tasks, and from discussions with users. Most user objects will have actions to Create or Delete. Establishing (or removing) a relationship between one user object and another is another common action. Some actions relate to the whole user object, while other actions may relate only to part of the object.

Additional user object actions may be identified and added later, while expressing task scenarios as sequences of user object actions, and during prototyping. Define each action in terms of the following:

  • A brief narrative description
  • Any input
  • The required effect on object attributes and relationships
  • Any output

User object actions describe the ‘behaviour’ of objects in the system. They are the main means of specifying required system functionality. The actions on a user object are considered to be part of the object.

  • Create action–object matrix

Create a matrix to show how update actions affect objects.

The action–object matrix provides a useful way of checking the scope and complexity of actions. Most user object actions only affect one user object. However, where an action does affect more than one object, this is significant for GUI design. When the user performs the action on one object, will they expect the effects on other objects to occur?

Construction and review of the matrix often leads to additional actions being identified, to actions being redefined, or to additional effects being noted.
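The matrix can be sketched as follows; the action and object names are hypothetical, and the final check reflects the point above that actions affecting more than one object are significant for GUI design.

```python
# Illustrative sketch of an action-object matrix: rows are update actions,
# columns are user objects, 'X' marks an object affected by the action.

actions = {
    "Authorize":   ["Order"],
    "Allocate to": ["Order", "Stock"],   # affects two objects
    "Delete":      ["Customer"],
}
objects = ["Order", "Stock", "Customer"]

for action, affected in actions.items():
    row = ["X" if obj in affected else "." for obj in objects]
    print(f"{action:12s} " + " ".join(row))

# Actions affecting more than one object deserve special attention in GUI design:
# will the user expect the effects on the other objects to occur?
multi = [a for a, affected in actions.items() if len(affected) > 1]
print(multi)  # ['Allocate to']
```

Reviewing such a matrix makes the scope of each action visible at a glance, which is how the additional actions and effects mentioned above tend to be spotted.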

  • Check for dynamic behaviour

For each object in turn, consider whether there is significant dynamic behaviour. For the actions of an object, consider the following:

  • Can the actions be invalid, depending on the prior state of the object? (Make a note of this for later; it will help during the detailed design of the user interface.)
  • Are there any constraints on the sequence in which the actions can occur?
    (Check that the ordering constraints are represented in the GTM(y)).

 

 

OMT Cross-Checking Point:

Refer to the object model, scenarios and use cases generated as part of the OMT process, and carry out the following checks.

 

Review the DoDD(y) to establish the conceptual entities and operations that form part of the user's model. Check the OMT object model to ensure that these entities are present as objects, and that the operations are likely to be supported by the methods.

Check the object model against the DoDD(y) to ensure that the objects and their associations agree with the users’ mental representations of the task domain as much as possible.

Check the objects in the DoDD(y) are present in the object model, and in the scenarios and use cases used by OMT. Objects that perform or suffer actions in the DoDD(y) should have dynamic models, as they change state from the user’s point of view. Physical attributes of objects may appear in the DFD (functional model) as data flows, and should appear in the object model as attributes of their objects. Abstract attributes should appear in the object model, and as control flows in the DFD, and may appear in the state diagram as events, attributes or conditions on transitions or within states. (Attribute values derived from user inputs may appear in the event (or message) trace as event parameters, and those actions associated with objects that initiate events may also need to appear in the event trace). Actions associated with objects in the DoDD(y) should be present in the object model as operations.

The actions from the DoDD(y) should be correctly associated with the objects in the object model; in the state diagrams the correct objects should be undergoing transformations or participating in event passing. The methods that initiate events in the DoDD(y) should feature in the scenarios and use cases, and the data transformed by methods in the DFD should agree with the DoDD(y). Similarly, state transitions in the DoDD(y) should be represented in the state diagram.
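A first pass over these cross-checks can be sketched as a set-difference test. The object, action and class names below are hypothetical; the code merely illustrates flagging DoDD(y) objects and actions that have no counterpart in the OMT object model, leaving the subtler checks (state diagrams, event traces) to inspection.

```python
# Illustrative sketch of the DoDD(y) / OMT object model cross-check:
# flag DoDD(y) objects with no corresponding class, and actions with no
# corresponding operation. Names are hypothetical examples.

dodd_objects = {"message", "mailbox", "password"}
dodd_actions = {"message": {"send", "delete"}}   # actions per DoDD(y) object

# OMT object model: class name -> operations (assumed naming convention:
# class names are capitalised versions of the DoDD(y) object names)
omt_classes = {"Mailbox": {"open", "close"}, "Message": {"send", "delete"}}
omt_names = {name.lower() for name in omt_classes}

missing_objects = dodd_objects - omt_names
print(missing_objects)  # 'password' has no corresponding class

missing_ops = {obj: ops - omt_classes.get(obj.capitalize(), set())
               for obj, ops in dodd_actions.items()}
print(missing_ops)  # empty sets mean every action has an operation
```

Anything flagged here would prompt either an addition to the object model or a deliberate, recorded decision that the entity is not needed in the software.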

DoDD(y) Rating table

Please rate the above procedures according to the extent they fit the descriptions in the left hand column

 

                                Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*
Coherent (i.e. understandable)
Complete (i.e. there was nothing missing)
Concerned what was desired
(i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further
Comments:

 

 

 

 

 

 

* Please describe what the problem was

Procedures for phase 2 of MUSE(SE): CTM(y) stage

The high level procedures for the CTM(y) stage of MUSE(SE) may be summarised as shown in the following diagram:

CTMy

Most of the information required to specify the CTM(y) should be present in the SUN and DoDD(y), particularly where arbitration between alternative design options contained in the GTMs is required.

  1. Decompose task

Decomposition of the task involves increasing the level of detail so that the designed task satisfies the statement of requirements; this is achieved by selecting components of the GTM(x) and GTM(y) and describing them in more detail to arrive at a conceptual design for the system.

Where a detailed statement of requirements exists, CTM(y) may be very similar to GTM(y). However, the statement of requirements may sometimes be vague, incomplete or even almost non-existent, which results in an impoverished GTM(y). In these circumstances, the CTM(y) should be based more on GTM(x) and the requirements must be updated to reflect this. Even where the statement of requirements provides a detailed functional specification of the target system, it may not contain sufficient information to enable the structure of the task and ordering of subtasks to be specified. In this case, the CTM would reflect the content of GTM(y), but those aspects of the structure of GTM(x) found to be unproblematic during extant systems analysis should be reused; the remainder of the structure should be revised in such a way as to avoid any problems noted.

 

 

1a             Synthesis:            Obtain SoR, DoDD(y), and SUN
Compare GTM(x) and GTM(y)
Extend GTM(y)
Incorporate parts of GTM(x)

The SUN(y) should inform arbitration between the GTM(x) and the GTM(y), since it records evaluation of, and commentary on, the IWS behaviours and their decomposition gathered during the observational studies of the ESA stage, as well as the heuristic evaluation findings. The heuristics presented after the procedures for this stage should be used to guide selection from the GTMs and elaboration of the CTM.

The objective is to elaborate the GTM(y), describing the subtasks in greater detail to arrive at a more detailed conceptual design for the system.

The CTM is at the correct level of detail when every step of each task is included. The CTM should not describe which tasks are done by the user and which by the computer, nor the turn-taking in the interaction; that level of detail is dealt with later.

1b             Record in table:
Design rationale
Design decisions

The CTM(y) supporting table should record the rationale for decisions made concerning the structure of the task. Any porting from GTM(x) or TDs should be noted in the table. If any design decisions made involve changing the structure inherited from GTM(y), the statement of requirements may require modification; this should be noted in the ‘Design Comments’ column and the person responsible for the requirements should be consulted as soon as possible.

The table should take the following form:

 

Name:             the name of the node
Description:      description of the node
Design Comments:  any commentary required, such as the rationale

2             Perform allocation of function on basis of ESA and SUN(y)

Refer back to the observations and design implications columns of the GTM and TD tables, to identify information gathered in the ESA stage relevant to allocation of function decisions.

Perform the preliminary allocation of function between the user and the device by marking up the CTM, bearing in mind the heuristics on the following page. Refer also to the SUN(y) for relevant information noted during extant systems analysis.

3            Record functionality decisions

Functionality decisions are recorded in the CTM table together with the rationale, to inform subsequent checks against the software engineering specifications to ensure compatibility.

4            Verify model with:
GTM
SoR
SE stream

The CTM is checked against the software engineering specifications (see below), as well as the statement of requirements and the GTMs, to ensure that the requirements are likely to be satisfied by the resulting artefact, that the software engineering specification will support the device under specification, and that the content of the CTM can either be traced back to the GTMs with the rationale for the porting, or that the design decisions made to arrive at a novel structure have been recorded with their rationale. Where possible, additional validation of the CTM with user representatives will increase the confidence that can be placed in the design at this stage.

CTM Heuristics:

 

Consistency

  • Modes should be avoided; operations should have the same effect whenever they are invoked
  • Functions should work in the same way everywhere in the application.
  • Comparable operations should be activated in the same way; use the same word to describe functions that seem similar to the user
  • Promote a good match between system and real world: speak the user’s language, and use terms and concepts drawn from the experience of the anticipated class of user.
  • Follow conventions for the environment, so users can reuse knowledge. Use familiar metaphors to allow users to use their experience; don’t be too literal about the metaphor, but extend it to support the task in an intuitive way.
  • Support recognition rather than recall of information

 

Simplicity

  • The interface should be simple, easy to learn, and easy to use. Reduce the number and complexity of necessary actions to a minimum.
  • Reduce the presentation of information to the minimum needed to communicate adequately. Disclose information progressively, so that users only see it at the appropriate time.
  • Support orientation: if information is too complex or too extensive to present at one time, help users find what is relevant by supporting them in orienting themselves.

 

User control and freedom:

  • Aim for minimal surprise: users shouldn’t be surprised by the behaviour of the system.
  • Organise sequences of actions with a beginning, a middle, and an end.
  • Design serious errors out where possible, and make it easy to correct the non-serious errors that are still liable to occur
  • Allow users to exit from unwanted dialogues chosen by accident
  • Permit easy reversal of actions: as many actions as possible should be reversible

 

 

OMT Cross-Checking Point:

Review the CTM to obtain a list of the conceptual entities and operations that appear, as well as any attributes or values. Check the OMT object model to ensure that the entities are present as objects, and that the operations are likely to be supported by the methods. Consider whether the attributes are physical (e.g. temperature in degrees C) or abstract (e.g. 'ready'). Check the object model to ensure that the objects possess the relevant physical attributes. Consider whether the abstract attributes are likely to be supported by the model, and whether it would be worthwhile adding further attributes (and possibly operations, additional relationships, or classes) to support them (e.g. consider an object representing a vat in a chemical process; in order to support a 'readiness' attribute, it might need to know how long it has been at a certain temperature, which would require a timer object).

N.B.   Some of the operations may not make sensible methods, particularly if they refer to actions that the user would be expected to perform rather than the device (e.g. answering a ringing telephone in a helpdesk application). Where this is the case, it should be noted so that it is not forgotten during the next stage of the method. A copy of the object model may prove useful for reference during production of the STM.

CTM Rating table

Please rate the above procedures according to the extent they fit the descriptions in the left hand column

 

                                Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*
Coherent (i.e. understandable)
Complete (i.e. there was nothing missing)
Concerned what was desired
(i.e. did the procedures allow you to do what you were supposed to?)
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further
Comments:

 

 

 

 

 

 

* Please describe what the problem was

Procedures for phase 2: SUTaM stage

The System and User Task Model is a decomposition of the CTM(y) in which the parts of the task to be performed using the target system are separated from the parts of the task that are performed using other devices or systems. The CTM is divided into two models, which are the System Task Model (STM), and the User Task Model (UTM). The STM describes the ‘turn-taking’ in the interaction; what the user does and what the device does in response, but still treated at a device-independent level. The UTM describes the tasks that the user performs ‘off-line’, and is generally not decomposed further, but used as a reference to make sure that the later design remains compatible with the existing systems to be used in conjunction with the target system.

The high level procedures for the SUTaM stage of MUSE(SE) may be summarised as follows:

SUTaM

The detailed procedures follow:

  1. Decompose the CTM(y):
    For each node of the on-line task, designate it as an H or C node.
    Decompose the off-line tasks if required, after constructing the UTM from the marked-up areas of the STM.

First, work through the CTM(y), marking out those parts of the task that will not be performed using the target system; these are most frequently either subtasks such as using a telephone or a diary, or parts of other tasks that are interleaved with the current task. A good way of marking the off-line tasks is simply to draw a line around them. The off-line tasks should be removed as the STM is produced, although they can be left in where they make the task clearer. The UTM is built by reassembling the off-line tasks to form a separate model.

The next step is to allocate nodes either to the user or the computer to form the basis of the dialog design. Most of the time this is a matter of designating leaves in the CTM(y) as either user or computer actions, but this is sometimes made easier by decomposing parts of the CTM(y) to arrive at a more detailed description.

A useful rule of thumb as you allocate ‘H’ and ‘C’ nodes is to remember that each time the user does something they will generally require some feedback from the device, so each ‘H’ action should normally be followed by a ‘C’ action unless there is a good reason to the contrary (e.g. ‘H’: select document; ‘C’: indicate selected document).

Work through the CTM(y), designating each node as one of the following:

‘H’: user action (e.g. entering information or indicating readiness to proceed)

‘C’: a device action (e.g. executing a command or providing feedback to a user action)

H-C: composite user-computer action, to be used where the actions of the user and device are sufficiently obvious not to need further description
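The rule of thumb about feedback can be checked mechanically over the sequence of leaves. The sketch below is illustrative only (MUSE defines no such program, and the leaf names are hypothetical); it flags 'H' leaves that are not immediately followed by a 'C' leaf, for the designer to review against the "good reason to the contrary" test.

```python
# Illustrative sketch: flag 'H' (user) leaves in an STM leaf sequence that
# are not immediately followed by a 'C' (device) leaf providing feedback.

def unfollowed_h_leaves(leaves):
    """Return indices of 'H' leaves not immediately followed by a 'C' leaf."""
    flagged = []
    for i, (kind, _) in enumerate(leaves):
        if kind == "H":
            nxt = leaves[i + 1][0] if i + 1 < len(leaves) else None
            if nxt != "C":
                flagged.append(i)
    return flagged

# Hypothetical STM leaf sequence
stm = [("H", "select document"), ("C", "indicate selected document"),
       ("H", "enter file name"), ("H", "press save")]

print(unfollowed_h_leaves(stm))  # [2, 3]: neither action receives device feedback
```

Each flagged index is a candidate for either an added 'C' feedback leaf or a recorded rationale for omitting one.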

 

1a             Consider whether decompositions of the design comply with 'design principles' (feedback, etc.)

Once the off-line tasks have been identified and the user and device actions marked with 'H' and 'C' leaves, the STM should be sufficiently detailed to give a fair impression of the target system. Before more detailed design is carried out, the STM should be reviewed to check that there are no issues that need clearing up at this stage. Return to the heuristics specified for use during production of the CTM, and consider whether the design still looks likely to comply with them. The SUN should also inform design at this stage; details of the type of users expected, and any styleguide identified, can guide the level of support required and the general style of the interaction.

 

1b. Ensure that the STM contains all relevant domain objects and attributes by reference to the SUN and DoDD(y). Check the SUN for user problems with the existing system, and ensure they are not likely to recur.

 

The SUN is likely to contain some details of the users’ mental processes, and the DoDD(y) will contain details of the semantics of the task as well as the relevant domain objects involved in transformations and the attributes transformed. Examination of the DoDD(y) should allow the users’ mental processes in the SUN (derived from the ESA analysis of concurrent verbal protocols) to be placed in the context of the task and domain, and allow determination of the items of information required to support the user during the task. At points in the interaction where the user must make selections or modify an attribute of a domain object, the semantics of the task may require that information be displayed beyond the current value of the attribute being modified or the options to be selected from. As an example, consider a dialog for saving a file to a disk. To perform the task at all, the user must be provided with a means of specifying a name for the file and of determining the most suitable directory for it. However, the task can be performed more easily if the user is also given additional information, such as the other files in the directory, the size of the file about to be created, and the available space on the volume. Whilst the CTM may not suggest that such information would be of value, the DoDD(y) would, and the SUN would indicate whether users had suffered from omission of the information in the existing system, or from other features of the design.

1c. Complete the STM table

The STM table takes the same form as the CTM table; an example is shown below. Any decisions that may require additional functionality (or where the reasons for the decision are less than obvious) should be recorded in the table – particular care should be taken to note the addition (as opposed to decomposition) of nodes. Where extra functionality has been specified, it will be necessary to update the SE specifications that were checked against the Composite Task Model.

 

Name | Description | Design Comments
Name of the node | Description of the node | Design rationale

 

 

  2. Document the required interface functionality

The purpose of checking the Composite Task Model against the Software Engineering design products was to ensure that appropriate functionality would be specified to support the target system. However, it is normal for some parts of the CTM not to require a user interface. Examine the STM, categorising each area according to which of the following categories its functionality falls into:

User only: subtasks the user performs without the device. Most of these should have been moved into the User Task Model; those remaining should have been left in the STM for purposes of clarity. They do not require a user interface, and should not appear in the SE models.

User and computer: tasks the user performs using the device. These will require a user interface.

Computer only: tasks the device performs without the user. If the task is requested by the user, then an interface is required (e.g. the progress of formatting a disk). If the task is performed automatically, then a user interface is only required if the user needs to be aware of the task (e.g. periodic requests to save work).
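The three categories can be captured as a small decision rule. The function below is an illustrative sketch; the category strings and parameter names are ours, not MUSE terminology:

```python
def interface_requirement(category, user_requested=False, user_aware=False):
    """Rough decision rule for whether an STM area needs a user interface,
    following the three categories above (names are illustrative)."""
    if category == "user only":
        return False                       # belongs in the User Task Model
    if category == "user and computer":
        return True
    if category == "computer only":
        # Tasks requested by the user need an interface (e.g. progress of
        # formatting a disk); automatic tasks only need one if the user
        # must be aware of them (e.g. periodic requests to save work).
        return user_requested or user_aware
    raise ValueError(f"unknown category: {category}")
```

For example, an automatic background re-index the user never sees would return `False`, while a user-requested disk format returns `True`.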

  3. Handshake with SE.

If production of the STM has resulted in a requirement for additional functionality (or possibly a realisation that there is surplus functionality), then the modifications should be communicated to the SE stream. If the modifications are not straightforward, the STM should be checked against the SE products using the checks in the CTM procedures.
SUTaM Rating table

Please rate the above procedures according to the extent to which they fit the descriptions in the left-hand column.

 

 | Agree strongly | Agree | Neutral | Disagree* | Disagree strongly*
Coherent (i.e. understandable) | | | | |
Complete (i.e. there was nothing missing) | | | | |
Concerned what was desired (i.e. did the procedures allow you to do what you were supposed to?) | | | | |
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further Comments:

* Please describe what the problem was

Phase 3

Design Specification

1. MUSE Overview

Phase 3 Procedures: ITM(y) stage

The Interaction Task Model is based on the STM, and specifies the actions of the user in sufficient detail to inform implementation. Computer behaviours are not specified in detail at this stage because they are described by later products; the actions of the user should be specified in terms of the interface objects to be used in the final interface.

For the present purposes, Pictorial Screen Layout (PSL) construction will be treated as part of the specification of the Display Design products, and is described in the following section. Specification of the ITM(y) can be performed in parallel with specification of later products; concrete representations such as the PSL(y)s can simplify the process of producing the ITM(y) quite considerably, particularly where the task is fairly straightforward and the required windows and dialogs can be easily visualised. Where it is less easy to decide which windows and dialogs will be needed, the later part of this stage includes procedures to support window and dialog allocation.

The high level procedures for the ITM(y) stage are as follows:

These procedures will now be described in more detail:

 

  1. Select nodes of the STM(y) for decomposition (H or H-C leaves)

Look through the STM, and note where the H and H-C leaves are. These leaves will be decomposed to produce the ITM. (It may be useful to mark the nodes on a printout in pen).

  2. For each H-C leaf describing standard behaviour:

– study the ‘standard’ package
– analyse the behaviour
– document the behaviour
– rename items in the ITM and DoDD(y)

If the H-C leaves describe behaviours that are standard for the environment in which the target system is to be implemented (e.g. a file dialog from Windows 98), then the behaviour of an existing package should be examined and documented so that the target system will behave correctly. If the implementers are totally familiar with the environment in question, then it may be sufficient to provide an unambiguous reference to the feature required.

3.1 Obtain the DoDD(y)

The DoDD(y) should be referred to whilst the ITM(y) is being produced, as it summarises the attributes and relationships of domain objects that are relevant from the user’s point of view. As the STM is decomposed, refer to the DoDD(y) to determine the type of relationships that the object being manipulated has with other objects, considering whether there is a need for related objects to be visible at the same time, whether it is enough to be able to navigate between the views of the objects, or whether there is no need for either. Consider also the nature of the attributes that are to be manipulated and how this will influence the most suitable type of control.

Refer also to the heuristics for the ITM stage, which are reproduced after the procedures for this stage. You may find it useful to have the styleguide at hand if one is available, particularly if you are not totally familiar with the user interface environment.

3.2 For each H leaf:

  • Decide if it is an active (control) or passive (read display) action for the user. (Different coloured highlighter pens can be used to mark the nodes distinctively)
  • If it is passive display reading, make a note of it for the Pictorial Screen Layout
  • If it is an action involving a control:

– determine the object and attribute modified in the DoDD(y) or user object model
– check the DoDD(y) semantic net to see if other attribute displays must be visible, and note this for when the PSL is being produced
– check if the object already has controls specified (if so, ensure consistency)
– determine the nature of the attribute change from the DoDD(y) models
– using the styleguide, select the most appropriate control
– enter the appropriate ‘H’ action, based on the styleguide
– record the choice of interface object (or a reference to the styleguide) to enable later ‘C’ action specification and PSL construction
– if an action-object matrix was constructed, identify the action, object, and attribute, and check the matrix against the ITM(y) if the entry is absent
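Selecting the ‘most appropriate control’ usually amounts to a lookup from the kind of attribute change to a control family defined by the styleguide. A minimal sketch, assuming an invented mapping; a real project would take these pairings from its own styleguide:

```python
# Illustrative mapping from the nature of an attribute change to a candidate
# control type; the pairings here are assumptions, not styleguide fact.
CONTROL_BY_ATTRIBUTE = {
    "boolean": "checkbox",
    "one-of-few": "radio buttons",
    "one-of-many": "drop-down list",
    "free text": "text field",
    "bounded number": "slider",
}

def suggest_control(attribute_kind):
    """Return a candidate control for an 'H' action that modifies an
    attribute of the given kind, defaulting to a text field."""
    return CONTROL_BY_ATTRIBUTE.get(attribute_kind, "text field")
```

The chosen control (or a styleguide reference) would then be recorded in the ITM table for later ‘C’ action specification and PSL construction.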

  4. Note important features for later

Ensure that any behaviours that aren’t immediately obvious from the ITM are recorded in the table. Check each ‘C’ leaf, and decide if the operation refers to a purely user interface related function or whether it will involve a process that is not part of the user interface. Mark, or make a note of, those functions that will require services from outside the user interface.

  5. Document in diagram and table

The description in the ITM(y) should continue to a level where it can be understood by the design team, taking into account the characteristics of the user interface, the existing system’s user interface, and the earlier HF descriptions. The ITM table is therefore usually rather less detailed than those for other products. However, the designer may have to select between a number of design options of equal merit. Documenting the basis of these decisions is desirable, as it may assist design review by others in the design team, or save ‘reinventing the wheel’ during design iterations. It is suggested that the ITM(y) table should follow the layout shown below:

 

Name | Description | Design Comments
Name of the node | Description of the node | Design rationale
  6. Iterate with: CTM(y) (task features), STM(y) (allocation of function), and UTM(y) (off-line tasks); tell the SE stream about iterations

Iteration at this point is optional, and has been included in the procedures because producing the ITM sometimes results in insights into places where the structure of the diagram could be simplified or where modifications are necessary. Major iterations are best avoided, particularly where they have implications for the SE design stream, so a trade-off needs to be made between the cost of revising earlier products and the benefits likely to result.

Some general rules of thumb for the ITM stage:

  • Iterate the stage as necessary with earlier stages, and return to the ITM after later stages if necessary to modify the node names to preserve consistency.
  • STM nodes more than two levels from the bottom are unlikely to change; those within two levels of the bottom are the most likely to change.
  7. Demarcate screen boundaries

Once the ITM has been produced to the level where the detailed input behaviours are specified, it is usually quite straightforward to determine the windows and dialogs necessary to support the task. Refer to the heuristics given at the end of the stage for guidance when carrying out the following procedures.

  • Define a window for each major object in the DoDD(y) user object model
  • For each user object:

– Define a menu header
– Define the user object views; consider using multiple views if:

  • there is too much information for a single view
  • some information is used frequently and some less frequently (consider putting the less frequently used information in a supplementary view)
  • the user object is used in different contexts or tasks
  • providing graphical visualisations of the object may help the user to manipulate it more effectively
  • different subtasks involve using the object differently (i.e. inputting information might require a different view from reviewing information already in the system)
  • some information is restricted to certain types of user

– Decide the window basis: if part of a larger object, use a pane or its own window; otherwise use its own window
– Decide the views: either single or multiple, and if multiple, simultaneous or selectable
– Refer to the styleguide for the appropriate window style
– Select attributes for representation in the window
– Define the window(s) by listing the attributes to be included
– Inspect the action-object matrix (or DoDD(y)) for actions on the object

 

  • Identify the subtask containing the object in the ITM(y)
  • For each subtask:

– Refer to the subtask
– If the action initiates a subtask, and the subtask can be initiated at nearly any point in the task sequence (indicated by lax ordering or iteration over a selection in the ITM), or can be performed on more than one type of selected object or subobject (indicated by repetition of the subtask in the ITM), consider using a menu item or keystroke to invoke the subtask
– If the subtask consists of several steps, or feedback/confirmation is required before the subtask can be completed, use one or more modal dialogs
– Allocate subtask-related interface objects to the dialogue
– Determine whether undo or cancel options are required
– Document the design decisions
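The two invocation rules above can be condensed into a small decision function. This is an illustrative sketch with invented parameter names, not a MUSE-defined procedure:

```python
def invocation_style(lax_ordering, repeated_over_objects,
                     multi_step, needs_confirmation):
    """Suggest how a subtask should be invoked and presented.

    lax_ordering / repeated_over_objects: the subtask can start at nearly
    any point, or applies to several object types -> menu item or keystroke.
    multi_step / needs_confirmation: several steps, or feedback required
    before completion -> one or more modal dialogs.
    """
    suggestions = []
    if lax_ordering or repeated_over_objects:
        suggestions.append("menu item or keystroke")
    if multi_step or needs_confirmation:
        suggestions.append("modal dialog")
    return suggestions or ["inline control"]
```

A ‘Save As…’ subtask, for instance, is both freely invokable and multi-step, so both suggestions apply; a simple toggle matches neither rule and stays an inline control.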

 

  • For each action:

– Refer to the subtask
– Consider the appropriateness of the control based on the attribute changed
– Discard empty menu headers
– Use the DoDD(y) and card sorting to determine the optimum menu organisation
– Record the menu organisation in the PSL(y)

The DoDD(y) can help the selection of controls based on the ITM(y); inspect the object that is being manipulated to uncover related objects that are involved from the user’s point of view. The related objects (or their relevant attributes) should be visible when the user is interacting with the first object, to avoid the user having to interrupt the task in order to check on them. Styleguides often contain guidance on selecting controls based on the type of variable being manipulated, so check the styleguide if one is available.

Having specified the ITM(y), it now remains to derive the remaining part of the user interface specification. First, the ITM(y) should be checked once more against the software engineering products, using the checks on the following page.

 

 

OMT Cross-Checking Point:

 

Check the OMT object model to ensure that the entities are present as objects, and that the operations are likely to be supported by the methods. Refer to the ‘C’ actions that were marked on the STM or noted as requiring services from outside the user interface, and check that the user interface class is associated with objects having the appropriate methods, so that it can request them when needed. Check that the event (or message) trace would support the task in the ITM. Check the DFD (functional model), event trace, scenarios, state diagram and event flow (object message) diagram to ensure that the commands and arguments used by the user (as well as state variables and any important contexts) will be supported. The physical actions of the user and the display actions of the system should be present in the scenarios, and specified in the state diagram. Areas where significant decomposition of the CTM has occurred to produce the ITM may indicate that decomposition into sub-states should have occurred in the state diagram.

Abstract attributes mentioned in the ITM should be consistent with control flows in the DFD, and with the state diagram. Attribute values should agree with event parameters (particularly user inputs) in the event trace, state diagrams and scenarios.

Ensure that relationships between processes and data in the ITM agree with those in the DFD, and that state transitions implied by the ITM are described in the state diagram. Check that the objects transformed by operations in the DFD agree with the description in the ITM, and that the transitions and event passing in the state diagram (and event flow diagram) are also compatible.
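Part of this cross-check — confirming that every ‘C’ action needing a service from outside the user interface is backed by a method somewhere in the OMT object model — is mechanical enough to sketch. All names below are invented examples:

```python
def unsupported_actions(itm_c_actions, omt_classes):
    """Return the ITM 'C' actions that no OMT class method supports.

    itm_c_actions maps action name -> required method name;
    omt_classes maps class name -> set of method names (all illustrative).
    """
    available = set().union(*omt_classes.values()) if omt_classes else set()
    return [action for action, method in itm_c_actions.items()
            if method not in available]

# Hypothetical ITM 'C' actions and OMT object model fragment.
itm_actions = {"display file list": "list_files", "save document": "save"}
omt = {"Document": {"save", "load"}, "Folder": {"list_files"}}
```

Any action returned would signal a handshake with the SE stream: either the OMT model needs an extra method, or the ITM action should be revised.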

 

ITM heuristics

 

Consistency:

  • Interface objects should be named so that they will be concrete and recognisable
  • Be consistent in terminology and follow the styleguide
  • Comparable operations should be activated in the same way, and should work in the same way everywhere. Use the same command to carry out functions that seem similar to the user
  • Use identical terminology in prompts, menus and help sections and consistent commands; follow conventions for the environment, so users can reuse knowledge from elsewhere
  • When using metaphors, ensure properties of objects are appropriate

 

Simplicity:

  • Reduce number and complexity of necessary actions to a minimum; the interface should be simple, easy to learn, and easy to use.
  • Maximising functionality works against maintaining simplicity, and needs a balance.
  • Reduce presentation of information to the minimum needed to communicate adequately.
  • Use natural mappings and semantics in the design.
  • Provide information not data
  • Disclose information to the user progressively so they only see it at the appropriate time, but don’t require the user to use special techniques (or keys) to reveal information vital to the task
  • Minimal surprise: users shouldn’t be surprised by the behaviour of the system
  • Reduce short term memory load: keep displays simple, consolidate multiple page displays, reduce window-motion frequency, and allow sufficient training time.
  • Salience: present critical information in a sufficiently intrusive way

 

Menus:

  • Use verbs for menu commands that perform actions
  • Don’t make up your own menus and give them the same names as standard menus
  • Flexibility and ease of use: use accelerators and allow users to tailor the system if appropriate.

 

Feedback

  • Offer informative feedback: for every operator action there should be some system feedback (visual and/or audio) – this can be minor for frequent and minor actions, and more substantial for infrequent and major actions
  • Ensure feedback is timely.
  • Show progress of lengthy operations.
  • Ensure feedback is appropriate to the task (or operation).

 

Error prevention:

  • Prevent errors from occurring in the first place
  • Help users recognise, diagnose and recover from errors: plain error messages which are informative

 

Put the User in Control

  • As far as possible, the user should initiate actions, not the computer
  • The user should always be able to see what they can do and what state the machine is in.
  • Accommodate users with different levels of skill; provide shortcuts for frequent users
  • Avoid modes, and where they are unavoidable make them obvious, visible, the result of user choice, and easy to cancel.

 

User guidance:

  • Consider providing on-line help, and decide what documentation will be required.

ITM Rating table

Please rate the above procedures according to the extent to which they fit the descriptions in the left-hand column.

 

 | Agree strongly | Agree | Neutral | Disagree* | Disagree strongly*
Coherent (i.e. understandable) | | | | |
Complete (i.e. there was nothing missing) | | | | |
Concerned what was desired (i.e. did the procedures allow you to do what you were supposed to?) | | | | |
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further Comments:

* Please describe what the problem was

Phase 3 of MUSE(SE): Display Design stage

The Display Design stage involves specifying the user interface in sufficient detail such that implementation can begin. A number of products are prepared, each describing a different aspect of the interface:

  • The Pictorial Screen Layouts (PSL(y)s) show the layout of screen objects within each window and dialog. They can be produced either using a tool such as Visual Basic or with pen and paper, depending on which is most convenient.
  • The Interface Models (IM(y)s) show the behaviour of individual screen elements using the structured diagram notation common to the other products of MUSE. Each screen object (or group of screen objects) has its own Interface Model. If a screen object exhibits standard behaviour for the environment (e.g. buttons that highlight when the mouse is clicked on them), then there is no need to create an IM for that object; only objects with non-obvious behaviour should be documented.
  • The Dictionary of Screen Objects (DSO) lists all the screen objects specified, whether they have an IM or not. A brief description of the behaviour of each object is provided, and cross-references to IMs made as appropriate.
  • The Dialog and Inter-Task Screen Actuation diagram (DITaSAD) summarises what triggers each screen and dialog (including error messages) to appear and disappear. It is mainly used to specify the appearance of error messages, but also shows the combinations of screens that are allowed. The DITaSAD is specified using SDN.
  • The Dialog and Error Message table is a list of all the error messages that can appear. The format of the table is provided in the procedures.

Heuristics for use during the stage are provided following the procedures.

 

The high level procedures for the Display Design stage may be summarised as follows:

Display Design Stage

 

The procedures will now be described in more detail:

  1. Define screen layouts

1.1. For each screen boundary, prepare a PSL(y):

In general, it is a good idea to start off by designing windows to be as simple as possible; don’t try to make each window do too much, or it will be confusing for the user. If necessary, the window boundaries in the ITM should be revised.

Produce a Pictorial Screen Layout for each screen allocated in the ITM(y), as follows. (PSLs should also be produced for each menu, to show the ordering of items).

For each screen allocated in the ITM(y):

  • refer to styleguide for the standard window behaviours

(in addition to the standard window controls, don’t forget to include any applicable shortcuts for more expert users)

  • note how each PSL is invoked and dismissed
  • identify the screen objects that have been specified by examining the ITM; make a note of each object for the Dictionary of Screen objects
  • refer to each subtask in the ITM
  • group subtask related objects in window according to subtask order (work left to right and top to bottom, like reading a page of text)
  • within the subtask groupings, arrange objects according to DoDD(y) relationships or task order, as appropriate
  • if there is more than one subtask in a dialog, use lines to separate the objects into related groups, or consider a selection mechanism.
  • put the button that dismisses the window at the bottom right of the dialog

 

Where screen layouts are to be designed in colour, a degree of caution should be used. Colour is useful for distinguishing or classifying items, as well as for gaining attention or indicating the context or status of objects. Colours should be chosen so that there will be sufficient contrast between foreground and background objects, and so that particularly striking combinations such as red on blue are avoided. In general, a fairly muted palette should be used, with bright colours reserved for specific circumstances where the user needs to be alerted to something important.
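One way to make the ‘sufficient contrast’ advice concrete is to compute a contrast ratio between candidate foreground and background colours. The sketch below uses the WCAG relative-luminance formula as an assumption; MUSE itself does not prescribe a metric:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour given as 0-255 ints."""
    def channel(c):
        c = c / 255
        # Linearise the gamma-encoded sRGB channel value.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 (identical) to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white scores the maximum 21:1, while red on blue scores only about 2.1:1, which is why the text above singles it out as a combination to avoid.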

  2. Specify IM(y)s

Decide if each object in the window is to behave in the standard manner for the environment – if so, no IM(y) will be required for that object.

For each non-standard object, prepare an IM(y) as follows, bearing in mind that similar objects should exhibit similar behaviours:

 

2.1 For each menu and object:

– determine when it is visible to the user during the task
– determine if the object or menu item is always a valid selection
– when the object is invalid but visible, it should be disabled, and a visible indication (such as dimming) used to inform the user
– ensure that objects are enabled (i.e. not dimmed) when they are a valid selection, and that they are visible to the user
– record the enabling/disabling behaviours using SDN
– reference in the DSO and link to the styleguide

For each menu item:

– specify the behaviour triggered by selecting the menu item as SDN
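The enabling rule above can be sketched as a small function that, given the controls visible in a window and the set valid at the current point in the task, decides which should appear dimmed. The control names are invented examples:

```python
def control_states(controls, valid_now):
    """For each visible control, decide whether it should be enabled or
    shown dimmed at the current point in the task (a sketch of the rule:
    valid selections are enabled, invalid-but-visible ones are dimmed)."""
    return {name: ("enabled" if name in valid_now else "dimmed")
            for name in controls}

# Hypothetical window state: nothing to undo yet, so Undo is dimmed.
states = control_states(
    controls=["Save", "Print", "Undo"],
    valid_now={"Save", "Print"},
)
```

The resulting per-state behaviours would then be recorded in SDN and cross-referenced from the DSO.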

  3. Prepare the Dictionary of Screen Objects

For each screen object used, complete an entry in the DSO as follows (n.b. refer to the heuristics at the end of the stage, as well as the styleguide to help determine the behaviours of the objects):

 

Screen object | Description | Design Attributes
Identify the screen object | Description of the screen object | Describe the attributes/behaviour of the object

 

  4. Store items together

Group the PSLs together with the Dictionary of Screen Objects and the relevant Interface Models.

  5. Deal with window management and errors

 

  • Study the PSLs (refer to the ITM(y) for the points in the interaction where they are used)
  • identify potential errors, and list them out
  • refer to the IM and ITM, and see if the error can be designed out
  • iterate until either:

– the error potential is removed (revise the products), or
– the error is not removed, in which case extend the DET:

– compose an error message
– add it to the DET
– prepare a PSL for the error message dialog
– note the cross-reference to the point in the ITM(y) where the error occurs

5.1 Window management and errors:

For each menu, and each PSL:

  • Document what causes it to be triggered and dismissed
  • Document what happens when it is dismissed (for object windows, decide if a warning dialog is required, for instance if there is a danger of losing work)
  • For non-modal dialogs: Decide if another screen should be triggered as a default, and document it.

 

Decide how errors are to be treated:

  • obtain the ITM, the IMs and the PSLs
  • step through the ITM
  • for each subtask: determine the enabled controls from the PSL and IM

– determine if an error results directly from operating a control; if so, either revise the design to remove the error, disable the control, or specify the error behaviour
– for each H action: determine if an error is possible (e.g. invalid data entry format); if so, devise an error message
– for each C action: determine if a non-completion error is possible; if so, devise an error message

List all of the error messages in the Dialog and Error Message Table (DET), which should take the following form:

 

Message number | Message
Message number (assign a number to the message, and cross-reference to the DITaSAD or PSL(y)) | Content of the message as it will appear on the screen
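The DET is essentially a numbered list of message strings. A minimal sketch of maintaining one, with invented example messages:

```python
# A DET kept as numbered entries; the numbers cross-reference the DITaSAD
# or PSL(y) where each message is triggered. The entries are examples only.
det = {}

def add_error_message(det, text):
    """Assign the next message number and record the message content."""
    number = max(det, default=0) + 1
    det[number] = text
    return number

n1 = add_error_message(det, "The file name contains invalid characters.")
n2 = add_error_message(det, "There is not enough space on the volume.")
```

Keeping the numbering automatic avoids duplicate message numbers when the table grows during design iterations.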

 

  6. Produce the DITaSAD

(Tip: the DITaSAD can be based on the ITM structure by removing the bottom nodes, apart from those that cause screens to appear or disappear, or where an error message might be triggered. It is easiest to produce the DITaSAD in two stages: first ignoring the errors, and then adding them in by referring to the DET.)

  • obtain the ITM, IM(y) and the notes on PSL activation
  • note transition triggers for activation and consumption
  • summarise in diagram
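The tip above — pruning the ITM down to the nodes that cause screens or dialogs to appear or disappear — can be sketched as a recursive filter over a simple (name, children) tree. The example task and node names are invented:

```python
def prune_for_ditasad(node, affects_screen):
    """Copy an ITM tree, keeping only nodes that cause screens or dialogs
    to appear or disappear (or that still have such descendants).
    Nodes are (name, children) pairs; affects_screen is a predicate.
    Returns None if nothing below this node is screen-relevant."""
    name, children = node
    kept = [p for c in children if (p := prune_for_ditasad(c, affects_screen))]
    if kept or affects_screen(name):
        return (name, kept)
    return None

# Hypothetical ITM fragment: only two leaves change the visible screens.
itm = ("save file", [
    ("open save dialog", []),       # triggers a dialog
    ("type file name", []),         # bottom-level input, no screen change
    ("dismiss save dialog", []),
])
screens = {"open save dialog", "dismiss save dialog"}
ditasad_skeleton = prune_for_ditasad(itm, lambda n: n in screens)
```

Error messages from the DET would then be added to the pruned skeleton as a second pass, as the tip suggests.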

The above procedures complete the derivation of the user interface specification; in the remaining stage of MUSE(SE), this specification will be evaluated to ensure its suitability for the intended users and to reveal any areas where improvements need to be made.

Display Design Stage Heuristics

 

Consistency

  • Modes should be avoided, operations should have the same effect whenever they are invoked
  • Functions should work in the same way everywhere in the application.
  • Use the same command to carry out functions that seem similar to the user
  • Use identical terminology in prompts, menus and help sections and consistent commands
  • Follow conventions for the environment, so users can reuse knowledge
  • Ensure properties of objects are appropriate
  • Do use verbs for menu commands that perform actions
  • The user should be able to determine what tasks can be performed and the state of the machine at all times.
  • Don’t change the way the screen looks unexpectedly, especially by scrolling automatically more than necessary

 

User in control

  • Design dialogues to yield closure: organise sequences of actions with a beginning, middle, and end. Support contexts – based on data or tasks
  • User should initiate actions, not the computer.
  • Users should be able to personalise the interface
  • Accommodate users with different levels of skill; provide short-cuts for frequent users
  • Avoid modes, and where they are unavoidable make them obvious, visible, the result of user choice, and easy to cancel.

 

Errors

  • Prevent errors from occurring in the first place by designing them out
  • Help users recognise, diagnose and recover from errors.
  • Do make alert messages self-explanatory

 

Simplicity

  • The interface should be simple, easy to learn, and easy to use.
  • Reduce the number and complexity of necessary actions to a minimum
  • Reduce presentation of information to the minimum needed to communicate adequately. Disclose information to the user progressively so they only see it at the appropriate time, but don’t require the user to use special techniques (or keys) to reveal information.
  • Use natural mappings and semantics in the design.
  • Support orientation: if information is too complex or covers more than you can present at one time, the user should be helped to find relevant information by supporting them in orienting themselves.

 

Use of Colour

  • Use colour coding in a thoughtful and consistent way.
  • Use colour change to show a change in system status. If a display changes colour, this should mean that a significant event has occurred. Colour highlighting is particularly important in complex displays with many entities. If one part of the interface shows error messages in red (say), then all parts should do likewise. Be aware of the assumptions which the users may have about the meaning of colours.
  • Use colour coding to support the task which users are trying to perform, for example when identifying similarities or anomalies in data.

 

Directness

  • Use direct manipulation, and make consequences of actions visible
  • Use familiar metaphors to allow users to use their experience; don’t be too literal about the metaphor, but extend it to support the task in an intuitive way.
  • Support recognition rather than recollection

 

Feedback:

  • The user should be informed of the consequences of their actions, and for every operator action there should be some system feedback (visual and/or audio) – this can be minor for frequent and minor actions, and more substantial for infrequent and major actions, but must be timely. Ensure feedback is appropriate to the task (or operation).
  • Show progress of lengthy operations.

 

Redundancy:

  • Wherever possible, provide summary information in several ways
  • Support orientation: if information is too complex or covers more than you can present at one time, the user should be helped to find relevant information by supporting them in orienting themselves.

 

 

Flexibility:

  • The user should be able to choose modality of task performance, and should have as much control as possible over the appearance of objects on the screen
  • Do make alert messages self-explanatory
  • Don’t use the keyboard where the mouse would be easier (or vice-versa)

Display Design Rating table

Please rate the above procedures according to the extent to which they fit the descriptions in the left-hand column.

 

 | Agree strongly | Agree | Neutral | Disagree* | Disagree strongly*
Coherent (i.e. understandable) | | | | |
Complete (i.e. there was nothing missing) | | | | |
Concerned what was desired (i.e. did the procedures allow you to do what you were supposed to?) | | | | |
 

Time taken:

 

Diagrams Tables Revision Other (specify)
 

Further
Comments:

* Please describe what the problem was

Phase 3 of MUSE(SE): Design Evaluation stage

The design evaluation stage involves assessing the user interface design to ensure that it will provide satisfactory support for the intended users carrying out the tasks specified in the requirements. The evaluation consists of two stages: an analytic evaluation, in which the specifications are reviewed, and an empirical evaluation, in which a prototype is constructed and tried out on users. The techniques used in the empirical evaluation have already been described in detail in the procedures for extant systems analysis, and should therefore be familiar. The findings of the evaluation are used to determine whether any aspects of the design require revision, after which the documentation of the design is finalised ready for final implementation.

The procedures for the Design Evaluation stage can be summarised as follows:

Design Evaluation Stage

These procedures will now be outlined in more detail:

  1. Analytic evaluation:

Draw general conclusions:

Practical

Meets usability requirements

(Check SUN, and complete final column)

Syntax: simple, objective, consistent?

Semantics: computer imposed on UI?

Good relationship with task

 

Obtain the SUN, and review the specifications generated in the Display Design stage, ensuring that all the usability requirements have been met as far as possible. Complete the final column of the SUN, saying how each of the requirements has been met by the design. Appraise the specifications, considering whether the interface will behave consistently and appear simple and objective to the user, and whether the design seems to have a good relationship with the task it is supposed to support. Having followed the design process, the user interface should be based on the user’s view of the task, and should not be driven by the computer-specific aspects; see whether this is the case, and whether the terminology used in the interface is that of the user’s domain or the computer’s.

 

Evaluate specifications

all states reachable

feedback

input sequences all catered for Default states

Functional requirements

– identify device behaviours

– check if UI function

 

Review the specifications, ensuring that every point in the interface can be reached and that there are no ‘dead ends’, such as dialogs with no way back from them. Check that each valid user action is followed by some type of feedback from the device, whether visual or audible; at any point in the interaction, the user should be able to determine the state of the system from the feedback on the screen. Make sure that all the likely sequences of user input are catered for by looking at the original Task Descriptions and checking that users would be able to perform those tasks with the new system.

When the system is started, establish what the default state will be, and ensure that it will make sense to the user. Similarly, make sure that the default states of dialogues will make sense. Finally, make sure that the system will meet the functional requirements in the Statement of Requirements by identifying all the device behaviours; collectively, those behaviours that are not wholly connected with the operation of the user interface should comprise the functionality listed in the Statement of Requirements.
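The two structural checks above — every state reachable, no dead ends — amount to a simple traversal of the dialogue structure, which can be automated if the specification is held in machine-readable form. A minimal sketch, in which the state names and transitions are hypothetical, for illustration only:

```python
from collections import deque

# Hypothetical dialogue structure, for illustration only: each state maps to
# the states reachable from it via some user action.
transitions = {
    "main_window":    ["bookmarks_menu", "edit_dialog"],
    "bookmarks_menu": ["main_window", "edit_dialog"],
    "edit_dialog":    [],                  # no way back: a 'dead end'
    "about_box":      ["main_window"],     # nothing leads here: unreachable
}

def check_dialogue(transitions, start):
    """Return (unreachable states, dead-end states) for a dialogue graph."""
    seen, queue = {start}, deque([start])
    while queue:                           # breadth-first traversal from start
        state = queue.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    unreachable = sorted(set(transitions) - seen)
    dead_ends = sorted(s for s in seen if not transitions.get(s))
    return unreachable, dead_ends

unreachable, dead_ends = check_dialogue(transitions, "main_window")
```

Any state reported as unreachable or as a dead end should be traced back to the specifications and corrected before the prototype is built.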

  2. Empirical evaluation

The empirical evaluation involves building a user interface prototype and testing it on users. The type of prototype will depend on the objectives for the evaluation, and doesn’t necessarily have to be very complicated: a paper prototype consisting of hand-drawn screens is often sufficient for simple evaluations. Tools such as Visual Basic or Director can be used to create prototypes to suit most requirements, ranging from simple prototypes consisting of windows and dialogs with no functionality, suitable for testing navigation through the system, to sophisticated prototypes with functionality very close to that of the final implementation.

Prototype GUI: – define objectives

– choose tool

– build prototype

 

The objectives for the prototype are dependent on what needs to be known about the user interface design. Initially, a prototype may be used to ensure that users find the icons and screen layouts meaningful, and would know what to do if faced with the design. Evaluation at this level does not require the prototype to have any functionality, and hand-drawn paper prototypes or printouts of screen designs may be entirely adequate. With a well-designed paper prototype, much can be learned about the system. By producing a number of paper ‘screen-shots’ showing the intended appearance of the system at each stage of a pre-specified task, simple evaluations can be performed by asking the user to indicate where on the paper screen they would click or which keys they would press, and presenting them with the appropriate screen in response. When planning the evaluation, consideration should be given to what should happen if the user clicks in the ‘wrong’ place; in some cases it may be appropriate merely to inform them that that part of the prototype isn’t implemented, but in many cases it is worthwhile presenting them with the screen that would appear in the full system, as this allows investigation of whether they realise their error, and whether they will be able to recover from it.

The objectives of the evaluation should be to determine whether the usability requirements set for the system at the end of extant systems analysis (activity 4), and recorded in the SUN, have been satisfied. The main requirements for which levels should have been set are: Productivity, Learnability, User satisfaction, Memorability and Errors. The priority of the requirements should have been determined earlier, and testing should aim to provide evidence that the most important requirements have been satisfied; this has implications for the type of prototype that is required. A prototype intended to assess productivity might need to simulate the functionality of the target system, and behave realistically over time, whereas assessment of the number of user errors might be performed satisfactorily with a paper prototype. Consideration should be given to how realistic the prototype needs to be in terms of appearance, functionality, and temporal behaviour, and how well the available tools would support this. The scope of the prototype needs to be decided before it is constructed: consider whether the prototype needs to represent all the screens or just some of them, and whether it needs to simulate the functionality accurately or can merely contain mock data. The fidelity of the prototype might also be important; does it need to bear a close resemblance to the target system in terms of visual appearance and response times? A further factor that might influence the choice of tool is whether the prototype will need to interact with other systems such as databases or word processors, or whether it will be sufficient to simulate this.
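One way of keeping the prototype’s scope tied to the requirements is to record each SUN entry with its target level and priority, and to test in priority order. A minimal sketch — the criteria names follow PLUME, but the particular entries, target levels and priorities shown are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical SUN entries, for illustration only.
@dataclass
class UsabilityRequirement:
    criterion: str   # PLUME criterion, e.g. Productivity, Learnability, Errors
    target: str      # level set at the end of extant systems analysis
    priority: int    # 1 = most important, tested first

requirements = [
    UsabilityRequirement("Errors", "no more than 2 errors per task", 2),
    UsabilityRequirement("Productivity", "task completed within 3 minutes", 1),
    UsabilityRequirement("Learnability", "usable after 10 minutes' training", 3),
]

# Test the most important requirements first; their nature also determines
# how realistic the prototype needs to be.
ordered = sorted(requirements, key=lambda r: r.priority)
```

The highest-priority requirement then drives the choice of prototype fidelity: here, a productivity target would call for a prototype with realistic temporal behaviour.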

– investigate prototype with design team and users:

user training

scenario briefing

data collection (PLUME)

data analysis

report results

 

Investigation of the prototype with the design team is essentially the same as the activity performed during extant systems analysis, when the investigator familiarised themselves with the system (activity 2.1). Members of the design team should experiment with the system to form a general impression about the usability of the interface. Experimentation with the prototype should also provide an input to planning the evaluation, and should inform decisions about how much training the users involved in testing should have prior to the evaluation, and how much information should be contained in the task that the users will be required to perform during the evaluation. The data to be collected should be determined prior to the evaluation, as should the way it is to be analysed and reported.

 

Design evaluation:

– select approach

expert heuristic evaluation

user testing / observation

user survey

– identify participants

– decide session protocol

– pilot evaluation

 

Having determined the data to be collected, it should be possible to decide the form of the evaluation. If users are unavailable, or only a fairly straightforward initial assessment of usability is required, a heuristic evaluation may be appropriate. If users are available, observational studies of them using the prototype should be conducted, similar to those conducted during extant systems analysis. If desired, a questionnaire could be administered to the users after they complete the task, eliciting their opinions about the user interface by asking them to rate how easy they found it to use, note aspects they liked or disliked, and compare aspects of the new and old systems.

The plan for the evaluation should contain the number of participants and where they are to be drawn from, and the way in which the sessions are to be conducted should be decided before the event. A pilot study should be conducted, even if only on a single user, to ensure that the evaluation can be conducted as planned.

 

Collect data:

– real-time note taking

– video recording

– thinking aloud

– heuristic evaluation

 

The data collection techniques listed above were described as part of the procedures for extant systems analysis. The evaluation should consist of observation of users following a predetermined task (possibly the task used during extant systems analysis) whilst using the ‘thinking aloud’ technique discussed earlier. The investigator should observe, noting events of interest, errors made by the users, and time taken to perform the task. Video recordings of the users may prove useful during the analysis of findings. If a heuristic evaluation is to be performed, one or two of the design team should evaluate the interface against the heuristics (the heuristics used in the display design stage would be a suitable set for this purpose).

Analyse data: user testing:

time to complete

number and nature of errors

user problems

user comments

user survey statistics

 

Analyse the data collected in the evaluation to produce a summary of the findings. The above categories are intended as a starter set, and other categories can be added as appropriate.
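The tabulation itself is straightforward; a minimal sketch of summarising the starter-set categories from per-participant session records (the records and field names shown are hypothetical, for illustration only):

```python
# Hypothetical per-participant session records, for illustration only:
# task completion time in seconds and number of errors observed.
sessions = [
    {"user": "P1", "time_s": 240, "errors": 3},
    {"user": "P2", "time_s": 180, "errors": 1},
    {"user": "P3", "time_s": 300, "errors": 5},
]

def summarise(sessions):
    """Produce summary statistics for the evaluation report."""
    n = len(sessions)
    return {
        "participants": n,
        "mean_time_s": sum(s["time_s"] for s in sessions) / n,
        "mean_errors": sum(s["errors"] for s in sessions) / n,
        "worst_case_errors": max(s["errors"] for s in sessions),
    }

summary = summarise(sessions)
```

Qualitative categories (user problems, user comments) would be collated alongside these figures rather than computed from them.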

 

impact analysis

analyse problems wrt usability criteria (SUN/PLUME)

rank order problems

generate design requirements

estimate resource requirements

review

 

Once the data has been summarised, the findings should be reviewed in the light of the usability criteria in the Statement of User Needs and the usability requirements determined at the end of the extant systems analysis stage. An assessment should be made of the likely effort required to rectify each of the usability problems noted. The heuristics provided at the end of this stage allow estimation of which products are likely to require revision, based on the types of problem observed.

  3. Agree redesign

Assess problem (prioritise according to severity)

Agree action – solve next cycle

– solve now

– no action

 

Once the effort required to rectify the usability problems noted during evaluation has been estimated, the severity of the problems should be assessed and the problems should be prioritised. By comparing the severity of the problems with the effort required to rectify them, decisions can be made about whether to solve a problem now, wait until the next cycle, or take no action.
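The severity-versus-effort trade-off can be made explicit by logging each problem with both ratings and applying an agreed decision rule. A minimal sketch, assuming 1–3 rating scales and one possible policy — both the scales and the rule are illustrative choices, not part of the method:

```python
# Hypothetical problem log, for illustration only. Severity and effort are
# rated 1 (low) to 3 (high).
problems = [
    {"id": "delete under Edit menu", "severity": 3, "effort": 1},
    {"id": "long menus redraw slowly", "severity": 2, "effort": 3},
    {"id": "'?' icon unexplained", "severity": 1, "effort": 1},
]

def decide(problem):
    """Weigh severity against the effort needed to rectify the problem."""
    if problem["severity"] >= problem["effort"]:
        return "solve now"           # fixing costs no more than living with it
    if problem["severity"] >= 2:
        return "solve next cycle"    # severe but expensive to fix immediately
    return "no action"

# Rank problems by severity, most severe first, then decide each one.
ranked = sorted(problems, key=lambda p: -p["severity"])
decisions = {p["id"]: decide(p) for p in ranked}
```

In practice the decision rule would be agreed by the design team; the point of the sketch is that severity and effort are recorded separately, so the trade-off is visible and auditable.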

  4. Finalise documentation

Once the design has been finalised and the revisions made, the user interface specification should be checked to ensure that it is complete and correct prior to implementation of the finished system. The Display Design products can now be used to define an OMT dynamic model according to the following scheme:

  • The Dialog and Inter-Task Screen Actuation diagram (DITaSAD) can be used to derive the main states and transitions in the dynamic model for the user interface class (and any subclasses), to determine the default states of the device, and to determine the extent of any concurrency. There should be a state for each of the windows and dialogs specified, as well as for the menus; it should be possible to derive the transitions directly from the model.
  • The Dictionary of Screen Objects lists all the interface objects specified; in conjunction with the PSLs and IMs, it can be used to derive the substates in the diagram, using the ITM for reference.
  • The Pictorial Screen Layouts and the Dialog and Error Message Table should be kept with the SE products, and used as a specification of the user interface components to be implemented.
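The states and transitions derived from the DITaSAD can be recorded directly as a transition table, which also makes the default state and the behaviour on undefined events easy to check. A minimal sketch of such a dynamic model — all state and event names here are hypothetical, for illustration only:

```python
# One state per window, dialog and menu specified, plus the transitions
# between them; the default state is taken from the DITaSAD.
default_state = "bookmark_window"

transitions = {
    ("bookmark_window", "open_edit"):   "edit_bookmark_dialog",
    ("edit_bookmark_dialog", "ok"):     "bookmark_window",
    ("edit_bookmark_dialog", "cancel"): "bookmark_window",
    ("bookmark_window", "open_menu"):   "bookmarks_menu",
    ("bookmarks_menu", "close"):        "bookmark_window",
}

def step(state, event):
    """Follow a transition; an undefined event leaves the state unchanged."""
    return transitions.get((state, event), state)

state = step(default_state, "open_edit")
```

Substates derived from the Dictionary of Screen Objects would nest inside these top-level states in the full OMT dynamic model.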

 

Evaluation Rating table

Please rate the above procedures according to the extent to which they fit the descriptions in the left-hand column

 

                                              Agree strongly   Agree   Neutral   Disagree*   Disagree strongly*

Coherent (i.e. understandable)                     ___          ___      ___        ___            ___

Complete (i.e. there was nothing missing)          ___          ___      ___        ___            ___

Concerned what was desired (i.e. did the
procedures allow you to do what you were
supposed to?)                                      ___          ___      ___        ___            ___
 

Time taken:

 

Diagrams: ___   Tables: ___   Revision: ___   Other (specify): ___
 

Further
Comments:

* Please describe what the problem was

 

 

Heuristics for determining the likely extent of design iterations based on evaluation of the prototype design

Part 1: Problems with the behaviour of the user or device noted during observation and by styleguide assessment.

  a) If the system does not support the task appropriately (i.e. forces the user to perform the task in an unnatural order, or does not support all aspects of the task), investigate the CTM(y)
  b) If users experienced at the task do not understand how to perform it using the prototype system, investigate the CTM(y)
  c) If the dialogue structure is problematic, or the system does not provide appropriate feedback, investigate the CTM(y) and the SUTaM
  d) If the content of dialogues confuses the user, or if the user inputs appear to be problematic, revise the ITM(y)
  e) If the layout of windows or dialogues is problematic, revise PSL(y)

 

Part 2: Problems interfering with the user’s ability to think about the task or to use their existing task knowledge, noted during verbal protocols.

  a) If the user’s thought processes appear to be disrupted by performing the task with the system, check the CTM(y) and SUTaM(y) against the SUN.
  b) If the users make incorrect assumptions about the target system, check the SUN and DoDD(y).

Part 3: Problems concerning the task objects and their attributes, noted during observation of the users or by questionnaire.

  a) If the representation of task objects or their attributes is problematic, or does not appear to match the goals of the task, check the products from CTM(y) onwards against the DoDD(y)
  b) If users do not achieve an acceptable level of quality (PRODUCTIVITY) when performing the work, check the products from CTM(y) onwards against the SUN(y)

Part 4: Problems related to the costs incurred by the user or device when performing the task, noted during observational studies.

  a) If the users find it difficult to learn the new system, check the products from CTM(y) onwards against the SUN(y). (LEARNABILITY, MEMORABILITY)
  b) If the users spend too long doing the task, or make an unacceptable number of errors, check the products from CTM(y) onwards against the SUN(y). (ERRORS, USER SATISFACTION)

 

Part 5: Problems with the physical aspects of the worksystem, noted during assessment using guidelines or heuristics:

  a) If there are problems related to the physical aspects of the system, check the SUN(y). Problems relating to the appearance or layout of the device may require revisions to DSO and PSL(y)

Part 6: Problems related to mismatches between aspects of the design uncovered by assessment with the styleguide or guidelines (N.B. these problems can be difficult to diagnose, and may result from errors in any one of a number of products. If the diagnoses below do not appear to describe the problem, suspect errors or omissions in the SUN)

  a) If the behaviours specified for the user or device appear inconsistent with the types of interface object chosen, the domain objects or the task goals, check the products from CTM(y) onwards against the SUN(y)
  b) If the interface objects appear inconsistent with the goals of the task or the user’s knowledge or mental processes, check the products from CTM(y) onwards against the SUN(y)
  c) If the user or device behaviours appear inconsistent with the user’s knowledge or mental processes, check the products from CTM(y) onwards against the SUN(y)

 

MUSE(SE)

Example

 

MUSE(SE) Phase 1: Extant Systems Analysis Stage

The following example concerns a notional redesign of the bookmark editing facilities of NetScape Navigator 2.0. The example was selected firstly because it concerned an application that would be familiar to most readers, secondly because bookmark management had been noted to cause problems for a number of users (and thus there would be a problem to solve), and finally because the design products to be generated would be concise and easily understood.

  1. Examine Documents:  Obtain the statement of requirements
     Establish the requirements

A notional set of requirements (shown below) was prepared; the ‘designer’ who was to apply the method had not been involved in setting the requirements.

 

 

Statement of requirements

 

The current system for bookmark management of NetScape 2.0 is unwieldy for users with large bookmark collections.

 

The target system should support the bookmark management facilities of the bookmark window of NetScape 2.0, so that the user can re-organise their bookmarks into a form such that they are happy with the ‘Bookmarks’ menu, and can use it to navigate effectively. The target system should be designed to be compatible with the Apple Macintosh styleguide.

 

The functionality is as follows:

Display bookmarks

Select a bookmark (or bookmarks)

Change order of bookmarks

Collect into groups (using folders and separators)

Add a bookmark

Edit a bookmark change name label

change URL

add comments

show age and date last visited

Delete a bookmark

Create an alias

 

(Merging bookmarks won’t be considered in the current study.)

 

 

  2. Examine the systems:  Identify Users
     Identify Systems
     Identify Tasks
     Identify circumstances of use

 

Users

 

number = millions

Type of users: Highly variable; internet users

Experience level: Variable – novice to expert

Systems used: Assume Apple Macintosh MacOS 7.x experience.

Education level: variable from young children to postdoctoral level

Age: All (likely to be predominantly 17-35)

Classes: Novices

Experienced users (experience problems due to difficulty managing large bookmark files: categorisation problems, navigation during browsing, obsolete links, long list unless folders used)

(etc.)

 

Tasks

 

Reorganise bookmarks

Navigate through to select desired bookmark

Storing for reference

Export bookmarks (or subset) for use by others

Use bookmark as a placeholder (temporary) between sessions – can add with one operation, but takes several to delete

Deleting bookmarks

(more about tasks in the following section; information elicited by observing a user)

Circumstances of use

managing bookmarks – housekeeping (infrequent)

If bottom of bookmark menu is longer than the screen, need to rearrange it.

tasks include:

Moving items nearer to the top of the menu

Deleting obsolete (or no longer functional) bookmarks if they are very old and not used for a long time [in the existing system a ‘?’ appears after some length of time]

Putting into folders, moving from one folder to another, duplicating

Just bookmarked (i.e. management of 1 or 2 bookmarks) want to put straight into folder or delete as desired (once or twice a week, frequently)

The more frequently the second is done, the less frequently the first needs to be done.

 

Discretionary use – can stick with big long list

 

Motivation:

Provide quick and easy access to large number of information sources.

Make sense of the internet

Empowerment – enhance speed of access to information and understanding of the information sources collected. This is manifested as a sense of control of the categorisation methods and understanding of their resource capabilities.

2.1  Familiarise investigator with the system by:
     Observational studies
     Task execution

 

NOTES ON OBSERVING ONE USER OF NS 2.0

 

Delete bookmark is under ‘Edit’ menu – makes errors in selecting menu, although shortcuts known.

 

Sorting: Moves bookmarks by dragging from bottom of list to desired position, either in the list or in a folder.

Inside the folder, the bookmarks are not sorted alphabetically, although NS offers the facility to do so. Dropped items go to the top of the list, unless explicitly placed elsewhere inside the folder.

Can write comments about the bookmark so they can be seen only when ‘Edit Bookmark’ window is opened.

 

Creates folder, slots in under current position, drag and drop bookmarks into folder.

Deleting folder deletes folder and contents.

Not vital for menus to be visible on one screen, but if the menu is too long, it takes time for the menu to be drawn and scroll and the user may slide mouse off the menu and have to repeat the selection attempt.

 

(etc.)

 

Following observation of one user, the tasks and subtasks were identified. (The following is a transcript of the hand-written notes made during observation of the user).

 

Task: Add Bookmark

Task Goal: Add a bookmark to facilitate later retrieval of the document.

Frequency: Often

Duration: Short

Error frequency: Apparently few errors

Decomposition:

Add bookmark consists of: Get the desired document in the active window

then either:

–   Press ⌘-D

–   select ‘Add Bookmark’ from the ‘Bookmark’ menu

 

 

Domain objects:

 

 

 

 

Task: Sort Bookmarks

Subtasks: 1. Display bookmarks

  2. Add folder
  3. Add separator
  4. Move bookmark in list
  5. Add bookmark to folder
  6. Remove bookmark from folder
  7. Delete bookmark
  8. Duplicate/alias bookmark
  9. Edit bookmark info

 

These subtasks are now decomposed to give a complete description of each, and also of the task ‘sort bookmarks’.

 

Sort Bookmarks

Performer: User

Task Goals: Arrange bookmarks in window so that bk menu supports easy access to bookmarks: creating useful subgroups, ensuring bk list is not too long, ensuring menu items support identification of the appropriate URL.

Frequency: Approximately once a week, although this varies greatly between users.

Duration: This varies: if the task is performed frequently then duration is shorter. Large scale reorganisation of bookmarks to different categories is a different subgoal.

 

Error frequency: ?

 

Subtasks: 1 to 9 as above

 

Criticality: None, although if bookmarks list is too long, browser may sometimes crash when bookmarks menu is opened.

 

 

 

Subtask: Selecting bookmark

Performed by: User

Goal: To access the page which the bookmark refers to

Frequency: Varies

Duration: Very quick and simple

Error frequency: Occasionally the pointer slips off the menu, especially if the menu is very long. The item next to the desired bookmark is occasionally selected, or the wrong bookmark is chosen due to the title not corresponding to the user’s knowledge of the page, ambiguous titles, etc.

 

Subtasks: Click on bookmarks to access menu. Hold down mouse and scroll down menu to item. Release the mouse button to select the item.

 

Criticality: Not vital, the user may simply select another bookmark to recover from an incorrect choice.

 

(etc.)

 

 

User costs:

Structural – training. Some similarities to Finder, enabling use of prior experience, but only partially.

Needed prompting on some tasks (delete)

Didn’t know what the question marks on Bookmarks meant.

Physical: Holding down mouse whilst navigating large menu structures is difficult, as can slip off and have to repeat.

Mental: Not so high for adding task, using task (though finding bookmark when name is not useful relies on memory of all bookmarks added to infer likely candidates). Bookmark management: Some errors caused by use of ‘Finder’ like look for window, although it has different functionality.

Device costs: Not overly repetitive or costly.

 

Candidate Domain Objects:

Internet Page

URL

Title (Useful | Not useful)

Bookmark Name (Unsorted | Sorted)

Bookmark window (Contains bookmarks)

Folder (Open | Closed)

Separator

 

 

 

Observational studies:

User: [The user was identified by initials]

Used hold down mouse button menu [automatic pop-up] to create bookmarks

Edit bookmarks

Used cut and paste to transfer between folders

Had trouble locating delete – dragged to wastebasket instead (this worked). However, this differs from Finder functionality, as the wastebasket didn’t get fat.

Had trouble identifying bookmarks from the name only, instead, used location in menu (i.e. 2 from bottom to infer the right bookmark).

2.2  Interview user representatives to obtain problems and task objects using:
     Card sorting
     Structured interviews

 

Following the initial observation, three users were interviewed about their bookmark management. Two used NetScape 2.0, and one used Internet Explorer.

A transcript of the notes from interviewing one user of NetScape is shown below.

 

 

Notes from Interview with User 2:

 

I use it so I can go back to interesting or useful pages later.

 

Use window to sort into related groups or folders

 

Groups are put into folders, which have a name (may put items into folder prior to naming it, and then move the folder to the location where I want it and give it a name) and bookmarks inside

 

I put the best bookmarks at the top of the menu. The best bookmarks are the ones I use most often.

 

I use dividers to split the list up a bit so it looks right.

 

When I organise bookmarks, I alphabetise selected bookmarks. This is only available when a folder or a set of adjacent bookmarks is selected. It arranges these in alphabetical order, in the same place as the original block in the list.

 

Problems:

 

Naming decisions: If I can’t decide on a name, it occurs to me that this grouping might not be appropriate and I move things around again.

Deleting things using menus: It’s in the Edit menu, and I always look under ‘Item’ first.

Renaming: I have to go to edit bookmark; this is a frequent task, or rather it would be if it was easier to do.

The question mark appears in the Bookmark window, but there’s nothing in the menu. This would be as useful in the menu, to show links that I’ve never visited.

There is no way of sorting or collating bookmarks as you add them, and there’s also no way you can change the name or add a description at the time of adding them either.

The finder metaphor doesn’t work properly.

It would be useful if it stored the title of the link rather than the URL, which it does when you bookmark a link.

 

Below are three ‘Mind Maps’ taken from interviews with browser users. Notice that although each map is different, there are similarities between them, and that the mind maps vary in their completeness or ‘correctness’.

‘Mind map’ from interview with user using NetScape on Mac

Mind Map1

‘Mind map’ from interview with second user using NetScape on Mac

Mind map2

‘Mind map’ from interview with a third user using Microsoft Internet Explorer on PC

Mind Map3

2.3  Record findings of 2.1 as preliminary TD(ext) products, and separate those of 2.2 into behaviours and domain information

The following diagrams were derived from the study of a user on Microsoft Internet Explorer. (The diagram was originally one large diagram, and has been split up only for the purposes of the example.) Other diagrams were produced for each user of NetScape, and for the related systems studied. Diagrams are created to document the actions of individual users at this stage, and are combined into a single diagram later on.

TD

TD:B

TD:C

TD:D

TD:E+F

 

Example entry from supporting table:

 

 

 

Title:_MSIE Task Description for user 3________________       Page:_1__________

Date:_28/11/97__________                                                                              Author:_SC_______

 

Name:                Accept title page as name
Description:         User has to choose a name
Observation:         Giving it an existing name will delete the old one without warning
Design Implication:  This should be avoided in the target system
Speculation:         Allow multiple names which are the same
(etc.)

 

 

2.4  Construct ‘typical’ tasks to be used during testing.

 

Note: When the users sat down at the machine, it was already running the browser, which was displaying a page of information about the day’s news. The browser had been set up with a large bookmarks file. The task was designed so that the users would use as many as possible of the functions that had been identified as being of interest, whilst remaining reasonably realistic. (The task shown below lacks any context or introduction; this is because the users received verbal instructions as well as the written task.)

 

 

Please use the browser to perform the following task:

 

  • Make a bookmark for the current page
  • View the current homepage
  • Use the bookmark to return to the first page

 

Using the ‘Bookmarks’ window:

 

  • Add a folder so it appears near the top of the menu, and call it ‘UCL Pages’. (Put it after the second ‘Kite Shop’ bookmark).
  • Insert a separator above the folder
  • Move the bookmark into the folder, and rename it ‘MSc Homepage’
  • Change the URL of the bookmark to

“http://www.ergohci.ucl.ac.uk/msc-info/”

  • Delete the bookmark.

 

2.5  Study the systems using:
     Informal / Observational studies / Usability tests
     Concurrent verbal protocol
     Task execution
     PLUME, Guidelines and heuristics

The following notes were made based on a user observed using Finder. As they were made in real-time whilst the user was observed, they are somewhat confused, but allowed the task to be reconstructed after the observation to produce a Task Description diagram:

 

 

Finder Analysis

 

User: XX

Task: Tidying up games on Hard Drive.

 

Create Folder

 

Opens File menu and selects ‘New Folder’

Names it by typing immediately (the name is highlighted by default, which means that it will change to whatever is typed before the mouse is clicked or return or enter is pressed).

Opens folder and moves it over on screen

Makes the original window active

Added at top of list, as in ‘last modified’ view – at end if viewed alphabetically

Shift-clicks on ‘Games’ folder visible without scrolling. Drags and drops into folder in that window.

Scrolls…

finds another, clicks on it and drags into games folder window, the games folder remains inactive but now contains the new item,

 

View by icon – jumbled screen of icons, not in list

 

(See procedure 4 for an example of observations grouped into the PLUME categories.)

Extract from the Ravden and Johnson Checklist, completed for NetScape Navigator 2.0 (the other systems were not evaluated using the checklist), with evaluator’s comments in italics.

 

 

SECTION 3: COMPATIBILITY

 

The way the system looks and works should be compatible with user conventions and expectations.

 

 

 

1 Are colours assigned according to conventional associations where these are important? (e.g. red = alarm, stop) N/A
2 Where abbreviations, acronyms, codes and other alphanumeric information are displayed:

(a) are they easy to recognize and understand?

N/A
(b) do they follow conventions where these exist? N/A
3 Where icons, symbols, graphical representations and other pictorial information are displayed:

(a) are they easy to recognise and understand? Not bkmk and unused bkmk (‘?’ icon)

(b) do they follow conventions where these exist?  
4 Where jargon and terminology is used within the system, is it familiar to the user?  
5 Are established conventions followed for the format in which particular types of information are displayed? (e.g. layout of dates and telephone numbers) Bkmks arranged by user, unlike most which are alphabetic
6 Is information presented and analysed in the units with which the users normally work?   (e.g. batches, kilos, dollars) N/A
7 Is the format of displayed information compatible with the form in which it is entered into the system? Sometimes the bookmark title is not the filename, users sometimes have difficulty finding these bkmks
(etc.)

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

They extrapolated from ‘Finder’ style commands which are not all similar

Repeat procedures 2.1, 2.3, and 2.5 for any related systems identified.

 

In addition to observing the 3 users as they used browsers, two related systems were selected for study.

 

ResEdit, a resource editing program used on Apple Macintosh computers, was selected for its menu editing features (the remainder of the package was not studied).

 

Finder, an application that runs continuously on Apple Macintosh computers, performs a role equivalent to the Microsoft Windows File Manager and Program Manager by supporting the desktop and the representation of file structures on mounted drives. It was selected because it is a very familiar application for the intended user group.

 

  1. Decompose tasks to: produce TD(ext); process TD(ext)

 

Diagram derived from related system analysis: Finder TD(ext)

TD (finder)

Subdiagram A

  1. Identify usability requirements

 

 

Notes for SUN

 

– the task is lax ordered

 

Identify Usability Requirements

 

Productivity

Must not be less productive than the current implementation. Measure this by the number of operations required for test scenarios.

Learnability

Must be better than current system. The menu items are difficult to use unless experienced, as are several other functions (e.g. clicking on bk in window opens page rather than allowing rename as in Finder). Use bk test scenarios – user should be able to perform these actions and be unambiguous about how (or at least one way of doing each operation).

User Satisfaction

Should be able to easily regroup items such that they support the task of information retrieval. Current system is lacking only in learnability and the easy access to a description. Change these so description etc. can be accessed, and used as grouping aid.

Memorability

User should be able to identify bks from title easily. Must be consistent operations across objects to be transformed.

Errors

No errors on user testing scenarios for naive subjects (i.e. new users), though they should have computer experience to allow a reasonable degree of pretraining. No non-recoverable errors: the user should be able to undo.
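Requirements stated in this measurable form can be captured directly as pass/fail checks against test-scenario results. The sketch below is illustrative only: the metric names, dictionary layout, and example values are assumptions, not part of the MUSE method or the study.

```python
# Illustrative sketch: usability requirements expressed as pass/fail checks.
# Metric names and example values are hypothetical, not from the study.

def check_requirements(measured, baseline):
    """Compare measured results for a test scenario against the targets."""
    return {
        # Productivity: must not need more operations than the current system.
        "productivity": measured["operations"] <= baseline["operations"],
        # Errors: no errors for naive (but computer-experienced) subjects.
        "errors": measured["errors"] == 0,
        # Recoverability: no non-recoverable errors - everything undoable.
        "recoverability": measured["unrecoverable_errors"] == 0,
    }

measured = {"operations": 5, "errors": 0, "unrecoverable_errors": 0}
baseline = {"operations": 7}  # operation count in the extant system
print(check_requirements(measured, baseline))
```

Recording targets this way makes the final evaluation stage (revisiting the PLUME categories) a matter of filling in the measured values.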

 

MUSE(SE) Phase 1: GTM stage

Generifying tasks to produce GTM(ext)s

GTM(ext) for Finder

25.GTM(ext)(finder)

GTM(ext) for ResEdit

26.GTM (ResEdit)

 

 

 

 

GTM(ext) for Microsoft Internet Explorer

27.Bookmark Tasks

GTM(ext) for NetScape Navigator

28.NetscapeGTM(ext)

29. Subdiagram A        30. Subdiagram B+C

  1. Generify (scope system at task level)

This involves the following steps, which are described in more detail afterwards.

Prepare GTM(y)

31.Bookmarks GTMy

Prepare GTM(x)

 

 

 

32.Bookmarks GTMx


The models were checked by asking the users of each system whether the diagrams described their tasks, and whether they could think of anything to add.

MUSE(SE) Phase 2: SUN stage

  1. Document user problems

The Statement of User Needs is reproduced below. Some of the sections have been shortened for the current example, and most of the ‘Features’ tables have been omitted apart from those useful for the purposes of illustration. The final column was completed during the evaluation stage.

 

Title:____SUN: NetScape Bookmark window___       Page:_1__________

Date:____15/11/97_____________                                                                                              Author:_SC_________

User and Device Actions

 

Problem: No feedback if ‘delete’ key pressed (must use cmd-delete in extant system)
Caused by: Functionality differs from user model of functionality (possibly from other applications)
Consequences: User frustration
Addressed by: Different keys – delete is the delete key now

Problem: Change folder name & change bookmark name done differently
Caused by: Use of ‘Edit bkmk’ rather than highlight and type new one
Consequences: Could be difficult to change names – edit bookmark is an obscure menu item name
Addressed by: More Finder-like, i.e. change name in list window

 

Feature: User may not realise that they can sort bkmks as desired, i.e. non-alphabetically
Caused by: Lack of auto alphabetisation, although this is a desirable feature; ordering is part of the task
Addressed by: Modal dialog indicates they can choose location

User and device actions

 

Problem: Delete folder deletes contents without warning
Caused by: No warning message
Addressed by: Not addressed – decided consistent with Finder functionality

Problem: Unrecoverable errors
Caused by: Can’t undo unsuccessful sorting command
Addressed by: Now you can: Apple-Z to undo, and if no selection prior to sort, error message 4 is shown

Problem: Menu slow to appear
Caused by: If too long, device delay causes sluggish response
Addressed by: Hardware issue; controlling menu length is a goal of task performance, which has been addressed

Problem: Duplicated bkmk titles
Caused by: System not prompting for alternative to default
Addressed by: OK to have multiple; user then chooses name & can see any duplications in folder window (Screen 2)

Problem: ‘Add bookmark’ can be done with bookmark window open (adds for front-most browser window, which is not visible at the time)
Caused by: Not disabling the menu item when the bookmark window opens
Addressed by: Apple-D disabled when bookmark window is active

(etc.)

Task (Domain) Objects

 

Problem: Description not accessible unless in edit bookmark – offers poor support for identification of page when browsing; only one can be viewed at a time, and not moved, so not useful for comparisons
Addressed by: Bookmarks are ordered as saved, which is better (this wasn’t directly addressed, because it doesn’t interfere with task performance that seriously)

Problem: ‘?’ causes confusion
Caused by: ? means unvisited, but user may think differently
Consequences: Might think it means ‘no longer valid’ and delete it
Addressed by: Listing in bookmark window now has ‘Date last visited’ – this reads ‘Not visited’ instead of using the icon

User and device costs

 

Problem: Menu items difficult to identify
Caused by: Poor menu categories/inconsistency; poor titles for menu items
Addressed by: Menus reorganised

Feature: Target – more learnable than current system
Addressed by: Yes: fewer errors and less confusion

Feature: Target – memorability; bkmk names sometimes incomplete
Caused by: Auto naming
Addressed by: Prompt user for better name when making bookmark in Sc2

Feature: Target – computer users who have no experience of browsers should be able to use the bookmarks without training
Addressed by: If they have Finder experience, the functionality is similar enough

Physical aspects; device construction, appearance and layout.

 

Problem: Try to do things which have different procedures
Caused by: Visual similarity to Finder
Addressed by: Emphasis on ‘Bookmark’ instead of ‘File’; functionality is now more Finder-like

Miscellaneous

 

Problem: Delete bookmark hard to find
Caused by: It’s in ‘Edit’ whereas all other bookmark operations are under ‘Item’
Addressed by: Changed to delete key

Problem: Sort bookmarks not easy to find
Caused by: Bad menu choice
Addressed by: Changed menu design

MUSE(SE) Phase 2: DoDD(y) stage

Analyse task domain.

DoDD(y):

DoDDy

Node Description Number Relation
Title The title of the bookmark which identifies the page 1 shown in
Bookmark An instance of a bookmark 2 has a
Bookmark window The window that the bookmarks are edited in 3 shows
Bookmark menu The menu that the bookmarks are chosen from whilst browsing 4 shows
Title The folder title 5 shown in
Bookmark An instance of a bookmark 6 has
Folder A folder in the bookmark window 7 contains
Bookmark list Ordered collection of bookmarks 8 contains
Folder A folder in the bookmark window 9 contains
Folder A folder in the bookmark window 10 has
URL The internet location referred to by a bookmark 11 refers to
Page A www page or internet resource 12 has
Rename bookmark Behaviour 13 changes
View menu Behaviour 14 shows
Open window Behaviour 15 shows
Change description Behaviour 16 changes
Delete bookmark Behaviour 17 deletes
Move Bookmark Behaviour 18 changes
Add separator Behaviour 19 creates
Delete Separator Behaviour 20 deletes
Add bookmark Behaviour 21 creates
Rename folder Behaviour 22 changes
Change URL Behaviour 23 changes
Open Bookmark Behaviour 24 opens appropriate
Delete folder Behaviour 25 deletes
Add folder Behaviour 26 creates
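A DoDD(y) of this kind is essentially a labelled graph of domain objects and behaviours. As an illustrative sketch only, a few of the rows above could be held as (node, relation, node) triples and queried. Note that the flattened table does not show the target of each relation, so the targets below are inferred for illustration, and the helper function `relations_to` is hypothetical.

```python
# Sketch: a few DoDD(y) entries held as (node, relation, node) triples.
# Relation targets are inferred from context, for illustration only.
dodd = [
    ("Title", "shown in", "Bookmark window"),
    ("Bookmark", "has a", "Title"),
    ("Bookmark window", "shows", "Bookmark list"),
    ("Bookmark menu", "shows", "Bookmark list"),
    ("Folder", "contains", "Bookmark"),
    ("Bookmark list", "contains", "Bookmark"),
    ("URL", "refers to", "Page"),
    ("Bookmark", "has", "URL"),
    ("Rename bookmark", "changes", "Title"),
    ("Delete bookmark", "deletes", "Bookmark"),
    ("Add bookmark", "creates", "Bookmark"),
    ("Change URL", "changes", "URL"),
]

def relations_to(obj):
    """All (node, relation) pairs whose target is the given object."""
    return [(s, r) for (s, r, t) in dodd if t == obj]

print(relations_to("Bookmark"))
```

Such a query makes it easy to check, for instance, that every object has at least one behaviour acting on it.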

 

34 User Object Model

Extract from Action – Object Matrix

 

    Bookmark Bk list Bk menu Bk window (etc.)
add bk C U U U
add folder U U
add sep.tor U U
K delete bk D U U
K delete folder U U
K delete sep.tor U U
sort bks U U
make alias C U U
F Rename bk U U
F rename folder U U
S change descr.
open bk page R R
view menu R C
open window R C
F move bk U U
S change URL

Key: K = key only; F = Finder functionality; S = subtask invoked by ‘edit bk details’
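The matrix records, for each action, which objects it Creates, Reads, Updates or Deletes. Because the column alignment in this plain-text rendering is ambiguous, the sketch below shows the idea rather than the exact matrix (entries are illustrative, and the consistency check is an assumption, not a MUSE step): one simple use of such a matrix is confirming that every object some action creates can also be deleted.

```python
# Sketch of an action-object (CRUD) matrix; entries are illustrative,
# not a faithful transcription of the matrix above.
matrix = {
    "add bk":       {"Bookmark": "C", "Bk list": "U", "Bk menu": "U", "Bk window": "U"},
    "delete bk":    {"Bookmark": "D", "Bk list": "U", "Bk window": "U"},
    "rename bk":    {"Bookmark": "U", "Bk window": "U"},
    "open bk page": {"Bookmark": "R", "Bk menu": "R"},
}

def created_but_not_deletable(m):
    """Objects some action creates but no action deletes."""
    created = {o for row in m.values() for o, op in row.items() if op == "C"}
    deleted = {o for row in m.values() for o, op in row.items() if op == "D"}
    return created - deleted

print(created_but_not_deletable(matrix))  # empty set: Bookmark can be deleted
```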

MUSE(SE) Phase 2: CTM(y) stage

The CTM(y) is reproduced in full on the following page.

  1. Decompose task

Notice that the level of decomposition of the CTM(y) is slightly lower than either of the GTMs; in the present example, the ‘Edit Bookmarks’ task has been described in slightly more detail.

1a             Synthesis:            Obtain SoR, DoDD(y), and SUN
Compare GTM(x) and GTM(y)
Extend GTM(y)
Incorporate parts of GTM(x)

The CTM(y) is composed from GTM(y) and GTM(x). In this case, the CTM has taken most of its structure from the GTM(x), because the requirements were not specific enough to enable a detailed GTM(y) to be constructed. Some low-level detail of the GTM(y) has been incorporated, to ensure that the target system will meet the requirements. Folder management and the use of separators have been carried over from the GTM(x), as they were not present in the GTM(y), but were a useful feature of the existing system. This would need to be negotiated with the ‘owner’ of the requirements. The extant systems analysis revealed that renaming bookmarks was problematic for users, and the CTM(y) has added an alternative method of renaming items which is compatible with the Finder application studied during the analysis of existing systems and present in GTM(x).

Composite Task Model

 

 

 

 

 

 

 

(Photocopy of the CTM, printed at about 30% scale, appears on this page.)

1b             Record in table:
Design rationale
Design decisions

CTM Table:

 

Name Description Design Comments
Acquire bookmarks body Ported from GTM(x) Required by SUN(y), avoids new bookmarks appearing at end of long list, or having an inappropriate name
Manage bookmarks Ported from GTM(x) Required as a result of adding acquire bookmarks body. (Disp. bookmarks and manage menu structure have moved down).
Assess changes Ported from GTM(x) Users must be able to assess the aspects they will change
Decide to make changes Ported from GTM(x) Structure taken from GTM(x)
Add to folder Ported from GTM(x) Required as consequence of SoR, but not in GTM(y)
Create alias From GTM(y) Uses structure of adding new bookmark from GTM(x)
Move item Adapted from GTM(x) Detail from GTM(x)
Add separator Adapted from GTM(x), but in GTM(y) anyway Moving separator is new
Edit bookmark The user changes the attributes of the bookmark, or creates a new one. New bookmark from GTM(x), edit structure in GTM(y).
Rename item From GTM(x); an alternative way of renaming bookmarks consistent with Finder (as prompted by heuristics); also consistent with folder renaming From GTM(x)
Delete Item Decompose from GTM(y) Needs to be consistent with metaphor (heuristics)
New bookmark Can add new bookmark not necessary for page currently active in the browser Two ways: Menu and accelerator keys. Menu gives Untitled bookmark (then as ‘edit bkmk’), accel. key gives bkmk for most recently active window, which can then be edited. Accel key disabled if no browser open.

 

Differences between GTM(y) and CTM(y)

  • CTM features acquire bkmk procedures, ported from MSIE
  • GTM assesses structure (& adds separators/folders) prior to sorting bookmarks. These structure related tasks are in with bookmark editing in CTM(y).
  • Add new bookmark is separated from edit bookmark in GTM, but as procedure is same the CTM approach of using same procedures for both appears viable

Phase 2: SUTaM stage

  1. Decompose the CTM(y): for each node of the on-line task, designate it as an H or C node. Decompose the off-line tasks if required, after constructing the UTM from the marked-up areas of the STM.

At this point, the design is specified in more detail, and as a consequence the diagram will become significantly larger. Compare the following extract from the CTM:

CTM+STM

 

1a             Consider whether decompositions of design comply with ‘design             principles’ (feedback, etc.)

This is largely a matter of stepping through the diagram and checking, for example, that every time the user does something, the device does something to provide feedback.
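This feedback principle can itself be stated mechanically: walk an H/C-labelled action sequence and flag any user (H) action that is not followed by a device (C) action. The encoding below is an assumption made for illustration, not a MUSE notation, and the example sequence is hypothetical.

```python
# Sketch: check that every user (H) action is followed by a device (C)
# action providing feedback. The sequence encoding is illustrative only.

def missing_feedback(sequence):
    """Return the H actions not immediately followed by a C action."""
    problems = []
    for i, (actor, action) in enumerate(sequence):
        if actor == "H":
            nxt = sequence[i + 1] if i + 1 < len(sequence) else None
            if nxt is None or nxt[0] != "C":
                problems.append(action)
    return problems

seq = [
    ("H", "select 'New Folder'"),
    ("C", "display untitled folder, name highlighted"),
    ("H", "type folder name"),   # no device feedback step follows: flagged
    ("H", "press return"),
    ("C", "display renamed folder"),
]
print(missing_feedback(seq))  # → ["type folder name"]
```

In practice this check was done by eye on the diagram; the sketch just shows that the rule is precise enough to automate.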

1b             Ensure that STM contains all relevant domain objects and attributes             by reference to SUN and DoDD(y). Check SUN for user problems with             existing system, and ensure they are not likely to recur.

 

Once again, a matter of stepping through the diagram to track down the items in the DoDD(y), ensuring that none have been forgotten. In our example, all the items from the DoDD(y) were located in the SUN.

1c             Complete STM table

Notice how the heuristics have been used to provide rationale for design decisions.

 

Name Description Design Comments
Decide name/location User gets shown the default name and location If all this is on one screen, it yields closure. Also, much faster than the current system, where location/name would have to be effected later in the bookmark window.

• Prevents errors
• User in control
• Preserves context
• Salience

Check bookmark window open Have to look and see, and open it if it’s not Must open the bookmark window to manipulate bookmarks – can’t do it from menu

• Directness

Create alias Makes a pointer to the original, which may then be placed Using the Finder metaphor for this, so it’s • Consistent (although not with ‘add bookmark’) • Reuses knowledge of the Finder.
Add to folder Behaves just like Finder • Consistent
• Reuse of knowledge
• Non-surprising
Move Items
Sort Items
Same as Finder as above
Separator None in Finder, but they are in menus Behaves as bookmark or folder in Finder, following metaphor (though of course you can’t ‘get info’ or ‘Edit’ them)
New bookmark In with Edit Bookmark, as the URL is specified with the name – this is consistent, as a blank ‘add’ screen cannot be used when there is no URL • Prevent errors
• Reduce number of actions
• Yields closure
Rename item Folders and bookmarks are same here – refer to later procedures for spec of this
Delete item Direct manipulation operates as Finder

Phase 3 of MUSE(SE): ITM(y) stage

  1. Select nodes of the STM(y) for decomposition (H or H-C leaves)

The STM(y) can be marked up using a highlighter pen to identify the leaves for decomposition, as shown in the following diagram. One colour of highlighter was used to mark ‘Active/control’ actions, and a different pen was used to mark ‘Passive/read display’ actions.

STMy

  1. For each H-C leaf, if standard behaviour: study the ‘standard’ package; analyse behaviour; document behaviour; rename items in ITM & DoDD(y)

The following extract from the ITM illustrates how H-C leaves are decomposed to ensure that the standard behaviour is specified.

Assess Position

3.1             Obtain DoDD(y)

3.2             For each H leaf : (Decomposition)

The following extract from the ITM should be compared with the STM extract to illustrate the process of decomposing the STM into the ITM.

Add Separator

  1. Note important features for later

Hand-written notes were kept of each significant decision made whilst the ITM was produced. These were filed with the design products using plastic wallets to keep them together. A table was produced which described the subtasks identified as the ITM was decomposed (this was based on the ITM table, but the ‘Description’ heading was amended to read ‘Subtask’).

  1. Document in diagram and table

The ITM diagram became quite extensive, as did the table. As with the other tables produced during the design process, the ITM table was hand-written on a photocopied proforma. The ITM table was produced in three sections: comments about the H-C leaves, comments about the ‘Active’ H leaves, and comments about the ‘Passive’ H leaves. The following table presents extracts from each section to indicate the type of comments made. Notice the cross-references to pages of Inside Macintosh, the programmer’s reference containing a version of the Apple Macintosh styleguide.

ITM Table:

Name Description Design Comments
H-C leaf decomposition
Drag item to folder H moves cursor to item, presses mouse button & moves cursor to new location, then releases. If illegal move, the item’s ‘ghost’ springs back to the original location. Standard functionality
Drag item (twice, for items and separators) As above Standard functionality
Select bookmark H double-clicks on selected bookmark or clicks once to highlight then uses menu to open Standard functionality
Activate menu H moves cursor to menu title on bar and presses mouse button.   C displays menu Standard functionality
Close bookmark window & Close bookmarks H ensures window active, either click box on top left of window or press Apple-W or selects close from menu Standard functionality
H leaves: Active leaves
Add bookmark Creates new bookmark Apple guide: Inside Mac [rel] menus or button [I-51]

C: Create bkmk attrs: name, URL, descr. + store info

Naming body Allows user to accept default name or change to new name Inside Macintosh [I-67].
modal dialog box, as choice must be made before bk can be stored (shows other bks to ensure names).
Location body As above As above
Open bkmks Opens bk window Inside Mac [I-51] menu or button
Open window Same as open bkmks
(etc.)
H leaves: Passive leaves
Inspect page (Whilst browsing) The page to be bookmarked
Inspect name location Default name and location for new bookmark Like std dlg?
Inspect bookmarks Menu or window (Whichever is open, but in window need attributes visible).
Inspect location Look at default loc which is displayed In a mode here; have to click OK
(etc.)
  1. Iterate with: CTM(y) (task features); STM(y) (allocation of function); UTM(y) (off-line tasks). Tell SE stream about iterations.

In the present example, the iteration consisted of a certain amount of renaming of items in earlier products to maintain consistency and traceability.

  1. Demarcate screen boundaries

The following extract from the ITM shows how screen boundaries are marked on the ITM(y).

Screen Boundaries

In the example here, rough sketches of screens were drawn whilst the ITM was being produced as an aid to decision making. The following extracts from the notes show the reasoning behind one decision concerning screen allocation:

Screen Allocation

The design rationale was noted so that the decision could be justified later on:

 

This has 1 window for bk window and each bk. However, only one bk details can be opened at once. So to compare 2 bk descriptions + URLs etc., would need 2+ windows available. This could get confusing.

So, stick with single instances, as above

Create BkMark

 

Phase 3 of MUSE(SE): Display Design stage

  1. Define screen layouts

1.1. For each screen boundary, prepare a PSL(y):

Pictorial screen layouts were sketched by hand (as for the examples in the ITM stage). Once the design appeared satisfactory, more realistic screen layout diagrams were produced either by cutting and pasting parts of screenshots of standard applications using Adobe PhotoShop, or by using a user interface prototyping tool (in this case, HyperCard) to produce the dialogs and then capturing them as screenshots.

The following PSL was produced using HyperCard:

BKMarkDetails

  1. Specify IM(y)s

No Interface Models were produced, as there were no bespoke items: all of the novel items specified had been based on ‘Finder’, which is effectively a part of MacOS, and no items that would merit production of an IM(y) (such as check buttons toggling the dimming of other controls, or groups of radio buttons) had been specified. Behaviours of menus were described in the ITM, supported by text, and were entirely standard.

  1. Prepare Dictionary of Screen Objects

Extract from the dictionary of screen objects

 

Screen object: Screen 2 – Dialog box
Description: As for ‘Save file’ in standard applications
Design attributes: Has scrolling window to navigate folder structures, a box containing the default name (highlighted), and OK and Cancel buttons. Has folder title at top, as standard.

Screen object: Screen 3 – Plain scrolling window
Description: As Finder window [Resource name = DocumentProc Window]
Design attributes: Resizing handles, scrollbars, etc., as standard Finder window

Menus:

File – as before

Edit – as before, but loses ‘Delete Bookmark’

Bookmark – as before

Item:
Add Bookmark
Add Folder
Add Separator
Make Alias
————————
Delete Item
————————
Open bk details…
Open bked page
Sort Bookmarks…
 

  1. Store items together

All of the products comprising the user interface specification were put in the ring-bound file in plastic wallets behind a divider marked ‘Display Design’.

  1. Deal with window management and errors

(A certain amount of iteration with earlier products resulted in some potential errors being designed out)

5.1 Window management and errors:

Dialog and Error Message Table:

‘EM’ refers to error messages; ‘W’ refers to the window or dialog where the message is liable to appear.

 

Message number Message
EM1 [W3] To delete an item, select item(s) then press delete key or select ‘Delete Item’ in Item menu
EM2 [W2,3 & 4] Bookmarks cannot have a blank name
EM3 [W2, 3 & 4] Bookmark names must be shorter than [x] characters
EM4 (dialog) [W3] Sort items will sort all items in list if no items are selected. This action is irreversible if you then change the item order
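The name-related messages EM2 and EM3 amount to a validation rule on bookmark names. A minimal sketch of that rule follows; the maximum length appears as [x] in the table and is unspecified, so the limit here is a placeholder parameter, and the function name is hypothetical.

```python
# Sketch: the validation rule behind error messages EM2 and EM3.
# max_name_len stands in for the unspecified limit [x] in the table.

def validate_bookmark_name(name, max_name_len):
    """Return the error message to show, or None if the name is acceptable."""
    if name.strip() == "":
        return "EM2: Bookmarks cannot have a blank name"
    if len(name) >= max_name_len:
        return f"EM3: Bookmark names must be shorter than {max_name_len} characters"
    return None

print(validate_bookmark_name("", 64))
print(validate_bookmark_name("MSc Homepage", 64))  # → None
```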

 

6 Produce the DITaSAD

The following extract from the DITaSAD shows how screen transitions and error message presentations are dealt with:

DITaSAD

Phase 3 of MUSE(SE): Design Evaluation stage

  1. Analytic evaluation:

Draw general conclusions: Practical

Meets usability requirements (check SUN, and complete the final column)

Syntax: simple, objective, consistent?

Semantics: computer imposed on UI? Good relationship with task?

Evaluate specifications: all states reachable; feedback; input sequences all catered for; default states

Functional requirements: identify device behaviours; check if UI function

 

The design was reviewed to ensure that it met the above criteria; refer back to the SUN for the notes in the final column, which were completed at this point.

 

  1. Empirical evaluation

Prototype GUI: – define objectives

– choose tool

– build prototype

 

The user interface was prototyped by animating the PSLs: pasting them into HyperCard and scripting them with hidden buttons. Due to the limited speed of the available computers, this prototype ran too slowly to make its use in user testing viable, but it proved valuable for allowing the designer to evaluate the design. A second prototype was made; this one was paper-based and took the form of a pair of booklets containing ‘screen shots’ of the target system in all the states required for the evaluation (this involved having separate screen shots for folders open and closed, and so on). One of the booklets was plain apart from having the pages clearly numbered. The other booklet was annotated with the page numbers of the screen shots that should be presented in response to user actions, or other device behaviours such as beeping in response to errors (in which case the investigator would say ‘beep’!). The following diagram is an extract from the annotated booklet.

Booklet

 

 

The user was instructed to indicate where mouseclicks would be made by pointing at the page with a pen (representing the mouse pointer) and saying something like ‘I’ll click on that menu there’. The evaluator would refer to their (annotated) copy to find out which page should be presented next and place the corresponding (unannotated) page in front of the user (obscuring or removing the other ‘screens’ already there, as appropriate). The user would then indicate their next response, such as ‘I’ll select Sort Items’, and so on.

The user required a small amount of training in the technique at the start of testing, but overall the paper prototype was found to work well and the short delays whilst the experimenter found the next page were considered acceptable.
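The annotated booklet is, in effect, a state-transition table: given the page currently in front of the user and the action they indicate, it names the next page to present (or a ‘beep’ for an error). A hypothetical fragment, sketched purely to show the structure — the page numbers and actions below are invented, not taken from the 23-page prototype:

```python
# Sketch: the annotated booklet as a (page, user action) -> response table.
# Page numbers and actions are hypothetical, for illustration only.
booklet = {
    (1, "open bookmark window"): 3,
    (3, "select 'Sort Items'"): 4,
    (3, "press delete with nothing selected"): "beep",  # investigator says 'beep'
    (4, "click OK"): 3,
}

def next_page(current, action):
    """Look up the evaluator's response; None means the action was unanticipated."""
    return booklet.get((current, action))

print(next_page(3, "select 'Sort Items'"))  # → 4
```

Unanticipated actions (a `None` result) correspond to the situations where the evaluator has no annotated response, a useful signal during piloting that a screen state is missing from the booklet.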

 

– investigate prototype with design team  and users:

user training

scenario briefing

data collection (PLUME)

data analysis

report results

 

The designer experimented with the HyperCard prototype, and used the paper prototype with the confederate acting as the system, to ensure that all the screens that would be required had been specified. This also allowed the confederate to practise the technique; only a small amount of practice was required before the confederate felt confident enough to attempt trials with a real user. The final paper prototype required 23 interlinked pages to depict the behaviour of the system in the various states required by the task to be used for testing. See below for the notes taken on the PLUME categories.

 

Design evaluation: – select approach

expert heuristic evaluation

user testing / observation

user survey

– identify participants

– decide session protocol

– pilot evaluation

 

The task used for initial testing during the ESA stage was reused at this point; although this would not be recommended in most cases, it was considered that the functionality of the bookmarks window was sufficiently limited that a task designed to test the items of interest would of necessity be very similar to the original task. A pilot evaluation was conducted using one of the design team, who behaved as a naive user for the purposes of the trial.

 

The user selected for the trial had not been involved in the initial testing, and was chosen because although they had some experience of using NetScape, their experience of using the bookmark features was very limited because they had not used the browser for long enough to accumulate a list that required maintenance. See the following extract from hand-written notes made at the time:

 

 

Notes on Evaluation (XX)

 

Subject has experience with using NetScape 2.0 on Macintosh, however, limited use of bookmarks. Uses add bookmark and the bookmark menu, but rarely uses the bookmark window or sorts bookmarks.

 

 

Collect data: – real-time note taking

– video recording

– thinking aloud

– heuristic evaluation

 

The evaluation was conducted in the usability laboratory; the room was located in a quiet location in the building where the task could be conducted without distractions from nearby activities, so that the user’s comments could be heard and recorded clearly for later analysis. More importantly, the room was equipped with high-quality video and audio recording equipment and an overhead camera; this allowed the designer to review the tapes following the session, and meant that they did not need to have such a detailed view of the table top. A colleague of the designer acted as the ‘system’ by managing the annotated booklet and interacting with the user, whilst the designer acted as observer and made notes as the task progressed. The video tapes of the session were reviewed afterwards; some example images showing the view from the camera are shown below.

[Example video stills: view from the overhead camera]

 

 

 

 

 

Analyse data: user testing: time to complete

number and nature of errors

user problems

user comments

user survey statistics

 

The video was reviewed, and the following notes were made:

 

 

No probs adding bookmark.

Goes to home with toolbar button

Uses bkmk menu to go to original page again.

 

Had difficulty finding bkmk window – tried bkmk menu originally

Then sees window menu and opens bkmk window

Evidently unfamiliar with adding folders

 

Tries file menu

Tries bkmk menu

Tries Item menu – moves to insert folder

 

types UCL pages (no hesitation)

 

Returns to item menu

Insert separator – it appears

Clicks and drags to location specified

Rubber bands [original page] and drags to ‘UCL pages’ folder

 

 

Change URL:

tries ‘edit’ menu, then goes to item menu – edit bookmark

– bkmk details opens

Retypes details

pressed OK

Delete:

Goes to item

selects delete item

(It disappears)

Thought Item menu was ambiguous; tried Edit sometimes instead

 

Other ways of doing things

Move – might try menu

Thought that double clicking folder might open it

Not surprised if dble clicking bkmk would open it, but thought it might open page (though possibly because NetScape already does this)

Thinks of opening bkmk as opening the Page, rather than bkmk details, but not surprised by this.

Thought Edit Bookmark seemed like an OK name, however.

 

 

Impact analysis analyse problems wrt usability criteria (SUN/PLUME)

rank order problems

generate design requirements

estimate resource requirements

review

 

The problems uncovered by the evaluation were analysed and noted:

 

 

Outstanding problems following evaluation

 

  1. ‘?’ issue – still confusing
  2. No accelerators for add bkmk in bkmk window
  3. Delete folder – problem not addressed, as not very important. Possibly address next time
  4. Didn’t have last visited problem in prototype. Should have been in bkmk window (added during evaluation)

 

  1. Agree redesign

Assess problem (prioritise according to severity)

Agree action – solve next cycle

– solve now

– no action

 

The problems were assessed and rank ordered, and the decisions concerning each were noted (in the event, the decisions were not executed; the method application was performed as an exercise):

 

 

Rank ordered problems

 

1st                  Problem 2                  Solve now (add accelerator)

2nd                  Problem 4                  Solve now – new prototype

3rd                  Problem 1                  Solve now – new prototype

4th                  Problem 3                  Solve next time

 

The iteration heuristics were used to determine the extent of the iterations that would be needed to solve each problem:

 

 

Iterations required (using heuristics)

 

Rank | Problem | Iteration required

1st | 2 | Heuristic 3b – check CTM onwards against SUN

2nd | 4 | Heuristic 3a – check CTM against SUN

3rd | 1 | Heuristic 3a – check CTM against SUN

4th | 3 | Heuristic 2b – check SUN and DoDD(y)
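For record-keeping, the ranking and heuristic decisions above could be captured in a small script. The sketch below is purely illustrative – the MUSE method itself prescribes no software, and the data structures and variable names are hypothetical; the ranks, decisions and heuristic labels are those recorded in the two tables above.

```python
# Hypothetical illustration only: MUSE prescribes paper records, not code.
# Problems, ranks, decisions and heuristics are those noted above.
problems = {
    1: "'?' issue - still confusing",
    2: "No accelerators for add bookmark in bookmark window",
    3: "Delete folder - not addressed (low importance)",
    4: "'Last visited' not addressed; should be in bookmark window",
}

# Each entry: (rank, problem id, decision, iteration heuristic)
plan = [
    (1, 2, "Solve now (add accelerator)", "3b: check CTM onwards against SUN"),
    (2, 4, "Solve now - new prototype",   "3a: check CTM against SUN"),
    (3, 1, "Solve now - new prototype",   "3a: check CTM against SUN"),
    (4, 3, "Solve next time",             "2b: check SUN and DoDD(y)"),
]

# Print the agreed redesign plan in rank order.
for rank, pid, decision, heuristic in plan:
    print(f"{rank}: problem {pid} ({problems[pid]}) -> {decision}; heuristic {heuristic}")
```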

 

Finally, the PLUME categories were revisited to check that the design had met the objectives set at the end of extant systems analysis.

 

 

PLUME categories revisited

Productivity

Add bookmark involves a location screen, which is an extra procedure. However, this obviates the need to change the name and location later. Also, the default bookmark location is the top of the menus, so less cursor movement is required to use the most recent bookmarks.

Learnability

Although an initial search for the correct menu was needed, item names were easily understood once viewed.

User Satisfaction

Grouping is easier, as it is done while the page is in the browser. The SUN noted that the previous design required access to the description to aid sorting. However, bookmarks are now sorted as they are made, so this should be easier.

Memorability

Consistent – yes, can identify bookmarks from title more easily because they are named when the page is active.

Errors

No non-recoverable errors, as can undo delete warning before sorting.

 

 

 

 

The Ravden & Johnson Evaluation Checklist:

Ravden, S. and Johnson, G. (1989). Evaluating Usability of Human-Computer Interfaces: A Practical Method. Chichester: Ellis Horwood.

 

INSTRUCTIONS FOR COMPLETING THE CHECKLIST

 

Sections 1 to 9: Criterion-based questions

(1) Each of these sections is based on a different criterion, or ‘goal’, which a well-designed user interface should aim to meet. The criterion is described at the beginning of the section, and consists of:

– a heading (e.g. ‘Visual Clarity’), followed by

– a statement (e.g. ‘information displayed on the screen should be clear, well-organized, unambiguous and easy to read’).

(2) A number of checklist questions follow, and these aim to assess whether the user interface meets the criterion.

For example, in section 1 (‘Visual clarity’), the questions check whether information which is displayed on the screen is clear, well-organized, unambiguous and easy to read.

(3) To the right of the checklist question, you will see four columns, labelled ‘Always’, ‘Most of the time’, ‘Some of the time’, and ‘Never’.

For each checklist question, please tick the column which best describes your answer to the question.

(4) Then write any comments which you feel you could make when answering a checklist question in the column labelled: ‘Comments’.

For example, when answering question 12 in section 1: ‘Is information on the screen easy to see and read?’, you may tick the column ‘some of the time’, and you may mention particular screens where information was very difficult to see and read, in the ‘Comments’ column.

(5) If you feel that a checklist question is not relevant to the interface which you are evaluating (e.g. questions relating to colour if the system does not use colour, questions referring to printouts if there is no printer attached), then please write ‘Not Applicable’ or ‘N/A’ in the ‘Comments’ column beside that question, and move on to the next question.

(6) After the checklist questions in each section, you are asked for: ‘…any comments (good or bad)…’ which you would like to add concerning the issues in that section.

For example, you may wish to describe a particular problem, or make a particular point which you did not have room to make beside the checklist question, or you may feel the checklist questions have not covered a particular aspect of the interface which you feel should be mentioned.

(7) At the end of each section, you will see a rating scale, ranging from ‘Very satisfactory’ to ‘Very unsatisfactory’. Please tick the box which best describes the way you feel about the user interface in terms of the issues in that section.

 

Section 10: system usability problems

(1) The questions in this section ask you about specific problems which you experienced when carrying out the evaluation task(s).

(2) To the right of each question you will see three columns labelled: ‘No problems’, ‘Minor problems’ and ‘Major problems’.

For each question, please tick the column which is most appropriate.

(3) As in Sections 1 to 9, please write any particular comments, descriptions of problems, and so on, in the column labelled ‘Comments’, beside each question.

(4) If there are any questions you feel are not relevant to the interface which you are evaluating, then please write: ‘Not applicable’ or ‘N/A’ in the ‘Comments’ column for that question.

 

Section 11: general questions on system usability

This section asks you to give your views on the interface which you have been evaluating. Please feel free to write as much as you like in answer to each question.
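When several completed checklists are collated, the ticks in Sections 1 to 9 must be tallied section by section. The following is a minimal, hypothetical sketch of such a tally – Ravden and Johnson prescribe paper forms, not software – and the function name is an assumption; the column labels are those given in the instructions above.

```python
# Hypothetical sketch: tallying one evaluator's answers for a checklist section.
from collections import Counter

# Column labels from the checklist instructions (plus 'N/A' for non-relevant questions).
COLUMNS = ("Always", "Most of the time", "Some of the time", "Never", "N/A")

def summarise_section(answers):
    """answers: mapping of question number -> one of COLUMNS; returns tick counts."""
    for a in answers.values():
        if a not in COLUMNS:
            raise ValueError(f"unknown column label: {a!r}")
    return Counter(answers.values())

# e.g. three answers from Section 1 ('Visual clarity')
summary = summarise_section({1: "Always", 2: "Some of the time", 12: "Some of the time"})
print(summary)  # counts: 2 x 'Some of the time', 1 x 'Always'
```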

 

SECTION 1: VISUAL CLARITY

Information displayed on the screen should be clear, well-organized, unambiguous and easy to read.

 

1 Is each screen clearly identified with an informative title or description?
2 Is important information highlighted on the screen? (e.g. cursor position, instructions, errors)
3 When the user enters information on the screen, is it clear:

(a) where the information should be entered?

(b) in what format it should be entered?
4 Where the user overtypes information on the screen, does the system clear the previous information, so it does not get confused with the updated input?
5 Does the information appear to be organised logically on the screen?
6 Are different types of information clearly separated from each other on the screen? (e.g. instructions, control options, data displays)
7 Where a large amount of information is displayed on one screen, is it clearly separated into sections on the screen?
8 Are columns of information clearly aligned on the screen? (e.g. columns of alphanumerics left-justified, columns of integers right-justified)
9 Are bright or light colours displayed on a dark background, and vice-versa?
10 Does the use of colour help to make the displays clear?
11 Where colour is used, will all aspects of the display be easy to see if used on a monochrome or low-resolution screen, or if the user is colour-blind?
12 Is the information on the screen easy to see and read?
13 Do screens appear uncluttered?
14 Are schematic and pictorial displays (e.g. figures and diagrams) clearly drawn and annotated?
15 Is it easy to find the required information on a screen?

 

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of visual clarity?

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

SECTION 2: CONSISTENCY

 

The way the system looks and works should be consistent at all times.

 

 

1 Are different colours used consistently throughout the system? (e.g. errors always highlighted in the same colour)
2 Are abbreviations, acronyms, codes and other alphanumeric information used consistently throughout the system?
3 Are icons, symbols, graphical representations and other pictorial information used consistently throughout the system?
4 Is the same type of information (e.g. instructions, menus, messages, titles) displayed:

(a) in the same location on the screen?

(b) in the same layout?
5 Does the cursor appear in the same initial position on displays of a similar type?
6 Is the same item of information displayed in the same format, whenever it appears?
7 Is the format in which the user should enter particular types of information on the screen consistent throughout the system?
8 Is the method of entering information consistent throughout the system?
9 Is the action required to move the cursor around the screen consistent throughout the system?
10 Is the method of selecting options (e.g. from a menu) consistent throughout the system?
11 Where a keyboard is used, are the same keys used for the same functions throughout the system?
12 Are there similar standard procedures for carrying out similar, related operations? (i.e. updating and deleting information, starting and finishing transactions)
13 Is the way the system responds to a particular user action consistent at all times?

 

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of consistency?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

SECTION 3: COMPATIBILITY

 

The way the system looks and works should be compatible with user conventions and expectations.

 

 

 

1 Are colours assigned according to conventional associations where these are important? (e.g. red = alarm, stop)
2 Where abbreviations, acronyms, codes and other alphanumeric information are displayed:

(a) are they easy to recognize and understand?

(b) do they follow conventions where these exist?
3 Where icons, symbols, graphical representations and other pictorial information are displayed:

(a) are they easy to recognise and understand?

(b) do they follow conventions where these exist?
4 Where jargon and terminology is used within the system, is it familiar to the user?
5 Are established conventions followed for the format in which particular types of information are displayed? (e.g. layout of dates and telephone numbers)
6 Is information presented and analysed in the units with which the users normally work?   (e.g. batches, kilos, dollars)
7 Is the format of displayed information compatible with the form in which it is entered into the system?
8 Is the format and sequence in which information is printed compatible with the way it is displayed on the screen?
9 Where the user makes an input movement in a particular direction (e.g. using a direction key, mouse, or joystick), is the corresponding movement on the screen in the same direction?

 

 

 

 

 

 

10 Are control systems compatible with those used in other systems with which the user may need to interact?
11 Is information presented in a way which fits the user’s view of the task?
12 Are graphical displays compatible with the user’s view of what they are representing?
13 Does the organisation and structure of the system fit the user’s view of the task?
14 Does the sequence of activities required to complete a task follow what the user would expect?
15 Does the system work the way the user thinks it should work?

 

 

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

  1. Overall, how would you rate the system in terms of compatibility?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

SECTION 4: INFORMATIVE FEEDBACK

 

Users should be given clear, informative feedback on where they are in the system, what actions they have taken, whether these actions have been successful and what actions should be taken next.

 

1 Are instructions and messages displayed by the system concise and positive?
2 Are messages displayed by the system relevant?
3 Do instructions and prompts clearly indicate what to do?
4 Is it clear what actions the user can take at any stage?
5 Is it clear what the user needs to do in order to take a particular action?   (e.g. which options to select, which keys to press)
6 When the user enters information on the screen, is it made clear what this information should be?
7 Is it made clear what shortcuts, if any, are possible? (e.g. abbreviations, hidden commands, type ahead)
8 Is it made clear what changes occur on the screen as a result of a user action?
9 Is there always an appropriate system response to a user input or action?
10 Are status messages (e.g. indicating what the system is doing, or has just done):

(a) informative?

(b) accurate?
11 Does the system clearly inform the user when it completes a requested action (successfully or unsuccessfully)?
12 Does the system promptly inform the user of any delay, making it clear that the user’s input or request is being processed?
13 Do error messages explain clearly:

(a) where the errors are?

(b) what the errors are?
(c) why they have occurred?
14 Is it clear to the user what should be done to correct an error?
15 Where there are several modes of operation, does the system clearly indicate which mode the user is currently in? (e.g. update, enquiry, simulation)

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of informative feedback?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

SECTION 5: EXPLICITNESS

 

The way the system works and is structured should be clear to the user.

 

1 Is it clear what stage the system has reached in a task?
2 Is it clear what the user needs to do in order to complete a task?
3 Where the user is presented with a list of options (e.g. in a menu), is it clear what each option means?
4 Is it clear what part of the system the user is in?
5 Is it clear what the different parts of the system do?
6 Is it clear how, where and why changes in one part of the system affect other parts of the system?
7 Is it clear why the system is organised and structured the way it is?
8 Is it clear why a sequence of screens is structured the way it is?
9 Is the structure of the system obvious to the user?
10 Is the system well-organised from the user’s point of view?
11 Where an interface metaphor is used (e.g. the desk-top metaphor in office applications), is this made explicit?
12 Where a metaphor is employed, and is only applicable to certain parts of the system, is this made explicit?
13 In general, is it clear what the system is doing?

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of explicitness?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

SECTION 6: APPROPRIATE FUNCTIONALITY

 

The system should meet the needs and requirements of users when carrying out tasks.

 

1 Is the input device available to the user (e.g. pointing device, keyboard, joystick) appropriate for the tasks to be carried out?
2 Is the way in which information is presented appropriate for the tasks?
3 Does each screen contain all the information which the user feels is relevant to the task?
4 Are users provided with all the options which they feel are necessary at any particular stage in a task?
5 Can users access all the information which they feel they need for their current task?
6 Does the system allow users to do what they feel is necessary in order to carry out a task?
7 Is system feedback appropriate for the task?
8 Do the contents of help and tutorial facilities make use of realistic task data and problems?
9 Is task-specific jargon and terminology defined at an early stage in the task?
10 Where interface metaphors are used, are they relevant to the tasks carried out using the system?
11 Where task sequences are particularly long, are they broken into appropriate sub-sequences? (e.g. separating a lengthy editing procedure into its constituent parts)

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

  1. Overall, how would you rate the system in terms of appropriate functionality?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

SECTION 7: FLEXIBILITY AND CONTROL

The interface should be sufficiently flexible in structure, in the way information is presented and in terms of what the user can do, to suit the needs and requirements of all users, and to allow them to feel in control of the system.

 

1 Is there an easy way for the user to ‘undo’ an action, and step back to a previous stage or screen? (e.g. if the user makes a wrong choice, or does something unintended)
2 Where the user can ‘undo’, is it possible to ‘redo’ (i.e. to reverse this action)?
3 Are shortcuts available when required? (e.g. to bypass a sequence of activities or screens)
4 Do users have control over the order in which they request information, or carry out a series of activities?
5 Can the user look through a series of screens in either direction?
6 Can the user access a particular screen in a sequence of screens directly?   (e.g. where a list or table covers several screens)
7 In menu-based systems, is it easy to return to the main menu from any part of the system?
8 Can the user move to different parts of the system as required?
9 Is the user able to finish entering information (e.g. when typing in a list or table of information) before the system responds? (e.g. by updating the screen)

 

 

 

10 Does the system prefill required information on the screen, where possible? (e.g. to save the user having to enter the same information several times)
11 Can the user choose whether to enter information manually or to let the computer generate information automatically? (e.g. when there are defaults)
12 Can the user override computer-generated (e.g. default) information, if appropriate?
13 Can the user choose the rate at which information is presented?
14 Can the user choose how to name and organize information which may need to be recalled at a later stage? (e.g. files, directories)
15 Can users tailor certain aspects of the system for their own preferences or needs? (e.g. colours, parameters)

 

 

 

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of flexibility and control?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

 

 

SECTION 8: ERROR PREVENTION AND CORRECTION

 

The system should be designed to minimize the possibility of user error, with inbuilt facilities for detecting and handling those which do occur; users should be able to check their inputs and to correct errors, or potential error situations before the input is processed.

 

1 Does the system validate user inputs before processing, wherever possible?
2 Does the system clearly and promptly inform the user when it detects an error?
3 Does the system inform the user when the amount of information entered exceeds the available space? (e.g. trying to key five digits into a four-digit field)
4 Are users able to check what they have entered before it is processed?
5 Is there some form of cancel (or ‘undo’) key for the user to reverse an error situation?
6 Is it easy for the user to correct errors?
7 Does the system ensure that the user corrects all detected errors before the input is processed?
8 Can the user try out possible actions (e.g. using a simulation facility) without the system processing the input and causing problems?
9 Is the system protected against common trivial errors?
10 Does the system ensure that the user double-checks any requested actions which may be catastrophic if requested unintentionally? (e.g. large-scale deletion)
11 Is the system protected against possible knock-on effects of changes in one part of the system?
12 Does the system prevent users from taking actions which they are not authorized to take? (e.g. by requiring passwords)
13 In general, is the system free from errors and malfunctions?
14 When system errors occur, can the user access all necessary diagnostic information to resolve the problem? (e.g. where and what the fault is, what is required to resolve it)

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of error prevention and correction?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

 

 

SECTION 9: USER GUIDANCE AND SUPPORT

 

Informative, easy-to-use and relevant guidance and support should be provided, both on the computer (via an on-line help facility) and in hard-copy document form, to help the user understand and use the system.

 

1 If there is some form of help facility (or guidance) on the computer to help the user when using the system then:

(a) Can the user request this easily from any point in the system?

(b) Is it clear how to get in and out of the help facility?
(c) Is the help information presented clearly, without interfering with the user’s current activity?
(d) When the user requests help, does the system clearly explain the possible actions which can be taken, in the context of what the user is currently doing?
(e)   When using the help facility, can the user find relevant information directly, without having to look through unnecessary information?
(f) Does the help facility allow the user to browse through information about other parts of the system?
2 If there is some sort of hard-copy guide to the system (e.g. user guide or manual) then:

(a) Does this provide an in-depth, comprehensive description, covering all aspects of the system?

(b) Is it easy to find the required section in the hard-copy documentation?
3 Is the organization of all forms of user guidance and support related to the tasks which the user can carry out?
4 Do user guidance and support facilities adequately explain both user and system errors, and how these should be corrected?
5 Are all forms of user guidance and support maintained up-to-date?

 

  1. Are there any comments (good or bad) you wish to add regarding the above issues?

 

 

 

 

 

 

 

 

 

 

 

 

 

  1. Overall, how would you rate the system in terms of user guidance and support?

(Please tick appropriate box below.)

 

Very satisfactory / Moderately satisfactory / Neutral / Moderately unsatisfactory / Very unsatisfactory

 

 

 

SECTION 10: SYSTEM USABILITY PROBLEMS

 

When using the system, did you experience problems with any of the following:

 

1 Working out how to use the system
2 Lack of guidance on how to use the system
3 Poor system documentation
4 Understanding how to carry out the tasks
5 Knowing what to do next
6 Understanding how the information on the screen relates to what you are doing
7 Finding the information you want
8 Information which is difficult to read properly
9 Too many colours on the screen
10 Colours which are difficult to look at for any length of time
11 An inflexible, rigid, system structure
12 An inflexible HELP (guidance) facility
13 Losing track of where you are in the system or what you are doing or have done
14 Having to remember too much information whilst carrying out a task
15 System response times that are too quick for you to understand what is going on
16 Information that does not stay on the screen long enough for you to read it
17 System response times that are too slow
18 Unexpected actions by the system
19 An input device that is difficult or awkward to use
20 Knowing where or how to input information
21 Having to spend too much time inputting information
22 Having to be very careful in order to avoid errors
23 Working out how to correct errors
24 Having to spend too much time correcting errors
25 Having to carry out the same type of activity in different ways

 

 

SECTION 11: GENERAL QUESTIONS ON SYSTEM USABILITY

 

Please give your views on the usability of the system by answering the questions below in the spaces provided. There are no right or wrong answers.

 

  1. What are the best aspects of the system for the user?

 

 

 

 

 

  1. What are the worst aspects of the system for the user?

 

 

 

 

 

  1. Are there any parts of the system which you found confusing or difficult to fully understand?

 

 

 

 

 

  1. Were there any aspects of the system which you found particularly irritating although they did not cause major problems?

 

 

 

 

 

  1. What were the most common mistakes you made when using the system?

 

 

 

 

 

  1. What changes would you make to the system to make it better from the user’s point of view?

 

 

 

 

 

  1. Is there anything else about the system you would like to add?

 

Blank Tables

 

 

 

The following pages contain blank tables for the main MUSE products. To avoid alternating between diagram editor and word processor during design, these can be photocopied and used for making hand-written notes whilst the corresponding diagrams are being produced.

Once the diagrams are completed, it is recommended that the tables are typed up so that a complete record of the design process can be maintained on disk.

 

 

 

Task Description Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Name | Description | Observation | Design Implication | Speculation

[blank rows for hand-written entries]

Generalised Task Model Supporting Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Name | Description | Observation | Design Implication | Speculation

[blank rows for hand-written entries]

Statement of User Needs: User and Device Actions

Title:________________________________________       Page:____________

Date:_________________                                                                                              Author:__________

User and Device Actions

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Statement of User Needs: User mental processes and mental model

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

User mental processes and mental model

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Statement of User Needs: Task (Domain) Objects

Title:________________________________________       Page:____________

Date:_________________                                                                                              Author:__________

Task (Domain) Objects

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Statement of User Needs: User and device costs

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

User and device costs

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Statement of User Needs: Physical aspects; device construction, appearance and layout.

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

Physical aspects; device construction, appearance and layout.

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Statement of User Needs: Miscellaneous

Title:________________________________________       Page:____________

Date:_________________                                                                                              Author:__________

Miscellaneous

 

Problem | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

Feature | Caused by | Consequences | Addressed by

[blank rows for hand-written entries]

DoDD(y) Supporting Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Notes:

 

  • The relations in the table are intended to be read in the direction of the arrow in the DoDD(y) diagram

 

Node | Description | Number | Relation

[blank rows for hand-written entries]

Composite Task Model Supporting Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Name | Description | Design Comments

[blank rows for hand-written entries]

System / User Task Model Supporting Table

Title:________________________________________       Page:____________

Date:_________________                                                                                              Author:__________

 

Name | Description | Design Comments

[blank rows for hand-written entries]

Interaction Task Model Supporting Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Name | Description | Design Comments

[blank rows for hand-written entries]

Dialog and Error Message Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Message number | Message

[blank rows for hand-written entries]

Dictionary of Screen Objects Table

Title:________________________________________       Page:____________

Date:_________________                                                                                               Author:__________

 

Screen object | Description | Design Attributes

[blank rows for hand-written entries]

[1] Also if particular OMT products have not been prepared at the time of the cross-check.

[2] The user object model is taken from Redmond-Pyle, D. and Moore, A. (1995) ‘Graphical User Interface Design and Evaluation (GUIDE): A Practical Process’, Prentice Hall, London, and the user object model procedures reproduced here are based on those by Redmond-Pyle.

 

 

 

 

 

 

 


Formulating the cognitive design problem of Air Traffic Management

John Dowell
Department of Computer Science, University College London

Evolutionary approaches to cognitive design in the air traffic management (ATM) system can be attributed with a history of delayed developments. This issue is well illustrated in the case of the flight progress strip where attempts to design a computer-based system to replace the paper strip have consistently been met with rejection. An alternative approach to cognitive design of air traffic management is needed and this paper proposes an approach centered on the formulation of cognitive design problems. The paper gives an account of how a cognitive design problem was formulated for a simulated ATM task performed by controller subjects in the laboratory. The problem is formulated in terms of two complementary models. First, a model of the ATM domain describes the cognitive task environment of managing the simulated air traffic. Second, a model of the ATM worksystem describes the abstracted cognitive behaviours of the controllers and their tools in performing the traffic management task. Taken together, the models provide a statement of worksystem performance, and express the cognitive design problem for the simulated system. The use of the problem formulation in supporting cognitive design, including the design of computer-based flight strips, is discussed.

1. Cognitive design problems

1.1. Crafting the controller’s electronic flight strip

Continued exceptional growth in the volume of air traffic has made visible some rather basic structural limitations in the system which manages that traffic. Most clear is that additional increases in volume can only be achieved by sacrificing the ‘expedition’ of the traffic, if safety is to be ensured. As traffic volumes increase, the complexity of the traffic management problem rises disproportionately, with the result that flight paths are no longer optimised with regard to timeliness, directness, fuel efficiency, and other expedition factors; only safety remains constant. Sperandio (1978) has described how approach controllers at Orly airport switch strategies in order to sacrifice traffic expedition and so preserve acceptable levels of workload. Simply, these controllers switch to treating aircraft as groups (or more precisely, as ‘chains’) rather than as separate aircraft to be individually optimised.
For the medium term, there is no ambition of removing the controller from their central role in the ATM system (Ratcliffe, 1985). Therefore, substantially increasing the capacity of the system without qualitative losses in traffic management means giving controllers better tools to assist in their decision-making and to relieve their workload (CAA, 1990). Yet curiously, such tools have not appeared in the operational system at large, in spite of sustained efforts made to produce them.

Take the case of the controller’s flight progress strip. The strip board containing columns of individual paper strips is the tool which controllers use for planning and as such occupies a more central role in their task than even the radar screen (Whitfield and Jackson, 1982). Development of an electronic strip has been a goal for some two decades (Field, 1985), for the simple reason that until the technical sub-system components have access to the controller’s planning, they cannot begin to assist in that planning. Even basic facilities such as conflict detection cannot be provided unless the controller’s plans can be accessed and shared (Shepard, Dean, Powley, and Akl, 1991): automatic detection is of limited value to the controller unless it is able to operate up to the extremes of the controller’s ‘planning horizon’ and to take account of the controller’s intended future instructions.

Attempts to introduce electronic flight strips, including conflict detection facilities, have often met with rejection by controllers. Rejection has usually been on the grounds that designs either mis-represent the controller’s task, or that the benefits they might offer do not offset the increases in cognitive cost entailed in their use. The consistency in this pattern of rejection is of interest since it implicates the approach taken to development.

The approach taken in the United Kingdom has been to develop an electronic system which mimics the structures and behaviours of the paper system. This approach has entailed studies of the technical properties of flight strips, and also their social context of use (Harper, Hughes & Shapiro, 1991), followed by the rapid prototyping of electronic strip designs. But electronic flight strip systems cannot hope to match the physical facility of paper strips for annotation and manipulation, particularly within the work practices of the sector team. Rather, electronic flight strips might only be accepted if their inferior physical properties are compensated by providing innovative functions for actively sharing in the higher level cognitive tasks of traffic management. By actively sharing in tasks such as flight profiling, inter-sector coordinations, etc, electronic flight strips might offset the controller’s cognitive costs at higher levels, resulting in an overall reduction in cognitive cost.

These difficulties in the development of the electronic flight strip are symptomatic of the general approach taken to cognitive design within the ATM system. It is an approach which emphasises the value of incremental and evolutionary change. But it is also one which relies, not so much on ‘what is known’ about the system, as on what is ‘tried and tested’. This craft-like approach (Long and Dowell, 1989) has resulted in effective stalemate in respect of the controller’s task, since it excludes innovative forms of cognitive design. Without an explicit, complete or coherent analysis of the Air Traffic Management task, the changes resulting from innovative designs cannot be predicted and therefore must be avoided. An alternative approach is needed, and one which offers the required analysis is cognitive engineering, as now discussed.

1.2. Cognitive engineering as formulating and solving cognitive design problems

The development of the ATM system can be seen as an exemplary form of cognitive design problem, one which subsumes a domain of cognitive work (the effective control of air traffic movements) and a worksystem comprising cognitive agents (the controllers) and their cognitive tools (e.g., flight strips). Moreover, it critically includes the effectiveness of that worksystem in performing its work – the actual quality of the air traffic management achieved and the cognitive costs to the worksystem.

Treating air traffic management as a cognitive design problem is consistent with the cognitive engineering approach to development. Cognitive engineering has been variously defined (Hollnagel and Woods, 1983; Norman, 1986; Rasmussen, 1986; Woods and Roth, 1988) as a discipline which can supersede the craft-like disciplines of Human Factors and Cognitive Ergonomics. A review of definitions can be found in Dowell and Long (1998). As a discipline, cognitive engineering can be distinguished most generally as the application of engineering knowledge of users, their work and their organisations to solving cognitive design problems. Its characteristic process is one of ‘formulate then solve’ problems of cognitive design, in contrast with ad hoc approaches to improving cognitive systems. Norman (1986) identifies approximation and the systematic trade-off between design decisions as basic features of this process. Ultimately, cognitive engineering seeks engineering principles which can prescribe solutions to cognitive design problems (Norman, 1986; Long and Dowell, 1989).

This paper presents the formulation of the cognitive design problem for a simulated ATM system. To formulate any cognitive design problem takes two starting points (Figure 1). First, there must be some “situation of concern” (Checkland, 1981), in which an instance or class of worksystem is identified as requiring change. In this paper, a simulated ATM system is taken as presenting such a situation of concern (Section 1.4). Second, there must be a conception of cognitive design problems. A conception provides the general concepts, and a language, with which to express particular design problems. Similarly, Checkland (1981) describes how an explicit system model supports the abstraction and expression of problem situations within the soft systems methodology. In this paper, a conception of cognitive design problems proposed by Dowell and Long (1998) supplies the framework for the problem formulation (see Figure 1). That conception is now summarised.

Figure 1. Formulation of a cognitive design problem. The problem is abstracted over a simulated ATM system which presents a situation of concern. The problem formulation instantiates a conception for cognitive engineering.

1.3. Conception of cognitive design problems

Cognitive design problems can be expressed in terms of a dualism of domain and worksystem, where the worksystem is designed to perform work in the domain to some desired level of performance (Dowell and Long, 1998). Domains might be generally conceived in terms of their goals, constraints and possibilities. Domains consist of objects identified by their attributes. Attributes emerge at different levels within a hierarchy of complexity within which they are related. Attributes have states (or values) and so exhibit an affordance for change. Desirable states of attributes we recognise as goals. Work occurs when the attribute states of objects are changed by the behaviours of a worksystem whose intention it is to achieve goals. However work does not always result in all goals being achieved all of the time, and the variances between goals and the actual outcomes of work are expressed by task quality.

The worksystem consists of the cognitive agents and their cognitive tools (technical sub-systems) which together perform work within the same domain. Being constituted within the worksystem, the cognitive agents and their tools are both characterised in terms of structures and behaviours. Structures provide the component capabilities for behaviour; most centrally, they can be distinguished as representations and processes. Behaviours are the actualisation of structures: they occur in the processing and transformation of representations, and in the expression of cognition in action. There are, therefore, both physical and mental (or virtual) forms of both structures and behaviours. Hutchins (1994) notes that this distinction between structure and behaviour corresponds with a separation of task and algorithm (Marr, 1982); here, however, a task is treated as the conjunction of transformations in a domain and the intentional behaviours which produce them.
Work performed by the worksystem incurs resource costs. Structural costs are the costs of providing cognitive structures; behavioural costs are the costs of using those structures. Both structural and behavioural costs may be separately attributed to the agents of a worksystem. The performance of the worksystem is the relationship of the total costs to the worksystem of its behaviours and structures, and the task quality resulting from the decisions made. Critically then, the behaviours of the worksystem are distinguished from its performance (Rouse, 1980) and this distinction allows us to recognise an economics of performance. Within this economy, structural and behavioural costs may be traded-off both within and between the agents of the worksystem, and they may also be traded-off with task quality. Sperandio’s observations of the Orly controllers, discussed earlier, are an example of the trade-off of task quality for the controller’s behavioural costs.
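The economy of performance described above can be caricatured in a few lines. This is a hedged sketch under the assumption that costs and task quality can be expressed on a single common scale; the conception itself does not prescribe any such combination, and the function name and values here are purely illustrative.

```python
# Illustrative sketch only: performance as the relation of task quality to
# the worksystem's total structural and behavioural costs. The additive
# combination is an assumption for illustration, not part of the conception.

def performance(task_quality: float, structural_costs: float,
                behavioural_costs: float) -> float:
    return task_quality - (structural_costs + behavioural_costs)

# The Orly trade-off in caricature: accepting lower task quality (treating
# aircraft as chains) to reduce behavioural costs can leave overall
# performance unchanged.
assert performance(100, 20, 30) == performance(90, 20, 20)
```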

It follows from this conception that the particular cognitive design problem of ATM should be formulated in terms of two models,

  • a model of the ATM domain, describing the air traffic processes being managed, and
  • a model of the ATM worksystem, describing the agents and technical sub-systems (tools) which perform that management.

These two models are indicated schematically in Figure 1, as the major components of the ATM problem formulation.

1.4. Simulated air traffic management task

The ATM cognitive design problem formulated here is of a simulated ATM system which presents a situation of concern: specifically, the unacceptable increases in workload, and the losses in traffic expedition, with increasing traffic volumes. The simulation reconstructs a form of the air traffic management task. This task is performed by trained subject ‘controllers’ who monitor the traffic situation and issue instructions to the simulated aircraft. The simulation is built on a computational traffic model and provides the common form of ATM control suite (Dowell, Salter and Zekrullahi, 1994). It provides a radar display of the current state of traffic on a sector consisting of the intersection of two en-route airways. It also provides commands via pull-down menus for requesting information from and instructing aircraft (i.e., for interrogating and modifying the traffic model). Last, the control suite includes an inclined rack of paper flight progress strips, arranged in columns by different beacons or reporting points. For each beacon an aircraft will pass on its route through the sector, a strip is provided in the appropriate rack column. The strips tell the controller which aircraft will be arriving when, where, and how (i.e., their height and speed), their route, and their desired cruising height and cruising speed.

Using the radar display and flight strips, the subject controller is able continuously to plan the flights of all aircraft and to execute the plan by making appropriate interventions (issuing speed and height instructions). The subject controller works in a ‘planning space’ in which, reproducing the real system, aircraft must be separated by a prescribed distance, yet should be given flight paths which minimise fuel use, flying time and number of manoeuvres, whilst also achieving the correct sector exit height (Hopkin, 1971). Fuel use characteristics built into the computational traffic model constrain the controller’s planning space with regard to expedition, since fuel economy improves with height and worsens with increasing speed. Because of this characteristic, controllers may not solve the planning problem satisfactorily simply by distributing all aircraft at different levels and speeds across the sector. Additional airspace rules (for example, legal height assignments) both constrain and structure the controller’s planning space. The controller works alone on the simulation, performing a simplified version of the tasks which would be performed by a team of at least two controllers in the real system; the paper flight strips include printed information which a chief controller would usually add whilst coordinating adjacent sectors.
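The fuel-use characteristic that constrains the controller’s planning space can be sketched in a few lines: fuel burn falls with flight level and rises with speed. The functional form and all coefficients below are illustrative assumptions, not the simulation’s actual traffic model.

```python
# Minimal sketch of the fuel-use characteristic described above. The linear
# form and every constant are invented for illustration only.

def fuel_burn_rate(flight_level: float, speed_knots: float) -> float:
    """Illustrative burn rate (gallons/sec): economy improves with height
    and worsens with increasing speed."""
    BASE = 2.0            # assumed baseline burn
    HEIGHT_SAVING = 0.02  # assumed saving per flight level
    SPEED_COST = 0.004    # assumed cost per knot
    return max(0.1, BASE - HEIGHT_SAVING * flight_level
                         + SPEED_COST * speed_knots)

# A higher, slower flight burns less per second than a lower, faster one,
# which is why distributing aircraft across levels and speeds is not a
# satisfactory planning solution on its own.
assert fuel_burn_rate(35, 420) < fuel_burn_rate(10, 480)
```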

Increasing volumes of air traffic within this system inevitably result in sacrifices in traffic expedition, if safety is to be maintained. Simply, the traffic management problem (akin to a “game of high speed, 3D chess”, Field, 1985) becomes excessively complex to solve. Workload increases disproportionately with additional traffic volumes. The simulated system therefore presents a realistic situation of concern over which a cognitive design problem can be formulated, as now described.

2. Model of the ATM domain

The model of the ATM domain is given in this section. Because of its application to the laboratory simulation, the model makes certain simplifications. For example, the simulation does not represent the wake turbulence of real aircraft, a factor which may significantly determine the closeness with which certain aircraft may follow others; accordingly, the framework makes no mention of wake turbulence. However, the aim here is to present a basic, but essentially correct characterisation of the domain represented by the simulation. Later refinement, by the inclusion of wake turbulence for instance, is assumed to be possible having established the basic characterisation.

2.1 Airspace objects, aircraft objects, and their dispositional attributes

An instance of an ATM domain arises in two classes of elemental objects: airspace objects, and aircraft objects, defined by their respective attributes. Aircraft objects are defined by their callsign attribute and their type attributes, for example, laden weight and climb rate. Airspace objects include sector objects, airway interval objects, flight level objects, and beacon objects. Each is defined by their respective attributes, for example, beacons by their location. Importantly, the attributes of aircraft and airspace objects have an invariant but necessary state with respect to the work of the controller: these kinds of attribute we might call ‘dispositional’ attributes.

2.2 Airtraffic events and their affordant attributes

Notions of traffic intuitively associate transportation objects with a space containing them. In the same way, an instance of an ATM domain defines a class of airtraffic events in the association of airspace objects with aircraft objects at particular instants. Airtraffic events are, in effect, a superset of objects, where each object exists for a defined time. They have attributes emerging in the association of aircraft objects with airspace objects; these minimally include the attributes of:

  • Position (given by airway interval object currently occupied)
  • Altitude (given by flight level (FL) object currently occupied)
  • Speed (given in knots, derived from rate of change in Position and Altitude)
  • Heading (given by next routed beacon object(s))
  • Time (standard clock time)

Unlike the dispositional attributes of airspace and aircraft objects, the PASHT attributes (Position, Altitude, Speed, Heading, Time) of airtraffic events have a variable state determined by the interventions of the controller; they might be said to be ‘affordant’ attributes.
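The distinction drawn above between dispositional and affordant attributes can be sketched as a pair of data structures. All field names and values are illustrative assumptions, not taken from the simulation.

```python
# Sketch of the two attribute classes: dispositional attributes (invariant
# with respect to the controller's work) on aircraft objects, and affordant
# PASHT attributes on airtraffic events. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)            # frozen: dispositional, hence invariant
class Aircraft:
    callsign: str                  # dispositional identity attribute
    laden_weight: float            # dispositional type attribute
    climb_rate: float              # dispositional type attribute

@dataclass
class AirtrafficEvent:             # an aircraft associated with airspace
    aircraft: Aircraft
    position: str                  # affordant: airway interval occupied
    altitude: int                  # affordant: flight level occupied
    speed: float                   # affordant: knots
    heading: str                   # affordant: next routed beacon
    time: float                    # standard clock time

event = AirtrafficEvent(Aircraft("BA123", 60000.0, 1800.0),
                        position="A1-A2", altitude=240, speed=420.0,
                        heading="BKY", time=0.0)
```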

2.3 Airtraffic event vectors and their task attributes

Each attribute of an airtraffic event can possess any of a range of states; generally, each attribute affords transformation from one state to another. However there is an obvious temporal continuity in the ATM domain since time-ordered series of airtraffic events are associated with the same aircraft. Such a series we can describe as an ‘airtraffic event vector’. Whilst event vectors subsume the affordant attributes (the PASHT attributes) of individual airtraffic events, they also exhibit higher level attributes. The task of the controller arises in the transformation of these ‘task attributes’ of event vectors.

The two superordinate task attributes of event vectors are safety and expedition. Safety is expressed in terms of a ‘track separation’ and a vertical separation. Track separation is the horizontal separation of aircraft, whether in passing, crossing or closing traffic patterns, and is expressed in terms of flying time separation (e.g., 600 seconds). A minimum legal separation is defined as 300 seconds, and all separations less than this limit are judged unsafe. Aircraft on intersecting paths but separated by more than the legal minimum are judged to be less than safe, and the level of their safety is indexed by their flying time separation. Aircraft not on intersecting paths (and outside the legal separation) are judged to be safe. A legal minimum for vertical separation of one flight level (500m) is adopted: aircraft separated vertically by more than this minimum are judged to be safe, whilst a lesser separation is judged unsafe.
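The separation rules above can be restated as a small decision function: a 300-second minimum legal track separation, a one-flight-level (500 m) minimum vertical separation, and an intermediate ‘less than safe’ judgement for intersecting paths outside the legal minimum. The function and its labels are a sketch of the rules as stated, not the paper’s own formalisation.

```python
# Hedged sketch of the safety judgements described above.
LEGAL_TRACK_SEP_S = 300   # minimum legal track separation (flying time, s)
LEGAL_VERT_SEP_FL = 1     # minimum legal vertical separation (1 FL = 500 m)

def judge_safety(track_sep_s: float, vert_sep_fl: float,
                 intersecting: bool) -> str:
    if vert_sep_fl > LEGAL_VERT_SEP_FL:
        return "safe"             # vertically separated by more than minimum
    if track_sep_s < LEGAL_TRACK_SEP_S:
        return "unsafe"           # inside the legal track minimum
    if intersecting:
        return "less than safe"   # safety indexed by flying time separation
    return "safe"                 # non-intersecting, outside legal minimum

assert judge_safety(600, 0, intersecting=True) == "less than safe"
assert judge_safety(200, 0, intersecting=True) == "unsafe"
assert judge_safety(200, 3, intersecting=True) == "safe"
```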

Expedition subsumes the task attributes of:

  • ‘flight progress’, that is, the duration of the flight (e.g., 600 seconds) from entry onto the sector to the present event;
  • ‘fuel use’, that is, the total fuel used (e.g., 8000 gallons) from entry onto the sector;
  • ‘number of manoeuvres’, that is, the total number of instructions for changes in speed or navigation issued to the aircraft from entry onto the sector; and
  • ‘exit height variation’, that is, the variation (e.g., 1.5 FLs) between actual and desired height at exit from the sector.

Three different sorts of airtraffic event vector can be defined: actual, projected, and goal. Each vector possesses the same classes of task attribute, but each arises from different air traffic events. Figure 2 schematises the three event vectors within an event vector matrix.

  • First, the actual event vector describes the time-ordered series of actual states of airtraffic events: in other words, how and where an aircraft was in a given period of its flight. Aircraft within the same traffic scenario can be described by separate, but concurrent actual event vectors. Figure 2 schematises an actual event vector (actual0 … actualn, actualend) related to the underlying sequence of air traffic events (PASHT values). For example, actual1 represents the actual task attribute values for a given aircraft at the first instruction issued by the controller to the airtraffic. It expresses the actual current safety of a particular aircraft, the current total of fuel used, the current total of time taken in the flight, and the current total of manoeuvres made. Exit height variation applies only to the final event (actualend) in the event vector, when the final exit height is determined.
  • Second, the goal event vector describes the time-ordered series of goal states of airtraffic events: in other words, how and where an aircraft should have been in a given period of its flight. Figure 2 schematises a goal event vector (goal0 … goaln, goalend) within the event vector matrix. For example, goal1 represents the goal values of the task attributes at the controller’s first intervention, in terms of the goal level of safety (i.e., the aircraft should be safe), and current goal levels of fuel used, time taken, and number of manoeuvres made. These values can be established by idealising the trajectory of a single flight made across the sector in the absence of any other aircraft, where the trajectory is optimised for fuel use and progress. The goal value for exit height variation applies only to the final event (goalend).
  • Third, the projected event vector describes the time-ordered series of projected future states of airtraffic events: in other words, how and where an aircraft would have been in a given period of its flight, given its current state – and assuming no subsequent intervention by the controller (an analysis commonly provided by ‘fast-time’ traffic simulation studies). In practice, only the projected exit state, and projected separation conflicts at future intermediate events, are needed for the analysis, and only from the start of the given period and at each subsequent controller intervention. In this way, the potentially large number of projected states is limited. Figure 2 schematises a projected event vector (projct0(end) … projctn(end)) within the event vector matrix. For example, projct1(end) represents the projected end values of the task attributes following the controller’s first intervention. It describes the projected final safety state of the aircraft, total projected fuel use for its flight through the sector, its total projected flight time through the sector, the total number of interventions and the projected exit height variation.

Figure 2. The event vector matrix

An event vector matrix of this form was constructed for each of the controller subjects performing the simulated air traffic management task. It was constructed in a spreadsheet using a protocol of aircraft states and controller instructions collected by the computational traffic model.
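A minimal sketch of such an event vector matrix, assuming each cell holds the task-attribute values at a controller intervention and showing a single attribute (fuel used, in gallons) for clarity. The row names follow Figure 2; all numbers are invented for illustration.

```python
# Illustrative event vector matrix for one aircraft, one task attribute
# (fuel used, gallons). Columns run from sector entry to the final event.
matrix = {
    # actual0 .. actualn, actualend: recorded states at each intervention
    "actual":    [0, 2600, 5400, 8200],
    # goal0 .. goaln, goalend: idealised single-flight trajectory values
    "goal":      [0, 2500, 5000, 7500],
    # projct0(end) .. projctn(end): projected end state after each intervention
    "projected": [9000, 8600, 8300, 8200],
}

# e.g. actual fuel used at the second intervention against its goal value:
shortfall = matrix["actual"][2] - matrix["goal"][2]   # 400 gallons over goal
```

In the study itself this matrix was held in a spreadsheet, one per controller subject, populated from the protocol of aircraft states and controller instructions.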

The differentiation of actual, goal and projected event vectors now enables expression of the quality of air traffic management by the controller.

2.4 Quality of air traffic management (ATMQ)

The final concept in this framework for describing the ATM domain is of task quality. Task quality describes the actual transformation of domain objects with respect to goals (Dowell and Long, 1998). In the same way, the Quality of Air Traffic Management (ATMQ) describes the success of the controller in managing the air traffic with regard to its goal states.
ATMQ subsumes the Quality of Safety Management (QSM) and Quality of Expedition Management (QEM). Although there are examples (Kanafani, 1986) of such variables being combined, here the separation of these two management qualities is maintained. Since expedition subsumes the attributes of fuel use, progress, exit conditions and manoeuvres, each of these task attributes also has a management quality. So, QEM comprises:

  • QFM: Quality of fuel use management
  • QPM: Quality of progress management
  • QXM: Quality of exit conditions management
  • QMM: Quality of manoeuvres management

These separate management qualities are combined within QEM by applying weightings according to their perceived relative salience (Keeney, 1993).

A way of assessing any of these traffic management qualities would be (following Debenard, Vanderhaegen and Millot, 1992) to compare the actual state of the traffic with the goal state. But such an assessment could not be a true reflection of the controller’s management of the traffic because air traffic processes are intrinsically goal directed and partially self-determining. In other words, each aircraft can navigate its way through the airspace without the instructions of the controller, each seeking to optimise its own state; yet because each is blind to the state and intentions of other aircraft, the safety and expedition of the airtraffic will be poorly managed at best. ATMQ, then, must be a statement about the ‘added value’ of the controller’s contributions to the state of a process inherently moving away from or towards a desired state of safety and expedition. To capture this more complex view of management quality, ATMQ must relate the actual state of the traffic relative to the state it would have had if no (further) controller interventions had been made (its projected state) and relative to its goal state. In this way, ATMQ can be a measure of gain attributable to the controller.

Indices for each of the management qualities included in ATMQ can be computed from the differences between the goal and actual event vectors. The form of the index is such that the quality of management is optimal when a zero value is returned, that is to say, when actual state and goal state are coincident. A negative value is returned when traffic management quality is less than desired (goal state). For QPM and QFM, a value greater than zero is possible when actual states are better than goal states, since it is possible for actual values of fuel consumed or flight time to be less than their goal values. Further, by relating the index to the difference between the goal and projected event vectors, the significance of the ATM worksystem’s interventions over the scenario is given. In this way, the ‘added value’ of the worksystem’s interventions is indicated.
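The sign convention just described can be sketched for a cost-like attribute such as fuel use or flight time. The actual functions are given in the paper’s appendices; the simple difference form below is an illustrative assumption that is merely consistent with the convention stated here.

```python
# Hedged sketch of a management quality index for a cost-like attribute:
# zero when actual and goal coincide, negative when the actual cost exceeds
# its goal, positive when the actual cost beats the goal.

def quality_index(goal: float, actual: float) -> float:
    """e.g. an illustrative QFM or QPM for one flight."""
    return goal - actual

def added_value(projected: float, actual: float) -> float:
    """Gain attributable to the worksystem: improvement of the actual
    outcome over the projected no-intervention outcome."""
    return projected - actual

assert quality_index(7500, 7500) == 0      # optimal: actual meets goal
assert quality_index(7500, 8200) == -700   # worse than goal
assert added_value(9000, 8200) == 800      # interventions saved 800 gallons
```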

Two forms of ATMQ are possible by applying the indices to the event vector matrix (Figure 2). Both forms will be illustrated here with the data obtained from the controller subjects performing the simulated ATM task. The analysis of ATMQ is output from the individual event vector matrices constructed in spreadsheets, as earlier explained.

The first form of ATMQ describes the task quality of traffic management over a complete period. It describes the sum of management qualities for all aircraft over their flight through the sector and so can be more accurately designated ATMQ(fl) to identify it as referring to completed flights. It is computed by using the initially projected, goal and actual final attribute values (projct0(end), goalend, actualend) for each event vector (i.e., the ‘beginning and end points’). The functions by which these ATMQ(fl) management qualities are calculated are given in Appendix 1.

Figure 3 illustrates the assessment of ATMQ(fl) – in other words the assessment of management qualities over completed flights for the controllers separately managing the same traffic scenario. The scenario consisted of six aircraft entering the sector over a period of 45 minutes. ATMQ(fl) was first computed for each form of management quality, for each aircraft under the control of each controller. Figure 3 presents a summation of this assessment for each of the controllers for each of the five management qualities but for all six individual flights. For example, we are able to see the quality with which Controller 1 managed the safety (i.e., QSM) and fuel use (i.e., QFM) of all six aircraft under her control over the entire period of the task.

Figure 3. Assessment of Air Traffic Management Quality for all completed flights of each controller.

It is important to note that ATMQ(fl) is achronological, insofar as it describes the quality of management of each flight after its completion: hence, it would return the same value whether all aircraft had been on the sector at the same time during the scenario, or whether only one flight had been on the sector at any one time. Whilst this kind of assessment provides an essential view of the acquittal of management work from the point of view of each aircraft, it provides a less satisfactory view of the acquittal of management work from the point of view of the worksystem.

The second form of ATMQ describes the task quality of traffic management for each intervention made by an individual controller. This second kind of task quality is designated as ATMQ(int), to identify it as referring to interventions and is computed from the currently projected end state, previously projected end state, and new goal end state (for example, projct1(end), projct2(end), goalend for the second intervention). The functions by which these ATMQ(int) management qualities are calculated are given in Appendix 2.

Figure 4 illustrates this second principal form of ATMQ – the assessment of ATMQ(int) for all aircraft with each intervention of an individual controller. For the sake of clarity, only the qualities of safety (QSM), fuel use (QFM) and progress (QPM) are shown. For each management intervention made by the controller during the period of the task, these three management qualities are described, each triad of data points relating to an instruction issued by the controller to one of the six aircraft.

Figure 4. Qualities of: progress management (QPM); fuel use management (QFM); and safety management (QSM) achieved by Controller 3 during the task.

Finally, although the analysis of ATMQ requires the worksystem’s interventions to be explicit, it does not require that there actually be any interventions. After all, when no problems are present in a process, good management is that which monitors but makes no intervention. Similarly, if the projected states of airtraffic events are the same as the goal states, then good management is that which makes no interventions, and in this event, ATMQ would return a value of zero.

To summarise, the ATM domain model describes the work performed in the Air Traffic Management task. It describes the objects, attributes, relations and states in this class of domain, as related to goals and the achievement of those goals. The model applies the generic concepts of domains given by the cognitive engineering conception presented earlier. The model describes the particular domain of the simulated ATM task from which derives the example assessments of traffic management quality given here. Corresponding with the domain model, the worksystem model presented in the next section describes the system of agents that perform the Air Traffic Management task.

3. Model of the ATM worksystem

A model of the worksystem which performs the Air Traffic Management task can be generated directly from the domain model. The representations and processes minimally required by the worksystem can be derived from the constructs which make up the domain model. In this way, ecological relations (Vera and Simon, 1993) bind the worksystem model to the domain. Woods and Roth (1988) identify the ecological modelling of systems as a central feature of cognitive engineering, given the concern for designing systems in which the cognitive resources and capabilities of users are matched to the demands of tasks.

The ecological approach to modelling worksystems has been contrasted (Payne, 1991) with both the architecture-driven and the phenomenon-driven approaches: that is to say, with the deductive application of general architectures to models of specific behaviours (Howes and Young, 1997), and with attempts to generate ‘local’ models from empirical observations of specific performance issues. However, this distinction is too sharply drawn and needs to be qualified, since the organisation of a worksystem model (as opposed to its content) is not determined by the domain model. First, the ATM worksystem model instantiates the conception of cognitive design problems; hence the concepts of structure, behaviour and costs are used as a primary partitioning of the ATM worksystem model. Second, the ATM worksystem model adopts specific constructs from the blackboard architecture (Hayes-Roth and Hayes-Roth, 1979) to organise the particular relations between the representations and processes deriving from the ATM domain model. Hence a general cognitive architecture is employed selectively in the ATM worksystem model.

3.1 Structures of the ATM worksystem

The structures of the ATM worksystem consist, at base, of representations and processes. The representations constructed and maintained by the ATM worksystem are shown schematically in Figure 5, contained within a blackboard of airtraffic events, a blackboard of event vectors, and a schedule of planned instructions.

The blackboard of airtraffic events contains a representation of the current airtraffic event (e1), constructed from sensed traffic data. The blackboard has two dimensions, a real time dimension and a dimension of hypotheses about the PASHT attribute states of individual aircraft. Knowledge sources associated with this blackboard support the construction of hypotheses about the attributes of airtraffic events. For example, knowledge sources concerning the topology of the sector airways support the construction of hypotheses about heading attributes. As the ATM worksystem monitors flights through the sector, it maintains a representation of a succession of discrete airtraffic events.

A blackboard of event vectors contains separate representations of a current event vector, a goal event vector, and a planned vector. The current event vector expresses the actual values of task attributes deriving from the current airtraffic event, and the projected values of those task attributes at future events. A representation of the goal event vector expresses the goal values of task attributes for the current and projected airtraffic events. A representation of a planned event vector expresses planned values of task attributes for the current and projected airtraffic events. Critically, this vector is distinct from the goal event vector, allowing that the planned state of the traffic will not necessarily coincide with the idealised goal state.

The blackboard of event vectors has two dimensions, a real time dimension and a dimension of hypotheses about the task attributes of event vectors. The hypotheses then concern the attributes of safety and expedition of each vector, where the attribute of expedition subsumes the individual attributes of progress, fuel use, number of manoeuvres and exit height variation. Knowledge sources separately associated with this blackboard support the construction of hypotheses about the attributes of event vectors. For example, knowledge sources about the minimum legal separations of traffic, and about aircraft fuel consumption characteristics, support the construction of hypotheses about safety and fuel use, respectively. Other knowledge sources support the ATM worksystem in reasoning about differences between the current vector and goal vector, and in constructing the planned vector.

Apparent within the blackboard of event vectors are a distinct monitoring horizon and planning horizon. The current event vector extends variably into future events. The temporal limits of the current vector constitute a ‘monitoring horizon’ of the ATM worksystem: it is the extent to which the worksystem is ‘looking ahead’ for traffic management problems. Similarly the planned event vector extends variably into the future events. Its temporal limits constitute a ‘planning horizon’: it is the extent to which the ATM worksystem is ‘planning ahead’ to solve future traffic management problems. Both monitoring horizon and planning horizon can be expected to be reduced with increasing traffic volumes and complexities.

The planned vector is executed by a set of planned instructions. Planned instructions are generated by reasoning about the set of planned vectors for individual aircraft and the options for possible instructed changes in speed, heading or altitude. This reasoning is again supported by specialised knowledge sources. The worksystem maintains a schedule of planned instructions, shown in Figure 5 as a separate representation: instruction i1 is shown executed at time t1.
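The representations described above can be pictured, purely by way of illustration, as a set of simple data structures. The following Python sketch is not part of the original model: all class and field names are assumptions introduced here, and the PASHT attributes are represented loosely as a mapping from aircraft to attribute values.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AirtrafficEvent:
    """One discrete airtraffic event, carrying hypothesised PASHT
    attribute states for individual aircraft (names are assumptions)."""
    time: float
    pasht: Dict[str, Dict[str, float]] = field(default_factory=dict)

@dataclass
class EventVector:
    """Task attributes abstracted from an event: safety plus the
    expedition attributes (progress, fuel use, manoeuvres, exit height)."""
    time: float
    safety: float = 0.0
    progress: float = 0.0
    fuel_use: float = 0.0
    manoeuvres: int = 0
    exit_height_variation: float = 0.0

@dataclass
class PlannedInstruction:
    """An entry in the schedule of planned instructions."""
    aircraft: str
    change: str            # 'speed', 'heading' or 'altitude'
    issue_time: float
    executed: bool = False

@dataclass
class WorksystemRepresentations:
    """The three representation stores of the ATM worksystem."""
    event_blackboard: List[AirtrafficEvent] = field(default_factory=list)
    current_vectors: List[EventVector] = field(default_factory=list)  # to the monitoring horizon
    goal_vectors: List[EventVector] = field(default_factory=list)
    planned_vectors: List[EventVector] = field(default_factory=list)  # to the planning horizon
    schedule: List[PlannedInstruction] = field(default_factory=list)
```

The separation of current, goal and planned vectors in the sketch mirrors the distinction drawn above: the planned state of the traffic need not coincide with the idealised goal state.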

The complexity of the representations of the ATM worksystem is complemented by the simplicity of its processes. Two kinds of abstract process are specified, generation processes and evaluation processes, both of which can address the event-level and the vector-level representations. Two kinds of physical process are specified, addressed to the event-level representations: monitoring processes and executing processes.

Figure 5. Schematic view of representations maintained by the ATM worksystem

3.2 Behaviours of the ATM worksystem

The behaviours of the ATM worksystem are the activation of its structures, both physical and abstract, which occurs when the worksystem is situated in an instance of an ATM domain. Behaviours, whether physical or abstract, are understood as the processing of representations, and so can be defined in the association of processes with representations. Eight kinds of ATM worksystem behaviour can be defined, grouped in three super-ordinate classes of monitoring, planning and controlling (i.e., executing) behaviours:

Monitoring behaviours

  • Generating a current airtraffic event. The ATM worksystem generates a representation of the current airtraffic event. This behaviour is a conjunction of both monitoring and generating processes addressing the monitoring space. The representation which is generated expresses values of the PASHT attributes of the current airtraffic event.
  • Generating a current event vector. The ATM worksystem generates a representation of the current vector by abstraction from the representation of the current airtraffic event. The representation expresses current actual values, and currently projected values, of the task attributes of the event profile. In other words, it expresses the actual and projected safety and expedition of the traffic.
  • Generating a goal event vector. The representation of the goal vector is generated directly by a conjunction of monitors and generators. The representation expresses goal values of the task attributes of the event profile.
  • Evaluating a current event vector. The ATM worksystem evaluates the current vector by identifying its variance with the goal vector. This behaviour attaches ‘problem flags’ to the representation of the current vector.

Planning behaviours

  • Generating a planned event vector. If the evaluation of the current vector with the goal vector reports an acceptable conformance of the former, then the current vector is adopted as the planned vector. Otherwise, a planned vector is generated to improve that conformance.
  • Evaluating a planned event vector. With the succession of current vector representations, and their evaluation, the ATM worksystem re-evaluates the planned vector and, where necessary, a new planned vector is generated.
  • Generating a planned instruction. Given the planned vector, the instructions needed to realise the plan will be generated by the ATM worksystem, and perhaps too, the actions needed to execute those interventions.

Controlling behaviour

  • Executing a planned intervention. The ATM worksystem generates the execution of planned interventions, in other words, it decides to act to issue an instruction to the aircraft.

These eight worksystem behaviours can be expressed continuously and concurrently. With the changing state of the domain, not least as a consequence of the worksystem’s interventions, each representation will be revised.
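The cycle of monitoring, planning and controlling behaviours can be sketched as a single pass over toy dictionary representations. This Python sketch is illustrative only: the attribute names and the trivial planning rule (adopt the current vector if it conforms, else plan directly for the goal) are assumptions, not part of the model.

```python
def evaluate_vector(current, goal, tol=1e-6):
    """Evaluate a vector against the goal vector, returning 'problem
    flags' for each task attribute that departs from its goal value."""
    return [k for k in goal if abs(current.get(k, 0.0) - goal[k]) > tol]

def worksystem_cycle(sensed_event, goal_vector):
    """One simplified pass through the worksystem behaviours:
    monitor, abstract, evaluate, plan and control."""
    # Monitoring: generate the current event and abstract the current vector.
    current_vector = {'safety': sensed_event['separation'],
                      'progress': sensed_event['distance_flown']}
    flags = evaluate_vector(current_vector, goal_vector)
    # Planning: adopt the current vector if it conforms, else revise it.
    if not flags:
        planned_vector = dict(current_vector)
        instructions = []
    else:
        planned_vector = dict(goal_vector)   # toy revision: plan for the goal
        instructions = [('instruct', flag) for flag in flags]
    # Controlling: the planned instructions would now be executed.
    return planned_vector, instructions
```

In the full model these behaviours run continuously and concurrently, rather than in the strict sequence the sketch suggests.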

3.3 Cognitive costs

Cognitive costs can be attributed to the behaviours of the ATM worksystem and denote the cost of performing the air traffic management task. These cognitive costs are a critical component of the performance of the ATM worksystem, and so too of this formulation of the ATM cognitive design problem. Cognitive costs are derived from a model of the eight classes of worksystem behaviour as they are expressed over the period of the air traffic management task. The model of worksystem behaviours is established using a post-task elicitation method, as now described.

Following completion of the simulated traffic management task, the controller subject was required to re-construct their behaviour in the task by observing a video recording of traffic movements on the sector during the task. The recording also showed all requests the controller had made to aircraft for height and speed information, and it showed the instructions that were issued to each aircraft. A set of unmarked flight strips for the traffic scenario was provided. As the video record of the task was replayed, the controller was required to manipulate the flight strips in the way they would have done during the task. For example, as each aircraft entered the sector they were required to move the appropriate strip to the live position. As the aircraft progressed through the sector, its sequence of strips would be ‘made live’ and then discarded. The controller annotated the flight strips with information obtained from each aircraft request made during the task, and with each instruction issued. The controller was required to view the videotape as a sequence of five minute periods. They were able to halt the tape at any point, for example, in order to update the flight strips. However, no part of the videotape could be replayed.

At the end of each five minute period, the controller was required to complete a ‘plan elicitation’ sheet. The plan elicitation sheet required the controller to state for each aircraft, the interventions they were planning to make. The specific planned instruction was to be stated (height or speed change) as well as the location of the aircraft when the instruction would be issued. The controller was asked to identify aircraft for which, at that time, no interventions were planned, whether because consideration had not then been given to that aircraft, or a decision had been made that no further instructions would be needed. When the sheet was completed it was set to one side and the controller then viewed the next five minute period of the videotape, after which they completed a new plan elicitation sheet. In this way, for each aircraft at the end of each five minute interval, all planned interventions were described.

This elicited protocol of sampled planned interventions was then compared with the instructions originally issued, as recorded by the traffic model. The comparison indicated a number of issued instructions whose plan had not been reported in the corresponding previous sampling interval of the post-task elicitation. These additional instructions were taken to indicate planning behaviours wherein a planned intervention had been generated and executed between elicitation points. Hence, the record of executed interventions was used to augment and further complete the record of planned interventions obtained from the post-task elicitation. The result of this analysis was a data set describing the sequence of planned interventions for each aircraft over the period of the traffic management task.

The analysis was continued by abstracting the classes of planned interventions for each aircraft over the scenario, divided again into a succession of five minute intervals. Four different kinds of planned intervention were identified:

  • (i) interventions planned at the beginning of an interval and not executed within
    the interval.
  • (ii) planned interventions which were a revision of earlier plans, but which also
    were not executed within the five minute interval.
  • (iii) planned interventions which were also executed within the same five minute
    interval, plans executed exactly, and plans revised when executed.
  • (iv) plans for interventions made during the five minute interval, but where those
    plans were not described at all at the beginning of the interval.

Each of these intervention plans was identified by its instruction type, that is, whether it was a planned instruction for a height or speed change.
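The four kinds of planned intervention can be expressed as a simple classification over three observable properties of each intervention record. The following Python sketch is an assumption-laden illustration: the argument names are introduced here and do not appear in the original analysis.

```python
def classify_intervention(reported_at_start, is_revision, executed_in_interval):
    """Map one planned intervention onto the four kinds (i)-(iv)
    identified in the analysis (a sketch; names are assumptions)."""
    if reported_at_start and not is_revision and not executed_in_interval:
        return 'i'    # planned at the start of the interval, not executed within it
    if reported_at_start and is_revision and not executed_in_interval:
        return 'ii'   # a revision of an earlier plan, still not executed within it
    if executed_in_interval and reported_at_start:
        return 'iii'  # planned and executed within the same interval
    if executed_in_interval and not reported_at_start:
        return 'iv'   # executed during the interval but never described at its start
    return None
```

Kind (iv) corresponds to the additional instructions recovered from the record of executed interventions, where a plan was generated and executed between elicitation points.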

Representations of airtraffic events, planned event vectors, current vectors and goal vectors are implicit in the analysis of planned interventions. These representations were inferred from the analysis of planned interventions by applying a set of eight rules deriving from the ATM worksystem model, as given in Table 1.

  1. the behaviour of generating a representation of the current airtraffic event was associated with any planned intervention for a given aircraft within a given interval, whether reported or inferred, except where those planned interventions were (a) reported rather than inferred, and (b) a reiteration of a previous reporting of a planned intervention, and (c) not executed within the interval.
  2. the behaviour of generating a representation of the current vector was only associated with those planned interventions already associated with the behaviour of generating an event representation, except where (a) the planned intervention is a revision of an earlier planned intervention (b) and the planned intervention is not executed within the same interval.
  3. the behaviour of generating a goal event vector was only associated with the first planned intervention for each aircraft.
  4. the behaviour of evaluating the current vector was associated with all planned interventions already associated with a behaviour of generating a current vector.
  5. the behaviour of generating a planned vector was associated with all planned interventions already associated with a behaviour of evaluating a current vector.
  6. the behaviour of evaluating a planned vector was associated only with planned interventions which were revisions of earlier planned interventions, regardless of whether they were reported or inferred.
  7. the behaviour of generating a planned intervention was associated with all planned interventions already associated with a behaviour of generating a planned vector, or where the planned intervention was a revision of an earlier reported planned intervention.
  8. the behaviour of generating the execution of a planned intervention was identified directly from the model of planned interventions.

Table 1. Rules applied to constructing the worksystem behaviour model from the analysis of planned interventions.
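The eight rules of Table 1 can be rendered as a rule-application function over a single planned-intervention record. This Python sketch is an interpretation, not the original procedure: the boolean field names and behaviour labels are assumptions introduced for illustration.

```python
def infer_behaviours(pi):
    """Apply the eight Table 1 rules to one planned-intervention record.
    `pi` is a dict of booleans: reported, reiteration, revision,
    executed, first_for_aircraft (all field names are assumptions)."""
    b = set()
    # Rule 1: generate current event, unless a reported, unexecuted reiteration.
    if not (pi['reported'] and pi['reiteration'] and not pi['executed']):
        b.add('generate_event')
    # Rule 2: generate current vector, unless an unexecuted revision.
    if 'generate_event' in b and not (pi['revision'] and not pi['executed']):
        b.add('generate_current_vector')
    # Rule 3: goal vector generated only for the first plan per aircraft.
    if pi['first_for_aircraft']:
        b.add('generate_goal_vector')
    # Rule 4: evaluate the current vector wherever one was generated.
    if 'generate_current_vector' in b:
        b.add('evaluate_current_vector')
    # Rule 5: generate a planned vector wherever the current vector was evaluated.
    if 'evaluate_current_vector' in b:
        b.add('generate_planned_vector')
    # Rule 6: evaluate the planned vector only for revisions.
    if pi['revision']:
        b.add('evaluate_planned_vector')
    # Rule 7: generate a planned intervention from a planned vector or a revision.
    if 'generate_planned_vector' in b or pi['revision']:
        b.add('generate_planned_intervention')
    # Rule 8: execution identified directly from the intervention record.
    if pi['executed']:
        b.add('execute_intervention')
    return b
```

Applying the function across the full record of planned interventions would yield the model of worksystem behaviours from which the cognitive costs are then derived.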

The result of this analysis is a model of the eight cognitive behaviours of the ATM worksystem expressed over the period of the task. Cognitive costs can be derived from this model by applying the following simplifying assumptions. First, costs are atomised, wherein a cost is separately associated with each instance of expressed behaviour. Second, a common cost ‘unit’ is attributed to each such instance. Two different but complementary kinds of assessment of behavioural cognitive costs are possible. A cumulative assessment describes the cognitive costs associated with each class of behaviour over the complete task, based on the total number of expressed instances of this class of behaviour. A continuous assessment describes the cognitive costs associated with each class of behaviour over each interval. The metric used in both forms of assessment is simply the number of instances of expressed behaviour in a specific class.
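Since the metric in both assessments is simply a count of expressed instances per behaviour class, the two assessments can be sketched in a few lines of Python. The log format (a list of timestamped behaviour-class instances) is an assumption introduced here.

```python
from collections import Counter

def cost_assessments(behaviour_log, interval=300):
    """Cumulative and continuous cognitive cost assessments, one cost
    unit per expressed instance of behaviour.
    behaviour_log: list of (time_in_seconds, behaviour_class) pairs."""
    # Cumulative: total instances of each class over the complete task.
    cumulative = Counter(cls for _, cls in behaviour_log)
    # Continuous: instances of each class within each sampling interval.
    continuous = {}
    for t, cls in behaviour_log:
        continuous.setdefault(int(t // interval), Counter())[cls] += 1
    return cumulative, continuous
```

The cumulative assessment corresponds to Figure 6 and the continuous assessment, taken over 300 second intervals, to Figure 7.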

An example of the cumulative assessment of the controller’s behavioural costs is given in Figure 6. The figure presents the behavioural costs of each class of controller behaviour exhibited during the traffic management task.

Figure 6. Cumulative assessment of cognitive costs for each class of ATM worksystem behaviour

Figure 7. Continuous assessment of cognitive behavioural costs.

Examining the variation across categories, the costs of generating goal vectors were less than any other category. The costs of generating a representation of the current event, and the costs of generating planned interventions, were greater than any other category. Other categories of behaviour incurred seemingly equivalent levels of cost. In terms of the superordinate categories of behaviour, the cognitive costs of planning appear equivalent to those of monitoring and controlling.

An example of the continuous assessment of the same controller’s behavioural costs is given in Figure 7. It is an assessment of all classes of cost over each sampling interval (300 seconds) of the task. For simplicity, this assessment is presented as the costs of the superordinate classes of behaviour of monitoring, planning and controlling over each interval. Again, the assessment is produced directly from the number of expressed instances of each class of worksystem behaviour. The continuous assessment includes the average across all costs over each interval.

The continuous assessment suggests that costs rose from the first five minute interval of the task to reach a maximum in the third interval. Because all the aircraft had arrived on the sector by the third interval in the task, the increase in cognitive behavioural costs might be interpreted as the effect of traffic density increases. However, since costs then fall to a minimum in the fifth interval, this interpretation is implausible. Rather, the effect is due to an increase then decrease in monitoring and planning costs as the controller monitored the entry of each aircraft and generated a plan. Although the plan might later be modified, planning behaviours would predominate in the first part of the task. The plan would later be executed by the worksystem’s controlling behaviours, and indeed, Figure 7 indicates that the cognitive costs of controlling behaviours predominated over both monitoring and planning costs in the final interval of the task.

The simplifying assumptions adopted in this analysis of cognitive costs need to be independently validated before the technique could be exploited more generally. They can be seen as an example of the approximation which Norman associates with Cognitive Engineering, and which allows tractable formulations of complex problems. As an assessment of cognitive costs based on a model of cognitive behaviour, the analysis contrasts with current methods for assessment of mental workload applied to the ATM task, methods which include concurrent self-assessment by controllers on a four-point scale, and other assessments based on observations of the number and state of flight strips in use on the sector suite. Within the primary aim of this paper, the analysis exemplifies the incorporation of cognitive costs within the formulation of the cognitive design problem of ATM.

4. Using the problem formulation in cognitive design

Taken together, the models of the ATM domain and ATM worksystem provide a formulation of the cognitive design problem of Air Traffic Management. The domain model describes the work of air traffic management in terms of objects and relations, attributes and states, goals and task quality (goal achievement). The worksystem model describes the system that performs the work of air traffic management, in terms of structures, processes and the costs of work. The models have been illustrated with data captured from a simulated ATM system, wherein controller subjects performed the simulated management task with a computational traffic model.

In the case of the simulated system, the data indicate a worksystem which achieves an insufficient level of traffic management quality and incurs an undesirable level of cognitive cost. The assessment of ATMQ(fl) for all controllers indicated, for example, an inconsistent management of traffic safety (Figure 3). The assessment of ATMQ(int) for Controller 3 indicated, for example, a declining management of progress over the period of the task, and a sub-optimal trade-off between management of progress and of fuel use (Figure 4). Cognitive costs associated with specific categories of behaviour having a level significantly higher than average might also be considered undesirable, such as the category of generating a planned intervention (Figure 6). These data express the requirement for a revised worksystem able to achieve an acceptable trade-off (Norman, 1986) between task quality and cognitive costs.

Because it expresses this cognitive requirement, the problem formulation has the potential to contribute to the specification of requirements for worksystems. Cognitive requirements should be seen as separate from, and complementary to, software systems requirements. Both kinds of requirement must be met in the design of software-intensive worksystems. Such an approach would mark a shift from standard treatments of software systems development (Sommerville, 1996) wherein users’ tasks and capabilities are interpreted and re-expressed as ‘non-functional’ requirements of the user interface of the software system.

As well as supporting the specification of requirements, the formulation of the ATM cognitive design problem may also be expected to support the design of worksystems. We might, for example, consider how the problem formulation can contribute to the design of an electronic flight progress strip, earlier described as a focal issue in the development of a more effective ATM system. The problem formulation provides a network within which the flight progress strip can be understood in terms of what it is, and how it is used. First, the domain model allows analysis of the flight strip as a representation. For example, each paper flight strip represents a specific airtraffic event of an aircraft passing a particular beacon. It also represents for reference purposes the preceding and the following such events. The printed information on the strip describes the goal attributes of this airtraffic event in terms of desired height, speed and heading. The controller’s annotations of the strip describe both instructions issued and planned instructions. Hence the strip provides a representation of PASHT attributes of the given event. The strip does not represent event vectors, or their task attributes. The worksystem model tells us that the controller must construct the current, goal and planned event vectors, and their attributes, from the PASHT level representation on the strip. These examples indicate how the problem formulation can begin to be used to describe the flight strip and to reason about how the strip is used.

The problem formulation supports the process of evaluation, including the formative evaluation of specific design defects. For example, Controller 3 achieved a poor management of safety (QSM) over the period of the task (see Figure 4) due to three interventions made some 1250 seconds into the task. The domain model indicated that the first of the three instructions was for one aircraft to climb above and behind another aircraft, leading to a separation infringement. The worksystem model, constructed from the post-task protocol analysis, described the plans that led to this misjudgement.

To conclude at a discipline level, the problem formulation presented in this paper can be viewed more generally in terms of the claimed emergence of cognitive engineering. Dowell and Long (1998) have identified design exemplars as a critical entity in the discipline matrix of cognitive engineering. An exemplar is a problem formulation and its solution. Exemplars exemplify the use of cognitive engineering knowledge in solving problems of cognitive design, and they serve as cases for reasoning about new problems. Craft practices of cognitive design, by contrast, use demonstrators and ‘design classics’ as their exemplars, a role occupied, for example, by the Macintosh graphical user interface. The exemplars of cognitive engineering must instead be abstractions: they must be formulations of design problems and their solutions. The formulation in this paper of the ATM cognitive design problem is an attempt to better understand and advance the construction of exemplars for cognitive engineering.

Acknowledgement

This work was conducted at the Ergonomics and HCI Unit, University College London. I am indebted to Professor John Long for his critical contributions.

References

Checkland P., 1981. Systems thinking, systems practice. John Wiley and Sons: Chichester.

Debenard S., Vanderhaegen F. and Millot P., 1992. An experimental investigation of dynamic allocation of tasks between air traffic controller and AI system. In Proc. of 5th symposium ‘Analysis, design and evaluation of man machine systems’, The Hague, Holland, June 9-11.

Dowell J. and Long J.B., 1998. Conception of the cognitive engineering design problem. Ergonomics, 41, 2, pp 126-139.

Dowell J., Salter I. and Zekrullahi S., 1994. A domain analysis of air traffic management work can be used to rationalise interface design issues. In Cockton G., Draper S. and Weir G. (ed.s), People and Computers IX. CUP.

Field A., 1985. International Air Traffic Control. Pergamon: Oxford.

Harper R.R., Hughes J.A. and Shapiro D.Z., 1991. Harmonious working and CSCW: computer technology and air traffic control. In Bowers J.M. and Benford S.D. (ed.s), Studies in computer supported cooperative work: theory, practice and design. North Holland: Amsterdam.

Hayes-Roth B. and Hayes-Roth F., (1979). A cognitive model of planning. Cognitive Science, 3, pp 275-310.

Hollnagel E. and Woods D.D., (1983). Cognitive systems engineering: new wine in new bottles. International Journal of Man-Machine Studies, 18, pp 583-600.

Hopkin V.D., 1971. Conflicting criteria in evaluating air traffic control systems. Ergonomics, 14, 5, pp 557-564.

Howes A. and Young R.M., 1997. The role of cognitive architecture in modeling the user: Soar’s learning mechanism. Human Computer Interaction, 12, 4, pp 311-343.

Hutchins E., (1994). Cognition in the wild. MIT Press: Mass.

Kanafani A., 1986. The analysis of hazards and the hazards of analysis: reflections on air traffic safety management. Accident Analysis and Prevention, 18, 5, pp 403-416.

Keeney R.L., 1993, Value focussed thinking: a path to creative decision making.
Cambridge MA: Harvard University Press.

Long J.B. and Dowell J., 1989. Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering . In Sutcliffe A. and Macaulay L. (ed.s). People and Computers V. Cambridge University Press, Cambridge.

Marr D., (1982). Vision. W.H. Freeman and Co: New York.

Norman D.A., (1986). Cognitive engineering. In Norman D.A. and Draper S.W., (ed.s) User Centred System Design. Erlbaum: Hillsdale, NJ. pp 31-61.

Payne S.J., 1991. Interface problems and interface resources. In Carroll J.M. (ed.) Designing Interaction. Cambridge University Press: Cambridge.

Rasmussen J., (1986). Information processing and human-machine interaction: an approach to cognitive engineering. North Holland: New York.

Ratcliffe S., 1985. Automation in air traffic management. Journal of Navigation, 38, 3, pp 405-412.

Rouse W. B., (1980). Systems engineering models of human machine interaction. Elsevier: North Holland.

Shepard T., Dean T., Powley W. and Akl Y., (1991). A conflict prediction algorithm using intent information. In Proceedings of the Air Traffic Control Association Annual Conference, 1991.

Sommerville, I., 1996, Software Engineering. Addison Wesley: New York.

Sperandio J. C., 1978. The regulation of working methods as a function of workload
among air traffic controllers. Ergonomics, 21, 3, pp 195-202.

Vera A.H. and Simon H.A., 1993, Situated action: a symbolic interpretation.
Cognitive Science, 17, pp 7-48.

Whitfield D. and Jackson A., (1982). The air traffic controller’s picture as an example of a mental model. In Proceedings of the IFAC conference on analysis, design and evaluation of man-machine systems. Baden-Baden, Germany, 1982. HMSO: London.

Woods D.D. and Roth E.M., (1988). Cognitive systems engineering. In Helander M. (ed.) Handbook of Human Computer Interaction. Elsevier: North-Holland.

Appendix 1. Functions for computing ATMQ (fl): the air traffic management qualities for completed flights.

This rule means that if at a given airtraffic event, two aircraft are on a collision course and are less than a safe separation apart (300 seconds), then a penalty is immediately given, commensurate with a ‘near miss’ condition. When aircraft are on a collision course but a long way apart, safety is assessed as a function of closing time and projected time of complete flight. The form of function which this rule supplies is such that QSM is optimal when a value of zero is returned, meaning that at no time was the aircraft in separation conflict or on a course leading to a conflict no matter how far apart. The value increases negatively when conflict courses are instructed, and sharply so (as given by constant C) when those courses occur with less than a specified track and vertical separation.


The forms of function of the unit-less indices provided by these ratios are such that in each case, quality of management is optimal when a zero value is returned, that is to say, when actual state and goal state are coincident. QPM and QFM are greater than zero when respective actual states are better than goal states, and less than zero when they are worse (it is possible for actual values of fuel consumed or flight time to be less than their goal values). The difference is given by proportion with the difference that would have been the case if there had been no interventions by the ATM worksystem over the scenario. In this way, the added value of the worksystem’s interventions is indicated.

The values of QXM increase negatively from zero with the difference between actual exit height and the goal exit height. The difference is again given by proportion with the difference that would have been the case if no ATM worksystem interventions had been made: the aircraft would have left the sector at its entry height.

The values of QMM range from +0.3 when the actual number of manoeuvres is less than the goal number of manoeuvres, to zero when actual and goal are equal, and slowly increase negatively as the number of manoeuvres increases above the goal number.

The constants in the formulae for QPM, QXM, and QFM are included to reduce the ‘order effect’ distortions when small differences occur in denominator or numerator. These constants are determined by numerical iteration to ensure a negligible change in the general shape of the functions.
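The general ratio form described above (zero at the goal state, positive when the actual state betters the goal, scaled by the no-intervention difference, with a constant damping small denominators) can be sketched as follows. This is only one plausible reading of the verbal description: the exact functional form, argument names and constant are assumptions, since the paper's formulae themselves are not reproduced here.

```python
def quality_index(actual, goal, no_intervention, c=1.0):
    """Unit-less management quality index in the style of QPM/QFM:
    zero when actual equals goal; positive when actual is better
    (smaller) than goal; negative when worse; scaled by the difference
    that would have obtained with no worksystem interventions.
    The form and the constant c are assumptions, not the paper's formulae."""
    return (goal - actual) / (abs(goal - no_intervention) + c)
```

For fuel consumed or flight time, a smaller actual value than the goal yields a positive index, matching the stated sign convention; the constant c plays the role of the iteratively determined constants that suppress distortion when the denominator is small.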

Appendix 2. Functions for computing ATMQ (int): the air traffic management qualities for each controller intervention

ATMQ(int) can be determined, for any given intervention, from the relationship between the previous state, the state following the intervention, and the desired state. For QPM, QFM and QXM, these states are final states projected over the remainder of the flight, and assume no further intervention will be made.

where n = number of aircraft on the sector at the time of the intervention

QXM is computed from the final event within a vector, since it is a closure-type task attribute. Safety is a continuous attribute, and QSM for each intervention is therefore as already computed for ATMQ(fl), as given in Appendix 1.

 


Conception of the Cognitive Engineering design problem

John Dowell
Centre for HCI Design, City University, Northampton Square, London. EC1V 0HB

John Long
Ergonomics and HCI Unit, University College London, 26 Bedford Way, London. WC1H 0AP, UK.

Cognitive design, as the design of cognitive work and cognitive tools, is predominantly a craft practice which currently depends on the experience and insight of the designer. However the emergence of a discipline of Cognitive Engineering promises a more effective alternative practice, one which turns on the prescription of solutions to cognitive design problems. In this paper, we first examine the requirements for advancing Cognitive Engineering as a discipline. In particular, we identify the need for a conception which would provide the concepts necessary for explicitly formulating cognitive design problems. A proposal for such a conception is then presented.

1. Discipline of Cognitive Engineering

1.1. Evolution of Cognitive Design

A recurrent assumption about technological progress is that it derives from, or is propelled by, the application of scientific theory. Design is seen principally as an activity which translates scientific theory into useful artifacts. As such, design does not possess its own knowledge, other than perhaps as the derivative of a purer scientific knowledge. Yet close examination (Layton, 1974; Vincenti, 1993) shows this view to be in contradiction of the facts. The more correct analysis suggests that technology disciplines acquire and develop their own knowledge which enables them to solve their design problems (Long and Dowell, 1996).

The analysis of “technology as knowledge” (Layton, 1974) recognises the variety of forms of technological knowledge, ranging from tacit ‘know how’ and ‘know what’, based on personal experience, to validated engineering principles. Consider the evolution of a new technology. New technologies invariably emerge from the “inspired tinkering” (Landes, 1969) of a few who see a direct route between innovation and exploitation. As an industry is established, ad hoc innovation is supplanted by more methodical practices through which the experience of prior problems is codified and re-used. Design is institutionalised as a craft discipline which supports the cumulation and sharing of techniques and lessons learnt. The knowledge accumulated is only marginally, or indirectly derivative of scientific theory. In the case of computing technology, for example, Shaw has observed: “Computer science has contributed some relevant theory but practice proceeds largely independently of this organised knowledge” (Shaw, 1990).

This same observation can be made of cognitive design, the activity of designing cognitive work and cognitive tools (including interactive computational tools). To date, the seminal successes in cognitive design have been principally the result of inspired innovation. The graphical user interface arose from the careful application of experience cast as design heuristics, for example, “Communicate through metaphors” (Johnson, Roberts, Verplank, Irby, Beard and Mackey, 1989). The spreadsheet is another example. More recent advances in “cognitive technologies”, such as those in groupware, dynamic visualisation techniques, and multimedia, are no different in arising essentially through craft practice based on innovation, experience and prior developments. Nevertheless, in the wake of these advances, a craft discipline has been established which supports the cumulation and sharing of knowledge of cognitive design.

However the history of technological disciplines also indicates that continued progress depends on the evolution of a corpus of validated theory to support them (Hoare, 1981; Shaw, 1990). Craft disciplines give way to engineering disciplines: personal experiential knowledge is replaced by design principles; ‘invent and test’ practices (that is to say, trial-and-error) are replaced by ‘specify then implement’ practices. Critically, design principles appear not to be acquired by translation of scientific theories. Rather, they are developed through the validation of knowledge about design problems and how to solve them.
The evolution of an engineering discipline is a visible requirement for progress in cognitive design. The requirement is apparent in at least three respects. First, cognitive design needs to improve its integration in systems development practices, and to ensure it has a greater influence in the early development life of products. Second, cognitive design needs to improve the reliability of its contributions to design, providing a greater assurance of the effectiveness of cognitive work and tools. Third, cognitive design needs to improve its learning process so that knowledge accumulated in successful designs can be made available to support solutions to new design problems. For at least these reasons, cognitive design must advance towards an engineering discipline. This paper is addressed to the evolution of such a discipline, a discipline of Cognitive Engineering.

1.2. Emergence of Cognitive Engineering

The idea of a discipline of Cognitive Engineering has been advocated consistently for more than a decade (Hollnagel and Woods, 1983; Norman, 1986; Rasmussen, Pejtersen, and Goodstein, 1994; Woods, 1994). Norman has described Cognitive Engineering as a discipline which has yet to be constructed but whose promise is to transform cognitive design by supplying the “principles that get the design to a pretty good state the first time around (Norman, 1986)”. The aims of Cognitive Engineering are, first, “to understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles of design” and, second, “to devise systems that are pleasant to use”. The critical phenomena of Cognitive Engineering include tasks, user action, user conceptual models and system image. The critical methods of Cognitive Engineering include approximation, and treating design as a series of trade-offs including giving different priorities to design decisions (Norman, 1986).

Woods (1994) describes Cognitive Engineering as an approach to the interaction and cooperation of people and technology. Significantly, it is not to be taken as an applied Cognitive Science, seeking to apply computational theories of mind to the design of systems of cognitive work. Rather, Cognitive Engineering is challenged to develop its own theoretical base. Further, “Cognitive systems are distributed over multiple agents, both people and machines” (Woods, 1994) which cooperatively perform cognitive work. Hence the unit of analysis must be the joint and distributed cognitive system. The question which Cognitive Engineering addresses is how to maximise the overall performance of this joint system. Woods and Roth (1988) state that this question is not answered simply through amassing ever more powerful technology; they contrast such a technology-driven approach with a problem-driven approach wherein the “requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver”. Yet whether such an approach may be developed remains an open question: whether designers might be provided with the “concepts and techniques to determine what will be useful, or are we condemned to simply build what can be practically built and wait for the judgement of experience?” Woods and Roth re-state this as ultimately a question of whether “principle-driven design is possible”.

1.3. Discipline matrix of Cognitive Engineering

Cognitive Engineering is clearly an emerging discipline whose nucleus has been in research aiming to support cognitive design. The breadth and variety of its activity has continued to grow from its inception and the question now arises as to how the evolution of this discipline can be channelled and hastened. It is here that reference to Kuhn’s analysis of paradigms in the (physical and biological) sciences may offer guidance (Kuhn, 1970). Specifically, Kuhn identifies the principal elements of a ‘discipline matrix’ by which a discipline emerges and evolves. We might similarly interpret the necessary elements of the ‘discipline matrix’ of Cognitive Engineering.

The first element described by Kuhn is a “shared commitment to models” which enables a discipline to recognise its scope, or ontology. (Kuhn gives the example of a commitment to the model of heat conceived as the kinetic energy of the constituent parts of masses). For Cognitive Engineering, we may interpret this requirement as the need to acquire a conception of the nature and scope of cognitive design problems. Similarly, as Carroll and Campbell have argued, “the appropriate ontology, the right problems and the right ways of looking at them … have to be in place for hard science to develop (Carroll and Campbell, 1986)”. Features of a conception for Cognitive Engineering are already apparent, for example, in Woods’ assertion that the unit of analysis must be the distributed cognitive system.

A second element of the disciplinary matrix is “values” which guide the solution to problems. Kuhn gives the example of the importance which science attaches to prediction. Cognitive Engineering also needs to establish its values; one example is the value attached to design prescription: “(getting) the design to a pretty good state the first time around (Norman, 1986)”.

A third element is “symbolic generalisations” which function both as laws and definitions for solving problems. Kuhn gives the example of Ohm’s Law which specifies the lawful relationships between the concepts of resistance, current and voltage. For Cognitive Engineering, we may interpret this requirement as the need for engineering principles which express the relations between concepts and which enable design prescription. The need for engineering principles is one which has been recognised by both Norman and by Woods.
The final element of the disciplinary matrix is “exemplars” which are instances of problems and their solutions. Exemplars work by exemplifying the use of models, values and symbolic generalisations, and they support reasoning about similarity relations with new and unsolved problems. Kuhn gives the example of the application of Newton’s second law to predicting the motion of the simple pendulum. (Note, Newton’s second law embodies the concept of inertia established in the model of mechanics which commences the Principia). Cognitive Engineering too must acquire exemplars, but here those exemplars are instances of solutions to cognitive design problems, together with the design practices which produced those solutions. Such design exemplars must illustrate the application of the conception, values and design principles and must allow designers to view new cognitive design problems as similar to problems already solved.

1.4. Requirements for a conception

If this analysis of the discipline matrix of Cognitive Engineering is correct, then it is also apparent that the necessary elements substantially remain to be constructed. None are particularly apparent in the craft-like discipline of Human Factors which, for example, does not possess engineering principles, the heuristics it possesses being either ‘rules of thumb’ derived from experience or guidelines derived informally from psychological theories and findings.

This paper is concerned with the requirement for a conception of cognitive design. As later explained, we believe this is the element of the Cognitive Engineering matrix which can and should be established first. The current absence of a conception of cognitive design is well recognised; for example, Barnard and Harrison (1989) called for an “integrating framework …. that situates action in the context of work …. and relates system states to cognitive states”, a call which still remains unanswered. However it would be wrong to suggest that currently there is no available conception of cognitive design. Rather, there are many alternative and conflicting conceptions, most being informal and partial. Hollnagel (1991) was able to characterise three broad kinds of conception: the computer as ‘interlocutor’, with cognitive work seen as a form of conversation with cognitive tools; the “human centred” conception, wherein cognitive work is understood in terms of the user’s experience of the world and its mediation by tools; and the ‘systems understanding’ in which the worker and tools constitute a socio-technical system acting in a world. The last form of conception most clearly conforms with Woods’ requirements for Cognitive Engineering, as detailed above.

Previously we have proposed a conception of the cognitive design problem (Dowell and Long, 1989; see also, Long and Dowell, 1989) intended to contribute to the discipline matrix of Cognitive Engineering. That proposal is re-stated in revised form below.

2 Conception of the Cognitive Engineering design problem

Cognitive design concerns the problems of designing effective cognitive work, and the tools with which we perform that work. Our conception of the general problem of Cognitive Engineering is formulated over concepts of cognitive work and tools, and the need to prescribe effective solutions to the cognitive design problems they present. The concepts are highlighted on first reference. A glossary appears at the end of the paper.

Cognitive work is performed by worksystems which use knowledge to produce intended changes in environments, or domains. Worksystems consist of both human activity and the tools which are used in that activity (Mumford, 1995). Domains are organised around specific goals and contain both possibilities and constraints. For example, the domain of Air Traffic Management is defined by the goals of getting aircraft to their destinations safely, on time, and with a minimum of fuel use, etc. This domain has possibilities, such as vacant flight levels and the climbing abilities of different aircraft; it also has constraints, such as rules about the legal separation of aircraft. Cognitive work occurs when a particular worksystem uses knowledge to intentionally realise the possibilities in a particular domain to achieve goals. The air traffic controllers, for example, use their knowledge of individual flights, and of standard routes through some airspace, to instruct aircraft to maintain separations and best flight tracks. In this way, the controllers act intentionally to provide a desired level of safety and ‘expedition’ to all air traffic.

Cognitive tools support the use of knowledge in cognitive work. Those tools provide representations of domains, processes for transforming those representations, and a means of expressing those transformations in the domains (Simon, 1969). The radar and other devices in the Air Traffic Controller’s suite, for example, provide representations which enable the controller to reason about the state of the domain, such as aircraft proximities, and to transform those representations, including issuing instructions to pilots, so expressing the controller’s activity in the air traffic management domain. The controller’s tools embed the intention of their designers to help the controller achieve their goals. In spite of the way we may often casually describe what we are doing, it is never the case that our real intention is one of using a tool. Rather, our intention is to do ‘something’ with the tool. The difficulty we have, in describing exactly what that something is, stems from the fact that the domains in which we perform cognitive work are often virtual worlds, far removed from physical objects (for instance, computer-mediated foreign exchange dealing).

The worksystem clearly forms a dualism with its domain: it therefore makes no sense to consider one in isolation of the other (Neisser, 1987). If the worksystem is well adapted to its domain, it will reflect the goals, regularities and complexities in the domain (Simon, 1969). It follows that the fundamental unit of analysis in cognitive design must be the worksystem whose agents are joined by the common intention of performing work in the domain (see also Rasmussen and Vicente, 1990; Woods, 1994). Within the worksystem, human activity is said to be intentional, the behaviour of tools is said to be intended.
The following sections outline a conception of cognitive work informed by systems design theory (e.g., Simon, 1969; Checkland, 1981), ecological systems theory (e.g., Neisser, 1987), cognitive science (e.g., Winograd and Flores, 1986) and Cognitive Engineering theory (e.g., Woods, 1994). It provides a related set of concepts of the worksystem as a system of human and device agents which use knowledge to perform work in a domain.

2.1 Domains of cognitive work

The domains of cognitive work are abstractions of the ‘real world’ which describe the goals, possibilities and constraints of the environment of the worksystem. Beltracchi (1987; see Rasmussen and Vicente, 1990), for example, used the Rankine Cycle to describe the environment of process controllers. However, for most domains, such formal models and theories are not available, even for ubiquitous domains such as document production. Further, such theories do not provide explicit or complete abstractions of the goals, possibilities and constraints for the decision-making worksystem. For example, the Rankine cycle leaves implicit the goal of optimising energy production (and the sub-goals of cycle efficiency, etc), and is incomplete with regard to the variables of the process (e.g., compressor pressure) which might be modified. The conception must therefore provide concepts for expressing the goals, possibilities and constraints for particular instances of domains of cognitive work.

Domains can be conceptualised in terms of objects identified by their attributes. Attributes emerge at different levels within a hierarchy of complexity within which they are related (energy cycle efficiency and feedwater temperature, for one example, or the safety of a set of air traffic and the separations of individual aircraft for another example). Attributes have states (or values) and may exhibit the affordance for change. Desirable states of attributes we recognise as goals, for instance, specific separations between aircraft, and specific levels of safety of air traffic being managed. Work occurs when the attribute states of objects are changed by the behaviours of a worksystem whose intention it is to achieve goals. However, work does not always result in all goals being achieved all of the time, and the difference between the goals and the actual state changes achieved is expressed as task quality.
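These concepts (objects identified by attributes, attribute states, goals as desired states, and task quality as the discrepancy between goal and achieved states) might be rendered as data structures along the following lines; all names and the simple summed measure are illustrative assumptions rather than the paper's notation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical rendering of domain objects and attributes; the field
# names and the aggregate measure are illustrative assumptions.

@dataclass
class Attribute:
    name: str
    state: float   # actual value
    goal: float    # desired value

@dataclass
class DomainObject:
    name: str
    attributes: List[Attribute]

def task_quality_deficit(obj: DomainObject) -> float:
    """One simple possible measure: the summed discrepancy between goal
    and achieved attribute states; zero when all goals are met."""
    return sum(abs(a.goal - a.state) for a in obj.attributes)

# A flight in the air traffic domain: separation goal met, exit height not.
flight = DomainObject("flight", [
    Attribute("separation_nm", state=5.0, goal=5.0),
    Attribute("exit_height_ft", state=33000.0, goal=35000.0),
])
```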

The worksystem has a boundary enclosing all user and device behaviours whose intention is to achieve the same goals in a given domain. Critically, it is only by defining the domain that the boundary of the worksystem can be established: users may exhibit many contiguous behaviours, and only by specifying the domain of concern, might the boundary of the worksystem enclosing all relevant behaviours be correctly identified. Hence, the boundary may enclose the behaviours of more than one device as, for example, when a user is working simultaneously with electronic mail and bibliographic services provided over a network. By the same token, the worksystem boundary may also include more than one user as, for example, in the case of the air traffic controller and the control chief making decisions with the same radar displays.

The centrality of the task domain has not always been accepted in cognitive design research, with significant theoretical consequences. Consider the GOMS model (Card, Moran and Newell, 1983). Within this model, goals refer to states of “the user’s cognitive structure” referenced to the user interface; actions (methods) are lower level decompositions of goals. Hence a seminal theory in cognitive design leaves us unable to distinguish in kind between our goals and the behaviours by which we seek to achieve those goals.

2.2 Worksystem as cognitive structures and behaviours

Worksystems have both structures and behaviours. The structures of the worksystem are its component capabilities which, through coupling with the domain, give rise to behaviour. Behaviours are the activation (see Just and Carpenter, 1992; also Hoc, 1990) of structures and ultimately produce the changes in a domain which we recognise as work being performed.

Consider the structures and behaviours of a text editor. A text editor is a computer-based tool for writing, reading and storing text. Text is a domain object and is both real and virtual. At a low level of description, usually invisible to the user, text appears as data files stored in a distinct code. At a higher level, text consists of information and knowledge stored in a format which the user may choose. Text objects have attributes, such as character fonts at one extreme and the quality of prose at the other. Generally, the domain is represented by the text editor only partially and only at low and intermediate levels. The program is a set of structures, including functions, such as formatting commands, as well as menus, icons and windows. In simple text editors, the program is a fixed invariant structure; more sophisticated editors allow the user to modify the structure – users can choose which functions are included in the program, which are presented on the menus, and the parameters of the processes they specify. These structures are activated in the behaviours of the text editor when text is created, revised and stored. Higher level editor behaviours would include browsing and creating tables of contents through interaction with the user. With these behaviours, text which has themes, style and grammar is created by users.

As this example indicates, structures consist of representations (e.g., for storing text) and processes (e.g., text editing processes). Behaviours (e.g., creating and editing text) are exhibited through activating structures when processes (e.g., functions) transform representations (e.g., text). Behaviours are the processing of representations.
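This relation between structures and behaviours can be sketched minimally: a structure comprises a representation (here, the text string) and a process (here, a hypothetical editing function), and the behaviour is the activation of that structure, the process transforming the representation.

```python
# Minimal sketch: a 'structure' comprises a representation (the text
# string) and a process (the function); the 'behaviour' is the
# activation of the structure, i.e. the process transforming the
# representation. The editing operation chosen is purely illustrative.

def capitalise_sentences(text: str) -> str:
    """An editing process: transform the representation so that each
    sentence begins with a capital letter."""
    parts = text.split(". ")
    return ". ".join(p[:1].upper() + p[1:] for p in parts)

# Behaviour: the process applied to a representation of the text.
draft = "cognitive work uses knowledge. tools support that use."
revised = capitalise_sentences(draft)
```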

2.3 Cognitive structures and behaviours of the user

Users too can be conceptualised in terms of structures and behaviours by limiting our concern for the person to a cognitive agent performing work. The user’s cognitive behaviours are the processing of representations. So, perception is a process where a representation of the domain, often mediated by tools, is created. Reasoning is a process where one representation is transformed into another representation. Each successive transformation is accomplished by a process that operates on associated representations. The user’s cognitive behaviours are both abstract (i.e., mental) and physical. Mental behaviours include perceiving, knowing, reasoning and remembering; physical behaviours include looking and acting. So, the physical behaviour of looking might entail the mental behaviours of reasoning and remembering, that is, why and where to look. These behaviours are related whereby mental behaviours generally determine, and are expressed by, the user’s physical behaviours. A user similarly possesses cognitive structures, an architecture of processes and representations containing knowledge of the domain and of the worksystem, including the tools and other agents with which the user interacts.

Propositions, schema, mental models and images are all proposals for the morphology of representations of knowledge. The organisation of the memory system, associative and inductive mechanisms of learning, and constraints on how information can be represented (such as innate grammatical principles) have all been proposed as aspects of cognition and its structural substrates.

However, such theories established in Cognitive Science may not, in fact, have any direct relevance for the user models needed for designing cognitive work. To assume otherwise would be to conform with the view of (cognitive) design as an applied (cognitive) science, a view which we rejected at the beginning of this paper. Simply, the computational theory of mind is not concerned with how the symbols manipulated by cognition have a meaning external to the processes of manipulation, and therefore how they are grounded in the goals, constraints and possibilities of task domains (Hutchins, 1994; McShane, Dockrell and Wells, 1992). As a consequence, it is very likely the case that many theories presented by Cognitive Science to explain the manipulation of symbols cannot themselves be grounded in particular domains (see also Vicente,1990).

It is rather the case that Cognitive Engineering must develop its own models of the user as cognitive agent. In this development, the ecology of user cognition with the domain must be a fundamental assumption, with models of user cognition reflecting the nature of the domains in which cognitive work is performed. Such an assumption underpins the validity of models in Cognitive Engineering: “If we do not have a good account of the information that perceivers are actually using, our hypothetical models of their information processing are almost sure to be wrong. If we do have such an account, however, such models may turn out to be almost unnecessary” (Neisser, 1987).

2.4 Worksystem as hierarchy

The behaviours of the worksystem emerge at hierarchical levels where each level subsumes the underlying levels. For example, searching a bibliographic database for a report subsumes formulating a database query and perhaps iteratively revising the query on the basis of the results obtained. These behaviours themselves subsume recalling features of the report being sought and interpreting the organisation of the database being accessed.
The hierarchy of behaviours ultimately can be divided into abstract and physical levels. Abstract behaviours are generally the extraction, storage, transformation and communication of information. They represent and process information concerning: domain objects and their attributes, attribute relations and attribute states, and goals. Physical behaviours express abstract behaviours through action. Because they support behaviours at many levels, structures must also exist at commensurate levels.
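The subsumption of levels described here can be pictured as a nesting, using the bibliographic-search example above; the dictionary encoding and the labels are illustrative assumptions.

```python
# Each behaviour subsumes the behaviours nested beneath it; the keys
# are illustrative labels for the bibliographic-search example.

search_for_report = {
    "formulate_database_query": {
        "recall_features_of_report": {},
        "interpret_database_organisation": {},
    },
    "revise_query_from_results": {},
}

def levels_subsumed(behaviour: dict) -> int:
    """Number of hierarchical levels a behaviour subsumes."""
    if not behaviour:
        return 0
    return 1 + max(levels_subsumed(b) for b in behaviour.values())
```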

The hierarchy of worksystem behaviours reflects the hierarchy of complexity in the domain. The worksystem must therefore have behaviours at different levels of abstraction equivalent to the levels at which goals are identified in the domain. Hence a complete description of the behaviours of an authoring worksystem, for example, must describe not only the keystroke level behaviours relating to the goal of manipulating characters on a page, but it must also describe the abstract behaviours of composition which relate to the goals of creating prose intended to convey meaning. Traditional task analyses describe normative task performance in terms of temporal sequences of overt user behaviours. Such descriptions cannot capture the variability in the tasks of users who work in complex, open domains. Here, user behaviour will be strongly determined by the initial conditions in the domain and by disturbances from external sources (Vicente, 1990). In complex domains, the same task can be performed with the same degree of effectiveness in quite different ways. Traditional task analyses cannot explain the ‘intelligence’ in behaviour because they do not have recourse to a description of the abstract and mental behaviours which are expressed in physical behaviours.

The hierarchy of worksystem behaviours is distributed over the agents and tools of the worksystem (i.e., its structures). It is definitional of systems (being ‘greater than the sum of their parts’) that they are composed from sub-systems where “the several components of any complex system will perform particular sub-functions that contribute to the overall function” (Simon, 1969). The functional relations, or “mutual influence” (Ashby, 1956), between the agents and between the agents and tools of the worksystem are interactions between behaviours. These interactions fundamentally determine the overall worksystem behaviours, rather than the behaviours of individual agents and tools alone. The user interface is the combination of structures of agents and tools supporting specific interacting behaviours (see Card, Moran and Newell, 1983). Norman (1986) explains that the technological structures of the user interface are changed through design, whilst the user’s cognitive structures of the user interface are changed through experience and training.

2.5 Costs of cognitive work

Work performed by the worksystem will always incur resource costs which may be structural or behavioural. Structural costs will always occur in providing the structures of the worksystem. Behavioural costs will always occur in using structures to perform work.

Human structural costs are always incurred in learning to perform cognitive work and to use cognitive tools. They are the costs of developing and maintaining the user’s knowledge and cognitive skills through education, training and gaining experience. The notion of learnability refers generally to the level of structural resource costs demanded of the user.
Human behavioural costs are always incurred in performing cognitive work. They are both physical and mental. Physical costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and scrutinising a monitor; they may be generally expressed as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be generally recognised as mental workload. Behavioural cognitive costs are evidenced in fatigue, stress and frustration. The notion of usability refers generally to the level of behavioural resource costs demanded of the user.
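The cost taxonomy above (structural costs versus physical and mental behavioural costs) can be summarised in a small bookkeeping structure; the field names and any numeric units (say, normalised effort scores) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical bookkeeping of user resource costs; units (e.g.
# normalised effort scores) are assumptions for illustration.

@dataclass
class UserResourceCosts:
    structural: float   # learning and training: relates to 'learnability'
    physical: float     # keystrokes, scanning a monitor, etc.
    mental: float       # knowing, reasoning, deciding

    @property
    def behavioural(self) -> float:
        """Total behavioural costs: relates to 'usability'."""
        return self.physical + self.mental
```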

2.6 Worksystem performance

The performance of a worksystem relates to its achievement of goals, expressed as task quality, and to the resource costs expended. Critically then, the behaviour of the worksystem is distinguished from its performance, in the same way that ‘how the system does what it does’ can be distinguished from ‘how well it does it’ (see also: Rouse, 1981; Dasgupta, 1991).

This concept of performance ultimately supports the evaluation of worksystems. For example, by relating task quality to resource costs we are able to distinguish between two different designs of cognitive tool which, whilst enabling the same goals to be achieved, demand different levels of the user’s resource costs. The different performances of the two worksystems which embody the tools would therefore be discriminated. Similarly, consider the implications of this concept of performance for the concern with user error: it is not enough for user behaviours simply to be error-free; although eliminating errorful behaviours may contribute to the best performance possible, that performance may still be less than desired. On the other hand, although user behaviours may be errorful, a worksystem may still achieve a desirable performance. Optimal human behaviour uses a minimum of resource costs in achieving goals. However, optimality can only be determined categorically against worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it.

This concept of performance allows us to recognise an economics of performance. Within this economy, structural and behavioural costs may be traded-off both within and between the agents of the worksystem, and those costs may be traded off also with task quality. Users may invest structural costs in training the cognitive structures needed to perform a specific task, with a consequent reduction in the behavioural costs of performing that task. Users may expend additional behavioural costs in their work to compensate for the reduced structural costs invested in the under-development of their cognitive tools.
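The trade-offs of this economy can be sketched as follows: two designs that achieve the same goals but demand different resource costs are discriminated on performance. The additive combination of task-quality deficit and resource costs is an assumption for illustration, not the paper's formula; lower values mean better performance here.

```python
# Illustrative performance comparison: the additive combination of
# task-quality deficit and resource costs is an assumption; lower
# values mean better performance.

def performance(task_quality_deficit: float, resource_costs: float) -> float:
    """Relate goal achievement (deficit from goals) to the costs
    expended in achieving it."""
    return task_quality_deficit + resource_costs

# Two tool designs enabling the same goals to be achieved (zero deficit)
# but demanding different user resource costs:
design_a = performance(task_quality_deficit=0.0, resource_costs=5.0)
design_b = performance(task_quality_deficit=0.0, resource_costs=8.0)
```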

The economics of worksystem performance are illustrated by Sperandio’s observation of air traffic controllers at Orly control tower (Sperandio 1978). Sperandio observed that as the amount of traffic increased, the controllers would switch control strategies in response to increasing workload. Rather than treating each aircraft separately, the controllers would treat a number of following aircraft as a chain on a common route. This strategy would ensure that safety for each aircraft was still maintained, but sacrificed consideration of time keeping, fuel usage, and other expedition goals. This observation can be understood as the controllers modifying their (generic) behaviours in response to the state of the domain as traffic increases. In effect, the controllers are trading-off their resource costs, that is, limiting their workload, against less critical aspects of task quality. The global effect of modifying their behaviour is a qualitative change in worksystem performance. Recent work in modelling air traffic management (Lemoine and Hoc, 1996) aims to dynamically re-distribute cognitive work between controllers and tools in order to stabilise task quality and controller resource costs, and therefore to stabilise worksystem performance.
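Sperandio’s observed regulation can be caricatured as a threshold rule: beyond some level of traffic, the controller switches to the chaining strategy, which caps resource costs while sacrificing expedition quality. The threshold, the quality figures and the function itself are invented here for illustration; they are not drawn from Sperandio’s study.

```python
def choose_strategy(traffic_count: int, chain_threshold: int = 8) -> dict:
    """Caricature of workload regulation: above a hypothetical traffic
    threshold, switch from per-aircraft control to chaining following
    aircraft on a common route, trading expedition quality for a
    bounded workload. All figures are invented for illustration."""
    if traffic_count <= chain_threshold:
        return {"strategy": "per-aircraft", "safety": 1.0,
                "expedition": 1.0, "workload": float(traffic_count)}
    return {"strategy": "chaining", "safety": 1.0,   # safety maintained
            "expedition": 0.6,                       # expedition sacrificed
            "workload": float(chain_threshold)}      # workload capped

light = choose_strategy(5)
heavy = choose_strategy(14)
assert heavy["safety"] == light["safety"]          # safety still maintained
assert heavy["expedition"] < light["expedition"]   # task quality traded off
assert heavy["workload"] == 8.0                    # resource costs limited
```

The qualitative change in worksystem performance appears here as the discontinuity at the threshold: workload ceases to grow with traffic, and expedition quality drops.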

2.7 Cognitive design problems

Engineering disciplines apply validated models and principles to prescribe solutions to problems of design. How then should we conceive of the design problems which Cognitive Engineering is expected to solve? It is commonplace for cognitive design to be described as a ‘problem solving activity’, but such descriptions invariably fail to say what might be the nature and form of the problem being solved. Where such reference is made, it is usually in domain specific terms, and a remarkable variety of cognitive design problems is currently presented, ranging from the design of teaching software for schools to the design of remote surgery. A recent exception can be found in Newman and Lamming (1995). Yet the ability to carry knowledge validly from one problem to the next requires an ability to abstract what is general across those problems. We presume that instances of cognitive design problems each embody some general form of design problem and further, that they are capable of explicit formulation. The following proposes that general form.

Cognitive work can be conceptualised in terms of a worksystem and a domain and their respective concepts. In performing work, the worksystem achieves goals by transformations in the domain, and in doing so it incurs resource costs (Figure 1).

The aim of design is therefore ‘to specify worksystems which achieve a desired level of performance in given domains’.

Figure 1. Worksystem and a domain

More formally, we can express the general design problem of Cognitive Engineering as follows:

Specify then implement the cognitive structures and behaviours of a worksystem (W) which performs work in a given domain (D) to a desired level of performance (P), expressed in terms of task quality (Σ Q) and cognitive user costs (Σ KU).

An example of such a cognitive design problem formulated in these terms might refer to: the requirement for specifying then implementing the representations and processes constituting the knowledge of an air traffic management worksystem which is required to manage air traffic of a given density with a specified level of safety and expedition and within an acceptable level of costs to the controllers. This problem expression would of necessity need to be supported by related models of the air traffic management worksystem and domain (see Dowell, in prep).
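For illustration only, the elements of the general design problem might be organised as a simple data structure. The field names below are our own glosses on (W), (D), (P), Σ Q and Σ KU, and the air traffic figures are invented; none of this is part of the conception itself.

```python
from dataclasses import dataclass

@dataclass
class CognitiveDesignProblem:
    """Illustrative gloss on the general design problem: specify then
    implement a worksystem (W) performing work in a given domain (D)
    to a desired performance (P), where P is expressed as task quality
    (sum Q) and cognitive user costs (sum KU)."""
    domain: str                   # (D): the given domain of work
    desired_task_quality: float   # contributes to (P): sum Q
    acceptable_user_costs: float  # contributes to (P): sum KU

    def satisfied_by(self, task_quality: float, user_costs: float) -> bool:
        """A candidate worksystem (W) solves the problem if it achieves
        the desired task quality within the acceptable user costs."""
        return (task_quality >= self.desired_task_quality
                and user_costs <= self.acceptable_user_costs)

# Hypothetical air traffic management instance, echoing the example above:
atm = CognitiveDesignProblem(
    domain="air traffic of a given density",
    desired_task_quality=0.95,  # safety and expedition, collapsed to one figure
    acceptable_user_costs=0.6,  # controllers' cognitive costs, normalised
)

assert atm.satisfied_by(task_quality=0.97, user_costs=0.5)
assert not atm.satisfied_by(task_quality=0.97, user_costs=0.8)
```

The point of the sketch is only that the problem, once expressed in these terms, has a checkable form: a candidate worksystem either meets the performance requirement or it does not.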

By its reference to design practice as ‘specify then implement’, this expression of the general cognitive design problem is equivalent to the design problems of other engineering disciplines; it contrasts with the trial and error practices of craft design. However, the relationship between the general cognitive design problem and the design problems addressed by other engineering disciplines associated with the design of cognitive tools, such as Software Engineering and ‘Hardware Engineering’, is not explicitly specified. Nevertheless, it is implied that those other engineering disciplines address the design of the internal behaviours and structures of cognitive tools embedded in the worksystem, with concern for the resource costs of those tools.

3. Prospect of Cognitive Engineering principles

The deficiencies of current cognitive design practices have prompted our investigation of Cognitive Engineering as an alternative form of discipline. Our analysis has focused on the disciplinary matrix of Cognitive Engineering consisting of a conception, values, design principles and exemplars. The analysis assumes that Cognitive Engineering can make good ‘the deficiencies’. First, the integration of cognitive design in systems development would be improved because Cognitive Engineering principles would enable the formulation of cognitive design problems and the early prescription of design solutions. Second, the efficacy of cognitive design would be improved because Cognitive Engineering principles would provide the guarantee so lacking in cognitive design which relies on experiential knowledge. Third, the efficiency of cognitive design would be improved through design exemplars related to principles supporting the re-use of knowledge. Fourth, the progress of cognitive design as a discipline would be improved through the cumulation of knowledge in the form of conception, design principles and exemplars.

However, we observe that these elements of the disciplinary matrix required by Cognitive Engineering remain to be established. And since not all are likely to be established at the same time, the question arises as to which might be constructed first. A conception for Cognitive Engineering is a pre-requisite for formulating engineering principles. It supplies the concepts and their relations which express the general problem of cognitive design and which would be embodied in Cognitive Engineering principles.

To this end, we have proposed a conception for Cognitive Engineering in this paper, one which we contend is appropriate for supporting the formulation of Cognitive Engineering principles. The conception for Cognitive Engineering is a broad view of the Cognitive Engineering general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem.

To conclude, it might be claimed that the craft nature of current cognitive design practices is dictated by the nature of the problem they address. In other words, the indeterminism and complexity of the problem of designing cognitive systems (the softness of the problem) might be claimed to preclude the application of prescriptive knowledge. We believe this claim fails to appreciate that the current absence of prescriptive design principles may rather be symptomatic of the early stage of the discipline’s development. The softness of the problem needs to be independently established. Cognitive design problems are, to some extent, hard: human behaviour in cognitive work is clearly to some useful degree deterministic, and sufficiently so for the design, to some useful degree, of interactive worksystems.

The extent to which Cognitive Engineering principles might be realisable in practice remains to be seen. It is not supposed that the development of effective systems will never require craft skills in some form, and engineering principles are not incompatible with craft knowledge. Yet the potential of Cognitive Engineering principles for the effectiveness of the discipline demands serious consideration. The conception presented in this paper is intended to contribute towards the process of formulating such principles.

Acknowledgement

We acknowledge the critical contributions to this work of our colleagues, past and present, at University College London. John Dowell and John Long hold a research grant in Cognitive Engineering from the Economic and Social Research Council.

References

Ashby W. R., (1956). An introduction to cybernetics. Methuen: London.

Barnard P. and Harrison M., (1989). Integrating cognitive and system models in
human computer interaction. In: Sutcliffe A. and Macaulay L. (ed.s). People and Computers V. Proceedings of the Fifth Conference of the BCS HCI SIG, Nottingham 5-8 September 1989. Cambridge University Press, Cambridge.

Beltracchi L., (1987). A direct manipulation interface for water-based rankine cycle heat engines, IEEE transactions on systems, man and cybernetics, SMC-17, 478-487.

Card, S. K., Moran, T., Newell, A., (1983). The Psychology of Human Computer Interaction. Erlbaum: New Jersey.

Carroll J.M., and Campbell R. L., 1986, Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, 2, 227-249.

Checkland P., (1981). Systems thinking, systems practice. John Wiley and Sons: Chichester.

Dasgupta, S., (1991). Design theory and computer science. Cambridge University Press: Cambridge.

Dowell J. and Long J.B., (1989). Towards a conception for an engineering discipline of human factors. Ergonomics, 32, 11, pp 1513-1535.

Dowell J., (in prep). The design problem of Air Traffic Management as an exemplar for Cognitive Engineering.

Hoare C.A.R. , 1981. Professionalism. Computer Bulletin, September 1981.

Hoc J.M., (1990). Planning and understanding: an introduction. In Falzon P. (ed.).
Cognitive Ergonomics: Understanding learning and designing human computer
interaction. Academic Press: London.

Hollnagel E. and Woods D.D., (1983). Cognitive systems engineering: new wine in
new bottles. International Journal of Man-Machine Studies, 18, pp 583-600.

Hollnagel E., (1991). The phenotype of erroneous actions: implications for HCI
design. In Alty J. and Weir G. (ed.s), Human-computer interaction and complex
systems. Academic Press: London.

Hutchins E., (1994). Cognition in the wild. MIT Press: Cambridge, Mass.

Johnson J., Roberts T., Verplank W., Irby C., Beard M. and Mackey K., (1989). The
Xerox Star: a retrospective. IEEE Computer, Sept, 1989, pp 11-29.

Just M.A. and Carpenter P.A., 1992 A capacity theory of comprehension: individual
differences in working memory, Psychological Review, 99, 1, 122-149.

Kuhn T.S., (1970). The structure of scientific revolutions. 2nd edition. University of
Chicago press: Chicago.

Landes D.S., (1969). The unbound prometheus. Cambridge University Press: Cambridge.

Layton E., (1974). Technology as knowledge. Technology and Culture, 15, pp 31-41.

Lemoine M.P. and Hoc J.M., (1996). Multi-level human machine cooperation in air traffic control: an experimental evaluation. In Canas J., Green T.R.G. and Warren C.P. (ed.s), Proceedings of ECCE-8, the Eighth European Conference on Cognitive Ergonomics. Granada, 8-12 September 1996.

Lenorovitz, D.R. and Phillips, M.D., (1987). Human factors requirements engineering for air traffic control systems. In Salvendy, G. (ed.) Handbook of Human Factors. Wiley, London. 1987.

Long J.B. and Dowell J., (1989). Conceptions of the discipline of HCI: craft, applied science, and engineering. In: Sutcliffe A. and Macaulay L. (ed.s), People and Computers V. Cambridge University Press: Cambridge.

Long J.B. and Dowell J., (1996). Cognitive engineering human-computer interactions. The Psychologist, 9, pp 313-317.

McShane J., Dockrell J. and Wells A., (1992). Psychology and cognitive science. The Psychologist, 5, pp 252-255.

Mumford E., (1995). Effective requirements analysis and systems design: the ETHICS method. Macmillan.

Neisser U., (1987). From direct perception to conceptual structure. In Neisser U. (ed.), Concepts and conceptual development: ecological and intellectual factors in categorisation. Cambridge University Press: Cambridge.

Newman W. and Lamming M., (1995). Interactive System Design. Addison-Wesley.

Norman D.A., (1986). Cognitive engineering. In Norman D.A. and Draper S.W.,
(ed.s), User Centred System Design. Erlbaum: Hillsdale, NJ.

Phillips M.D. and Melville B.E., (1988). Analyzing controller tasks to define air
traffic control system automation requirements. In Proceedings of the conference on human error avoidance techniques, Society of Automotive Engineers. Warrendale: Penn.. pp 37-44.

Phillips M.D. and Tischer K., (1984). Operations concept formulation for next generation air traffic control systems. In Shackel B. (ed.), Interact ’84, Proceedings of the first IFIP conference on Human-Computer Interaction. Elsevier Science B.V.: Amsterdam. pp 895-900.

Rasmussen J. and Vicente K., (1990). Ecological interfaces: a technological imperative in high tech systems? International Journal of Human Computer Interaction, 2 (2) pp 93-111.

Rasmussen J., Pejtersen A., and Goodstein L., (1994) Cognitive Systems Engineering. New York: John Wiley and Sons.

Rouse W.B., (1980). Systems engineering models of human machine interaction. Elsevier: North Holland.

Shaw M., (1990) Prospects for an engineering discipline of software. IEEE Software, November 1990.

Simon H.A., (1969). The sciences of the artificial. MIT Press: Cambridge Mass..

Sperandio, J.C., (1978). The regulation of working methods as a function of
workload among air traffic controllers. Ergonomics, 21, 3, pp 195-202.

Vicente K., (1990). A few implications of an ecological approach to human factors. Human Factors Society Bulletin, 33, 11, pp 1-4.

Vincenti W.G., (1993). What engineers know and how they know it. Johns Hopkins University Press: Baltimore.

Winograd T. and Flores F., (1986). Understanding computers and cognition. Addison Wesley: Mass..

Woods D.D. and Roth E.M., (1988). Cognitive systems engineering. In Helander M. (ed.) Handbook of Human Computer Interaction. Elsevier: North-Holland.

Woods D.D., (1994). Observations from studying cognitive systems in context. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (Keynote address).
