
Formulating the cognitive design problem of Air Traffic Management

John Dowell
Department of Computer Science, University College London

Evolutionary approaches to cognitive design in the air traffic management (ATM) system can be attributed with a history of delayed developments. This issue is well illustrated in the case of the flight progress strip where attempts to design a computer-based system to replace the paper strip have consistently been met with rejection. An alternative approach to cognitive design of air traffic management is needed and this paper proposes an approach centred on the formulation of cognitive design problems. The paper gives an account of how a cognitive design problem was formulated for a simulated ATM task performed by controller subjects in the laboratory. The problem is formulated in terms of two complementary models. First, a model of the ATM domain describes the cognitive task environment of managing the simulated air traffic. Second, a model of the ATM worksystem describes the abstracted cognitive behaviours of the controllers and their tools in performing the traffic management task. Taken together, the models provide a statement of worksystem performance, and express the cognitive design problem for the simulated system. The use of the problem formulation in supporting cognitive design, including the design of computer-based flight strips, is discussed.

1. Cognitive design problems

1.1. Crafting the controller’s electronic flight strip

Continued exceptional growth in the volume of air traffic has made visible some rather basic structural limitations in the system which manages that traffic. Most clear is that additional increases in volume can only be achieved by sacrificing the ‘expedition’ of the traffic, if safety is to be ensured. As traffic volumes increase, the complexity of the traffic management problem rises disproportionately, with the result that flight paths are no longer optimised with regard to timeliness, directness, fuel efficiency, and other expedition factors; only safety remains constant. Sperandio (1978) has described how approach controllers at Orly airport switch strategies in order to sacrifice traffic expedition and so preserve acceptable levels of workload. Simply, these controllers switch to treating aircraft as groups (or more precisely, as ‘chains’) rather than as separate aircraft to be individually optimised.
For the medium term, there is no ambition of removing the controller from their central role in the ATM system (Ratcliffe, 1985). Therefore, substantially increasing the capacity of the system without qualitative losses in traffic management means giving controllers better tools to assist in their decision-making and to relieve their workload (CAA, 1990). Yet curiously, such tools have not appeared in the operational system at large, in spite of sustained efforts made to produce them.

Take the case of the controller’s flight progress strip. The strip board containing columns of individual paper strips is the tool which controllers use for planning and as such occupies a more central role in their task than even the radar screen (Whitfield and Jackson, 1982). Development of an electronic strip has been a goal for some two decades (Field, 1985), for the simple reason that until the technical sub-system components have access to the controller’s planning, they cannot begin to assist in that planning. Even basic facilities such as conflict detection cannot be provided unless the controller’s plans can be accessed and shared (Shepard, Dean, Powley, and Akl, 1991): automatic detection is of limited value to the controller unless it is able to operate up to the extremes of the controller’s ‘planning horizon’ and to take account of the controller’s intended future instructions.

Attempts to introduce electronic flight strips, including conflict detection facilities, have often met with rejection by controllers. Rejection has usually been on the grounds that designs either mis-represent the controller’s task, or that the benefits they might offer do not offset the increases in cognitive cost entailed in their use. The consistency in this pattern of rejection is of interest since it implicates the approach taken to development.

The approach taken in the United Kingdom has been to develop an electronic system which mimics the structures and behaviours of the paper system. This approach has entailed studies of the technical properties of flight strips, and also their social context of use (Harper, Hughes & Shapiro, 1991), followed by the rapid prototyping of electronic strip designs. But electronic flight strip systems cannot hope to match the physical facility of paper strips for annotation and manipulation, particularly within the work practices of the sector team. Rather, electronic flight strips might only be accepted if their inferior physical properties are compensated by providing innovative functions for actively sharing in the higher level cognitive tasks of traffic management. By actively sharing in tasks such as flight profiling, inter-sector coordinations, etc, electronic flight strips might offset the controller’s cognitive costs at higher levels, resulting in an overall reduction in cognitive cost.

These difficulties in the development of the electronic flight strip are symptomatic of the general approach taken to cognitive design within the ATM system. It is an approach which emphasises the value of incremental and evolutionary change. But it is also one which relies, not so much on ‘what is known’ about the system, as on what is ‘tried and tested’. This craft-like approach (Long and Dowell, 1989) has resulted in effective stalemate in respect of the controller’s task, since it excludes innovative forms of cognitive design. Without an explicit, complete or coherent analysis of the Air Traffic Management task, the changes resulting from innovative designs cannot be predicted and therefore must be avoided. An alternative approach is needed, and one which offers the required analysis is cognitive engineering, as now discussed.

1.2. Cognitive engineering as formulating and solving cognitive design problems

The development of the ATM system can be seen as an exemplary form of cognitive design problem, one which subsumes a domain of cognitive work (the effective control of air traffic movements) and a worksystem comprising cognitive agents (the controllers) and their cognitive tools (e.g., flight strips). Moreover, it critically includes the effectiveness of that worksystem in performing its work – the actual quality of the air traffic management achieved and the cognitive costs to the worksystem.

Treating air traffic management as a cognitive design problem is consistent with the cognitive engineering approach to development. Cognitive engineering has been variously defined (Hollnagel and Woods, 1983; Norman, 1986; Rasmussen, 1986; Woods and Roth, 1988) as a discipline which can supersede the craft-like disciplines of Human Factors and Cognitive Ergonomics. A review of definitions can be found in Dowell and Long (1998). As a discipline, cognitive engineering can be distinguished most generally as the application of engineering knowledge of users, their work and their organisations to solving cognitive design problems. Its characteristic process is one of ‘formulate then solve’ problems of cognitive design, in contrast with ad hoc approaches to improving cognitive systems. Norman (1986) identifies approximation and the systematic trade-off between design decisions as basic features of this process. Ultimately, cognitive engineering seeks engineering principles which can prescribe solutions to cognitive design problems (Norman, 1986; Long and Dowell, 1989).

This paper presents the formulation of the cognitive design problem for a simulated ATM system. To formulate any cognitive design problem takes two starting points (Figure 1). First, there must be some “situation of concern” (Checkland, 1981), in which an instance or class of worksystem is identified as requiring change. In this paper, a simulated ATM system is taken as presenting such a situation of concern (Section 1.4). Second, there must be a conception of cognitive design problems. A conception provides the general concepts, and a language, with which to express particular design problems. Similarly, Checkland (1981) describes how an explicit system model supports the abstraction and expression of problem situations within the soft systems methodology. In this paper, a conception of cognitive design problems proposed by Dowell and Long (1998) supplies the framework for the problem formulation (see Figure 1). That conception is now summarised.

Figure 1. Formulation of a cognitive design problem. The problem is abstracted over a simulated ATM system which presents a situation of concern. The problem formulation instantiates a conception for cognitive engineering.

1.3. Conception of cognitive design problems

Cognitive design problems can be expressed in terms of a dualism of domain and worksystem, where the worksystem is designed to perform work in the domain to some desired level of performance (Dowell and Long, 1998). Domains might be generally conceived in terms of their goals, constraints and possibilities. Domains consist of objects identified by their attributes. Attributes emerge at different levels within a hierarchy of complexity within which they are related. Attributes have states (or values) and so exhibit an affordance for change. Desirable states of attributes we recognise as goals. Work occurs when the attribute states of objects are changed by the behaviours of a worksystem whose intention it is to achieve goals. However work does not always result in all goals being achieved all of the time, and the variances between goals and the actual outcomes of work are expressed by task quality.

The worksystem consists of the cognitive agents and their cognitive tools (technical sub-systems) which together perform work within the same domain. Being constituted within the worksystem, the cognitive agents and their tools are both characterised in terms of structures and behaviours. Structures provide the component capabilities for behaviour; most centrally, they can be distinguished as representations and processes. Behaviours are the actualisation of structures: they occur in the processing and transformation of representations, and in the expression of cognition in action. There are, therefore, physical and mental (or virtual) forms of both structures and behaviours. Hutchins (1994) notes that this distinction between structure and behaviour corresponds with a separation of task and algorithm (Marr, 1982); here, however, a task is treated as the conjunction of transformations in a domain and the intentional behaviours which produce them.
Work performed by the worksystem incurs resource costs. Structural costs are the costs of providing cognitive structures; behavioural costs are the costs of using those structures. Both structural and behavioural costs may be separately attributed to the agents of a worksystem. The performance of the worksystem is the relationship of the total costs to the worksystem of its behaviours and structures, and the task quality resulting from the decisions made. Critically then, the behaviours of the worksystem are distinguished from its performance (Rouse, 1980) and this distinction allows us to recognise an economics of performance. Within this economy, structural and behavioural costs may be traded-off both within and between the agents of the worksystem, and they may also be traded-off with task quality. Sperandio’s observations of the Orly controllers, discussed earlier, are an example of the trade-off of task quality for the controller’s behavioural costs.

It follows from this conception that the particular cognitive design problem of ATM should be formulated in terms of two models,

  • a model of the ATM domain, describing the air traffic processes being managed, and
  • a model of the ATM worksystem, describing the agents and technical sub-systems (tools) which perform that management.

These two models are indicated schematically in Figure 1, as the major components of the ATM problem formulation.

1.4. Simulated air traffic management task

The ATM cognitive design problem formulated here is of a simulated ATM system which presents a situation of concern: specifically, the unacceptable increases in workload, and the losses in traffic expedition, with increasing traffic volumes. The simulation reconstructs a form of the air traffic management task. This task is performed by trained subject ‘controllers’ who monitor the traffic situation and issue instructions to the simulated aircraft. The simulation is built on a computational traffic model and provides the common form of ATM control suite (Dowell, Salter and Zekrullahi, 1994). It provides a radar display of the current state of traffic on a sector consisting of the intersection of two en-route airways. It also provides commands via pull-down menus for requesting information from and instructing aircraft (i.e., for interrogating and modifying the traffic model). Last, the control suite includes an inclined rack of paper flight progress strips, arranged in columns by different beacons or reporting points. For each beacon an aircraft will pass on its route through the sector, a strip is provided in the appropriate rack column. The strips tell the controller which aircraft will be arriving when, where, and how (i.e., their height and speed), their route, and their desired cruising height and cruising speed.

Using the radar display and flight strips, the subject controller is able continuously to plan the flights of all aircraft and to execute the plan by making appropriate interventions (issuing speed and height instructions). The subject controller works in a ‘planning space’ in which, reproducing the real system, aircraft must be separated by a prescribed distance, yet should be given flight paths which minimise fuel use, flying time and number of manoeuvres, whilst also achieving the correct sector exit height (Hopkin, 1971). Fuel use characteristics built into the computational traffic model constrain the controller’s planning space with regard to expedition, since fuel economy improves with height and worsens with increasing speed. Because of this characteristic, controllers may not solve the planning problem satisfactorily simply by distributing all aircraft at different levels and speeds across the sector. Additional airspace rules (for example, legal height assignments) both constrain and structure the controller’s planning space. The controller works alone on the simulation, performing a simplified version of the tasks which would be performed by a team of at least two controllers in the real system; the paper flight strips include printed information which a chief controller would usually add whilst coordinating adjacent sectors.

Increasing volumes of air traffic within this system inevitably result in sacrifices in traffic expedition, if safety is to be maintained. Simply, the traffic management problem (akin to a “game of high speed, 3D chess”, Field, 1985) becomes excessively complex to solve. Workload increases disproportionately with additional traffic volumes. The simulated system therefore presents a realistic situation of concern over which a cognitive design problem can be formulated, as now described.

2. Model of the ATM domain

The model of the ATM domain is given in this section. Because of its application to the laboratory simulation, the model makes certain simplifications. For example, the simulation does not represent the wake turbulence of real aircraft, a factor which may significantly determine the closeness with which certain aircraft may follow others; accordingly, the framework makes no mention of wake turbulence. However, the aim here is to present a basic, but essentially correct characterisation of the domain represented by the simulation. Later refinement, by the inclusion of wake turbulence for instance, is assumed to be possible having established the basic characterisation.

2.1 Airspace objects, aircraft objects, and their dispositional attributes

An instance of an ATM domain arises in two classes of elemental objects: airspace objects, and aircraft objects, defined by their respective attributes. Aircraft objects are defined by their callsign attribute and their type attributes, for example, laden weight and climb rate. Airspace objects include sector objects, airway interval objects, flight level objects, and beacon objects. Each is defined by their respective attributes, for example, beacons by their location. Importantly, the attributes of aircraft and airspace objects have an invariant but necessary state with respect to the work of the controller: these kinds of attribute we might call ‘dispositional’ attributes.

2.2 Airtraffic events and their affordant attributes

Notions of traffic intuitively associate transportation objects with a space containing them. In the same way, an instance of an ATM domain defines a class of airtraffic events in the association of airspace objects with aircraft objects at particular instants. Airtraffic events are, in effect, a superset of objects, where each object exists for a defined time. They have attributes emerging in the association of aircraft objects with airspace objects; these minimally include the attributes of:

  • Position (given by airway interval object currently occupied)
  • Altitude (given by flight level (FL) object currently occupied)
  • Speed (given in knots, derived from rate of change in Position and Altitude)
  • Heading (given by next routed beacon object(s))
  • Time (standard clock time)

Unlike the dispositional attributes of airspace and aircraft objects, the PASHT attributes (Position, Altitude, Speed, Heading, Time) of airtraffic events have a variable state determined by the interventions of the controller; they might be said to be ‘affordant’ attributes.
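As an illustration of this distinction between dispositional and affordant attributes, the sketch below casts an aircraft object and an airtraffic event as simple data structures. It is a minimal sketch in Python; the class names, field names and units are assumptions introduced for illustration, not part of the simulation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Aircraft:
    """Aircraft object: dispositional attributes, invariant during the task."""
    callsign: str
    laden_weight_kg: float    # type attribute (assumed unit)
    climb_rate_fpm: float     # type attribute (assumed unit)

@dataclass
class AirtrafficEvent:
    """Airtraffic event: an aircraft associated with the airspace at an instant.

    The PASHT attributes below are 'affordant': their states vary with the
    controller's interventions."""
    aircraft: Aircraft
    position: str      # airway interval object currently occupied
    altitude: int      # flight level (FL) object currently occupied
    speed_kt: float    # knots, derived from rate of change of position/altitude
    heading: str       # next routed beacon object(s)
    time_s: float      # standard clock time, in seconds
```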

2.3 Airtraffic event vectors and their task attributes

Each attribute of an airtraffic event can possess any of a range of states; generally, each attribute affords transformation from one state to another. However there is an obvious temporal continuity in the ATM domain since time-ordered series of airtraffic events are associated with the same aircraft. Such a series we can describe as an ‘airtraffic event vector’. Whilst event vectors subsume the affordant attributes (the PASHT attributes) of individual airtraffic events, they also exhibit higher level attributes. The task of the controller arises in the transformation of these ‘task attributes’ of event vectors.

The two superordinate task attributes of event vectors are safety and expedition. Safety is expressed in terms of a ‘track separation’ and a vertical separation. Track separation is the horizontal separation of aircraft, whether in passing, crossing or closing traffic patterns, and is expressed in terms of flying time separation (e.g., 600 seconds). A minimum legal separation is defined as 300 seconds, and all separations less than this limit are judged unsafe. Aircraft on intersecting paths but separated by more than the legal minimum are judged to be less than safe, and the level of their safety is indexed by their flying time separation. Aircraft not on intersecting paths (and outside the legal separation) are judged to be safe. A legal minimum for vertical separation of one flight level (500m) is adopted: aircraft separated vertically by more than this minimum are judged to be safe, whilst a lesser separation is judged unsafe.
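These separation judgements can be summarised as a small decision rule, sketched below. The function names and boundary handling are assumptions; the legal minima (300 seconds of flying time, one flight level) are those quoted above.

```python
def track_safety(paths_intersect: bool, flying_time_sep_s: float,
                 legal_min_s: float = 300.0) -> str:
    """Judge horizontal (track) safety for a pair of aircraft."""
    if flying_time_sep_s < legal_min_s:
        return "unsafe"
    if paths_intersect:
        # separated by more than the legal minimum but on intersecting paths:
        # 'less than safe', with the level of safety indexed by the
        # flying-time separation itself
        return f"less than safe (index: {flying_time_sep_s:.0f} s)"
    return "safe"


def vertical_safety(separation_fl: float, legal_min_fl: float = 1.0) -> str:
    """Judge vertical safety; one flight level (500 m in the simulation)."""
    return "safe" if separation_fl > legal_min_fl else "unsafe"
```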

Expedition subsumes the task attributes of:

  • ‘flight progress’, that is, the duration of the flight (e.g., 600 seconds) from entry onto the sector to the present event;
  • ‘fuel use’, that is, the total of fuel used (e.g., 8000 gallons) from entry onto the sector;
  • ‘number of manoeuvres’, that is, the total number of instructions for changes in speed or navigation issued to the aircraft from entry onto the sector; and
  • ‘exit height variation’, that is, the variation (e.g., 1.5 FLs) between actual and desired height at exit from the sector.

Three different sorts of airtraffic event vector can be defined: actual, projected and goal. Each vector possesses the same classes of task attribute, but each arises from different air traffic events. Figure 2 schematises the three event vectors within an event vector matrix.

  • First, the actual event vector describes the time-ordered series of actual states of airtraffic events: in other words, how and where an aircraft was in a given period of its flight. Aircraft within the same traffic scenario can be described by separate, but concurrent actual event vectors. Figure 2 schematises an actual event vector (actual0 … actualn, actualend) related to the underlying sequence of air traffic events (PASHT values). For example, actual1 represents the actual task attribute values for a given aircraft at the first instruction issued by the controller to the airtraffic. It expresses the actual current safety of a particular aircraft, the current total of fuel used, the current total of time taken in the flight, and the current total of manoeuvres made. Exit height variation applies only to the final event (actualend) in the event vector, when the final exit height is determined.
  • Second, the goal event vector describes the time-ordered series of goal states of airtraffic events: in other words, how and where an aircraft should have been in a given period of its flight. Figure 2 schematises a goal event vector (goal0 … goaln, goalend) within the event vector matrix. For example, goal1 represents the goal values of the task attributes at the controller’s first intervention, in terms of the goal level of safety (i.e., the aircraft should be safe), and current goal levels of fuel used, time taken, and number of manoeuvres made. These values can be established by idealising the trajectory of a single flight made across the sector in the absence of any other aircraft, where the trajectory is optimised for fuel use and progress. The goal value for exit height variation applies only to the final event (goalend).
  • Third, the projected event vector describes the time-ordered series of projected future states of airtraffic events: in other words, how and where an aircraft would have been in a given period of its flight, given its current state and assuming no subsequent intervention by the controller (an analysis commonly provided by ‘fast-time’ traffic simulation studies). In practice, only the projected exit state, and projected separation conflicts at future intermediate events, are needed for the analysis, and only from the start of the given period and at each subsequent controller intervention. In this way, the potentially large number of projected states is limited. Figure 2 schematises a projected event vector (projct0(end) … projctn(end)) within the event vector matrix. For example, projct1(end) represents the projected end values of the task attributes following the controller’s first intervention. It describes the projected final safety state of the aircraft, total projected fuel use for its flight through the sector, its total projected flight time through the sector, the total number of interventions and the projected exit height variation.

Figure 2. The event vector matrix

An event vector matrix of this form was constructed for each of the controller subjects performing the simulated air traffic management task. It was constructed in a spreadsheet using a protocol of aircraft states and controller instructions collected by the computational traffic model.
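As an illustration of how such a matrix might be held outside a spreadsheet, the sketch below groups the three vectors for a single aircraft. The class and field names are hypothetical; the task attributes follow Section 2.3.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskAttributes:
    """Task-attribute values at one event in a vector."""
    safety_s: float                 # flying-time separation, seconds
    progress_s: float               # flight time since sector entry, seconds
    fuel_gal: float                 # fuel used since sector entry, gallons
    manoeuvres: int                 # instructions issued since sector entry
    exit_height_var_fl: Optional[float] = None   # final event only

@dataclass
class EventVectorMatrix:
    """Actual, goal and projected event vectors for a single aircraft.

    actual[k] and goal[k] hold values at the k-th controller intervention;
    projected[k] holds the projected end-of-flight values immediately after
    that intervention (projct_k(end))."""
    callsign: str
    actual: list = field(default_factory=list)      # list of TaskAttributes
    goal: list = field(default_factory=list)        # list of TaskAttributes
    projected: list = field(default_factory=list)   # list of TaskAttributes
```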

The differentiation of actual, goal and projected event vectors now enables expression of the quality of air traffic management by the controller.

2.4 Quality of air traffic management (ATMQ)

The final concept in this framework for describing the ATM domain is of task quality. Task quality describes the actual transformation of domain objects with respect to goals (Dowell and Long, 1998). In the same way, the Quality of Air Traffic Management (ATMQ) describes the success of the controller in managing the air traffic with regard to its goal states.
ATMQ subsumes the Quality of Safety Management (QSM) and Quality of Expedition Management (QEM). Although there are examples (Kanafani, 1986) of such variables being combined, here the separation of these two management qualities is maintained. Since expedition subsumes the attributes of fuel use, progress, exit conditions and manoeuvres, each of these task attributes also has a management quality. So, QEM comprises:

  • QFM: Quality of fuel use management
  • QPM: Quality of progress management
  • QXM: Quality of exit conditions management
  • QMM: Quality of manoeuvres management

These separate management qualities are combined within QEM by applying weightings according to their perceived relative salience (Keeney, 1993).

A way of assessing any of these traffic management qualities would be (following Debenard, Vanderhaegen and Millot, 1992) to compare the actual state of the traffic with the goal state. But such an assessment could not be a true reflection of the controller’s management of the traffic because air traffic processes are intrinsically goal-directed and partially self-determining. In other words, each aircraft can navigate its way through the airspace without the instructions of the controller, each seeking to optimise its own state; yet because each is blind to the state and intentions of other aircraft, the safety and expedition of the airtraffic will be poorly managed at best. ATMQ, then, must be a statement about the ‘added value’ of the controller’s contributions to the state of a process inherently moving away from or towards a desired state of safety and expedition. To capture this more complex view of management quality, ATMQ must relate the actual state of the traffic both to the state it would have had if no (further) controller interventions had been made (its projected state) and to its goal state. In this way, ATMQ can be a measure of gain attributable to the controller.

Indices for each of the management qualities included in ATMQ can be computed from the differences between the goal and actual event vectors. The form of the index is such that the quality of management is optimal when a zero value is returned, that is to say, when actual state and goal state are coincident. A negative value is returned when traffic management quality is less than desired (goal state). For QPM and QFM, a value greater than zero is possible when actual states are better than goal states, since it is possible for actual values of fuel consumed or flight time to be less than their goal values. Further, by relating the index to the difference between the goal and projected event vectors, the difference made by the ATM worksystem’s interventions over the scenario is given. In this way, the ‘added value’ of the worksystem’s interventions is indicated.
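The functions actually used are given in Appendix 1 and are not reproduced here. The sketch below therefore illustrates only the general form of such an index: zero when actual and goal states coincide, negative when the outcome is worse than the goal, positive (for attributes such as fuel use and flight time) when the outcome betters the goal, and an ‘added value’ figure obtained by comparison against the projected, no-intervention outcome. The function names and the simple linear form are assumptions.

```python
def quality_index(goal: float, actual: float, lower_is_better: bool = True) -> float:
    """Management-quality index for one task attribute.

    Zero when the actual state coincides with the goal state; negative when
    the actual outcome is worse than the goal; for attributes where lower
    values are better (fuel used, flight time), positive when the actual
    outcome betters the goal."""
    return (goal - actual) if lower_is_better else (actual - goal)


def added_value(goal: float, projected: float, actual: float,
                lower_is_better: bool = True) -> float:
    """Gain attributable to the worksystem's interventions: the actual
    outcome and the projected (no further intervention) outcome are each
    indexed against the goal, and the difference is returned."""
    return (quality_index(goal, actual, lower_is_better)
            - quality_index(goal, projected, lower_is_better))
```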

Two forms of ATMQ are possible by applying the indices to the event vector matrix (Figure 2). Both forms will be illustrated here with the data obtained from the controller subjects performing the simulated ATM task. The analysis of ATMQ is output from the individual event vector matrices constructed in spreadsheets, as earlier explained.

The first form of ATMQ describes the task quality of traffic management over a complete period. It describes the sum of management qualities for all aircraft over their flight through the sector and so can be more accurately designated ATMQ(fl) to identify it as referring to completed flights. It is computed by using the initially projected, goal and actual final attribute values (projct0(end), goalend, actualend) for each event vector (i.e., the ‘beginning and end points’). The functions by which these ATMQ(fl) management qualities are calculated are given in Appendix 1.

Figure 3 illustrates the assessment of ATMQ(fl) – in other words the assessment of management qualities over completed flights for the controllers separately managing the same traffic scenario. The scenario consisted of six aircraft entering the sector over a period of 45 minutes. ATMQ(fl) was first computed for each form of management quality, for each aircraft under the control of each controller. Figure 3 presents this assessment for each of the controllers, for each of the five management qualities, summed over all six individual flights. For example, we are able to see the quality with which Controller 1 managed the safety (i.e., QSM) and fuel use (i.e., QFM) of all six aircraft under her control over the entire period of the task.

Figure 3. Assessment of Air Traffic Management Quality for all completed flights of each controller.

It is important to note that ATMQ(fl) is achronological, in so much as it describes the quality of management of each flight after its completion: hence, it would return the same value whether all aircraft had been on the sector at the same time during the scenario, or whether only one flight had been on the sector at any one time. Whilst this kind of assessment provides an essential view of the acquittal of management work from the point of view of each aircraft, it provides a less satisfactory view of the acquittal of management work from the point of view of the worksystem.

The second form of ATMQ describes the task quality of traffic management for each intervention made by an individual controller. This second kind of task quality is designated as ATMQ(int), to identify it as referring to interventions, and is computed from the currently projected end state, previously projected end state, and new goal end state (for example, projct1(end), projct2(end), goalend for the second intervention). The functions by which these ATMQ(int) management qualities are calculated are given in Appendix 2.
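Appendix 2 is likewise not reproduced here. The sketch below gives one plausible form of a per-intervention index for a single task attribute: the end state projected after the intervention and the end state projected before it are each measured against the goal end state, so the value reflects what the intervention itself contributed. The absolute-distance form and the function name are assumptions.

```python
def atmq_int(goal_end: float, prev_projected_end: float,
             new_projected_end: float) -> float:
    """Per-intervention management quality for one task attribute.

    Positive when the intervention brings the projected end state closer to
    the goal end state than the previous projection; negative when it moves
    the projection further away; zero when it makes no difference."""
    before = abs(goal_end - prev_projected_end)
    after = abs(goal_end - new_projected_end)
    return before - after
```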

Figure 4 illustrates this second principal form of ATMQ – the assessment of ATMQ(int) for all aircraft with each intervention of an individual controller. For the sake of clarity, only the qualities of safety (QSM), fuel use (QFM) and progress (QPM) are shown. For each management intervention made by the controller during the period of the task, these three management qualities are described, each triad of data points relating to an instruction issued by the controller to one of the six aircraft.

Figure 4. Qualities of: progress management (QPM); fuel use management (QFM); and safety management (QSM) achieved by Controller3 during the task.

Finally, although the analysis of ATMQ requires the worksystem’s interventions to be explicit, it does not require that there actually be any interventions. After all, when no problems are present in a process, good management is that which monitors but makes no intervention. Similarly, if the projected states of airtraffic events are the same as the goal states, then good management is that which makes no interventions, and in this event, ATMQ would return a value of zero.

To summarise, the ATM domain model describes the work performed in the Air Traffic Management task. It describes the objects, attributes, relations and states in this class of domain, as related to goals and the achievement of those goals. The model applies the generic concepts of domains given by the cognitive engineering conception presented earlier. The model describes the particular domain of the simulated ATM task from which derives the example assessments of traffic management quality given here. Corresponding with the domain model, the worksystem model presented in the next section describes the system of agents that perform the Air Traffic Management task.

3. Model of the ATM worksystem

A model of the worksystem which performs the Air Traffic Management task can be generated directly from the domain model. The representations and processes minimally required by the worksystem can be derived from the constructs which make up the domain model. In this way, ecological relations (Vera and Simon, 1993) bind the worksystem model to the domain. Woods and Roth (1988) identify the ecological modelling of systems as a central feature of cognitive engineering, given the concern for designing systems in which the cognitive resources and capabilities of users are matched to the demands of tasks.

The ecological approach to modelling worksystems has been contrasted (Payne, 1991) both with the architecture-driven and the phenomenon-driven approaches: that is to say, it can be contrasted both with the deductive application of general architectures to models of specific behaviours (Howes and Young, 1997), and with attempts to generate ‘local’ models from empirical observations of specific performance issues. However this distinction is too sharply drawn and needs to be further qualified, since the organisation of a worksystem model (as opposed to the content), is not determined by the domain model. First, the ATM worksystem model instantiates the conception of cognitive design problems. Hence the concepts of structure, behaviour and costs, are used as a primary partitioning of the ATM worksystem model. Second, the ATM worksystem model adopts specific constructs from the blackboard architecture (Hayes-Roth and Hayes-Roth, 1979) to organise the particular relations between the representations and processes deriving from the ATM domain model. Hence a general cognitive architecture is employed selectively in the ATM worksystem model.

3.1 Structures of the ATM worksystem

The structures of the ATM worksystem consist, at base, of representations and processes. The representations constructed and maintained by the ATM worksystem are shown schematically in Figure 5, contained within a blackboard of airtraffic events, a blackboard of event vectors, and a schedule of planned instructions.

The blackboard of airtraffic events contains a representation of the current airtraffic event (e1) constructed from sensed traffic data. The blackboard has two dimensions, a real time dimension and a dimension of hypotheses about the PASHT attribute states of individual aircraft. Knowledge sources associated with this blackboard support the construction of hypotheses about the attributes of airtraffic events. For example, knowledge sources concerning the topology of the sector airways support the construction of hypotheses about heading attributes. As the ATM worksystem monitors flights through the sector, it maintains a representation of a succession of discrete airtraffic events.

A blackboard of event vectors contains separate representations of a current event vector, a goal event vector, and a planned vector. The current event vector expresses the actual values of task attributes deriving from the current airtraffic event, and the projected values of those task attributes at future events. A representation of the goal event vector expresses the goal values of task attributes for the current and projected airtraffic events. A representation of a planned event vector expresses planned values of task attributes for the current and projected airtraffic events. Critically, this vector is distinct from the goal event vector, allowing that the planned state of the traffic will not necessarily coincide with the idealised goal state.

The blackboard of event vectors has two dimensions, a real time dimension and a dimension of hypotheses about the task attributes of event vectors. The hypotheses then concern the attributes of safety and expedition of each vector, where the attribute of expedition subsumes the individual attributes of progress, fuel use, number of manoeuvres and exit height variation. Knowledge sources separately associated with this blackboard support the construction of hypotheses about the attributes of event vectors. For example, knowledge sources about the minimum legal separations of traffic, and about aircraft fuel consumption characteristics, support the construction of hypotheses about safety and fuel use, respectively. Other knowledge sources support the ATM worksystem in reasoning about differences between the current vector and goal vector, and in constructing the planned vector.

Apparent within the blackboard of event vectors are a distinct monitoring horizon and planning horizon. The current event vector extends variably into future events. The temporal limits of the current vector constitute a ‘monitoring horizon’ of the ATM worksystem: it is the extent to which the worksystem is ‘looking ahead’ for traffic management problems. Similarly the planned event vector extends variably into the future events. Its temporal limits constitute a ‘planning horizon’: it is the extent to which the ATM worksystem is ‘planning ahead’ to solve future traffic management problems. Both monitoring horizon and planning horizon can be expected to be reduced with increasing traffic volumes and complexities.

The planned vector is executed by a set of planned instructions. Planned instructions are generated by reasoning about the set of planned vectors for individual aircraft and the options for possible instructed changes in speed, heading or altitude. This reasoning is again supported by specialised knowledge sources. The worksystem maintains a schedule of planned instructions, shown in Figure 5 as a separate representation: instruction i1 is shown executed at time t1.

The complexity of the representations of the ATM worksystem is complemented by the simplicity of its processes. Two kinds of abstract process are specified, generation processes and evaluation processes, which can address both the event-level and the vector-level representations. Two kinds of physical process are specified, addressed to the event-level representations: monitoring processes and executing processes.

Figure 5. Schematic view of representations maintained by the ATM worksystem
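A minimal sketch of these structures is given below: the two blackboards, the schedule of planned instructions, and the two abstract and two physical kinds of process. The Python class and field names are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Process(Enum):
    """Processes of the ATM worksystem."""
    GENERATION = auto()   # abstract: addresses event- and vector-level representations
    EVALUATION = auto()   # abstract: addresses event- and vector-level representations
    MONITORING = auto()   # physical: addresses event-level representations
    EXECUTING = auto()    # physical: addresses event-level representations

@dataclass
class EventBlackboard:
    """Blackboard of airtraffic events: a real-time dimension against
    hypotheses about the PASHT attribute states of individual aircraft."""
    hypotheses: dict = field(default_factory=dict)   # {time_s: {callsign: {attribute: state}}}

@dataclass
class VectorBlackboard:
    """Blackboard of event vectors: current, goal and planned vectors, with
    hypotheses about their task attributes (safety and expedition)."""
    current: dict = field(default_factory=dict)
    goal: dict = field(default_factory=dict)
    planned: dict = field(default_factory=dict)

@dataclass
class PlannedInstruction:
    """An entry in the schedule of planned instructions."""
    callsign: str
    kind: str             # 'height' or 'speed'
    value: float
    planned_time_s: float

@dataclass
class WorksystemStructures:
    """Representations maintained by the ATM worksystem (cf. Figure 5)."""
    events: EventBlackboard = field(default_factory=EventBlackboard)
    vectors: VectorBlackboard = field(default_factory=VectorBlackboard)
    schedule: list = field(default_factory=list)     # list of PlannedInstruction
```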

3.2 Behaviours of the ATM worksystem

The behaviours of the ATM worksystem are the activation of its structures, both physical and abstract, which occurs when the worksystem is situated in an instance of an ATM domain. Behaviours, whether physical or abstract, are understood as the processing of representations, and so can be defined in the association of processes with representations. Eight kinds of ATM worksystem behaviour can be defined, grouped in three super-ordinate classes of monitoring, planning and controlling (i.e., executing) behaviours:

Monitoring behaviours

  • Generating a current airtraffic event. The ATM worksystem generates a representation of the current airtraffic event. This behaviour is a conjunction of both monitoring and generating processes addressing the monitoring space. The representation which is generated expresses values of the PASHT attributes of the current airtraffic event.
  • Generating a current event vector. The ATM worksystem generates a representation of the current vector by abstraction from the representation of the current airtraffic event. The representation expresses current actual values, and currently projected values, of the task attributes of the event profile. In other words, it expresses the actual and projected safety and expedition of the traffic.
  • Generating a goal event vector. The representation of the goal vector is generated directly by a conjunction of monitors and generators. The representation expresses goal values of the task attributes of the event profile.
  • Evaluating a current event vector. The ATM worksystem evaluates the current vector by identifying its variance with the goal vector. This behaviour attaches ‘problem flags’ to the representation of the current vector.

Planning behaviours

  • Generating a planned event vector. If the evaluation of the current vector with the goal vector reports an acceptable conformance of the former, then the current vector is adopted as the planned vector. Otherwise, a planned vector is generated to improve that conformance.
  • Evaluating a planned event vector. With the succession of current vector representations, and their evaluation, the ATM worksystem evaluates the planned vector and a new planned vector is generated.
  • Generating a planned instruction. Given the planned vector, the instructions needed to realise the plan will be generated by the ATM worksystem, and perhaps too, the actions needed to execute those interventions.

Controlling behaviour

  • Executing a planned intervention. The ATM worksystem generates the execution of planned interventions, in other words, it decides to act to issue an instruction to the aircraft.

These eight worksystem behaviours can be expressed continuously and concurrently. With the changing state of the domain, not least as a consequence of the worksystem’s interventions, each representation will be revised.

3.3 Cognitive costs

Cognitive costs can be attributed to the behaviours of the ATM worksystem and denote the cost of performing the air traffic management task. These cognitive costs are a critical component of the performance of the ATM worksystem, and so too of this formulation of the ATM cognitive design problem. Cognitive costs are derived from a model of the eight classes of worksystem behaviour as they are expressed over the period of the air traffic management. The model of worksystem behaviours is established using a post-task elicitation method, as now described.

Following completion of the simulated traffic management task, the controller subject was required to re-construct their behaviour in the task by observing a video recording of traffic movements on the sector during the task. The recording also showed all requests the controller had made to aircraft for height and speed information, and it showed the instructions that were issued to each aircraft. A set of unmarked flight strips for the traffic scenario was provided. As the video record of the task was replayed, the controller was required to manipulate the flight strips in the way they would have done during the task. For example, as each aircraft entered the sector they were required to move the appropriate strip to the live position. As the aircraft progressed through the sector, its sequence of strips would be ‘made live’ and then discarded. The controller annotated the flight strips with information obtained from each aircraft request made during the task, and with each instruction issued. The controller was required to view the videotape as a sequence of five minute periods. They were able to halt the tape at any point, for example, in order to update the flight strips. However, no part of the videotape could be replayed.

At the end of each five minute period, the controller was required to complete a ‘plan elicitation’ sheet. The plan elicitation sheet required the controller to state for each aircraft, the interventions they were planning to make. The specific planned instruction was to be stated (height or speed change) as well as the location of the aircraft when the instruction would be issued. The controller was asked to identify aircraft for which, at that time, no interventions were planned, whether because consideration had not then been given to that aircraft, or a decision had been made that no further instructions would be needed. When the sheet was completed it was set to one side and the controller then viewed the next five minute period of the videotape, after which they completed a new plan elicitation sheet. In this way, for each aircraft at the end of each five minute interval, all planned interventions were described.

This elicited protocol of sampled planned interventions was then compared with the instructions originally issued, as recorded by the traffic model. The comparison indicated a number of issued instructions whose plan had not been reported in the corresponding previous sampling interval of the post-task elicitation. These additional instructions were taken to indicate planning behaviours wherein a planned intervention had been generated and executed between elicitation points. Hence, the record of executed interventions was used to augment and further complete the record of planned interventions obtained from the post-task elicitation. The result of this analysis was a data set describing the sequence of planned interventions for each aircraft over the period of the traffic management task.
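This comparison amounts to a simple matching procedure, sketched below. The data layout (reported plans keyed by sampling interval and callsign, executed instructions as timestamped tuples) is an assumption made for illustration.

```python
def infer_unreported_plans(reported_plans: dict, executed: list,
                           interval_s: float = 300.0) -> list:
    """Identify executed instructions whose plan was not reported at the
    preceding elicitation point.

    reported_plans: {(interval_index, callsign): set of instruction kinds
                     reported as planned at the start of that interval}
    executed:       [(time_s, callsign, kind), ...] as recorded by the
                     computational traffic model

    Returns the inferred plan-and-execute instances used to augment the
    record of planned interventions."""
    inferred = []
    for time_s, callsign, kind in executed:
        interval = int(time_s // interval_s)
        if kind not in reported_plans.get((interval, callsign), set()):
            inferred.append((interval, callsign, kind))
    return inferred
```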
The analysis was continued by abstracting the classes of planned interventions for each aircraft over the scenario, divided again into a succession of five minute intervals. Four different kinds of planned intervention were identified:

  • (i) interventions planned at the beginning of an interval and not executed within the interval;
  • (ii) planned interventions which were a revision of earlier plans, but which also were not executed within the five minute interval;
  • (iii) planned interventions which were also executed within the same five minute interval, including both plans executed exactly and plans revised when executed; and
  • (iv) plans for interventions made during the five minute interval, but where those plans were not described at all at the beginning of the interval.

Each of these intervention plans was identified by its instruction type, that is, whether it was a planned instruction for a height or speed change.
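The four-way classification reduces to a small decision rule, sketched below under the assumption that each planned intervention can be characterised by whether it was reported at the start of the interval, whether it revised an earlier plan, and whether it was executed within the interval.

```python
def classify_planned_intervention(reported_at_start: bool,
                                  revises_earlier_plan: bool,
                                  executed_in_interval: bool) -> str:
    """Assign one of the four kinds of planned intervention (i)-(iv)."""
    if executed_in_interval:
        # (iii) planned and executed within the same interval (exactly or
        # revised when executed); (iv) executed without having been described
        # at the beginning of the interval
        return "(iii)" if reported_at_start else "(iv)"
    # not executed within the interval
    return "(ii)" if revises_earlier_plan else "(i)"
```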

Representations of airtraffic events, planned event vectors, current vectors and goal vectors are implicit in the analysis of planned interventions. These representations were inferred from the analysis of planned interventions by applying a set of eight rules deriving from the ATM worksystem model, as given in Table 1.

  1. the behaviour of generating a representation of the current airtraffic event was associated with any planned intervention for a given aircraft within a given interval, whether reported or inferred, except where those planned interventions were (a) reported rather than inferred, and (b) a reiteration of a previous reporting of a planned intervention, and (c) not executed within the interval.
  2. the behaviour of generating a representation of the current vector was only associated with those planned interventions already associated with the behaviour of generating an event representation, except where (a) the planned intervention is a revision of an earlier planned intervention (b) and the planned intervention is not executed within the same interval.
  3. the behaviour of generating a goal event vector was only associated with the first planned intervention for each aircraft.
  4. the behaviour of evaluating the current vector was associated with all planned interventions already associated with a behaviour of generating a current vector.
  5. the behaviour of generating a planned vector was associated with all planned interventions already associated with a behaviour of evaluating a current vector.
  6. the behaviour of evaluating a planned vector was associated only with planned interventions which were revisions of earlier planned interventions, regardless of whether they were reported or inferred.
  7. the behaviour of generating a planned intervention was associated with all planned interventions already associated with a behaviour of generating a planned vector, or where the planned intervention was a revision of an earlier reported planned intervention.
  8. the behaviour of generating the execution of a planned intervention was identified directly from the model of planned interventions.

Table 1. Rules applied to constructing the worksystem behaviour model from the analysis of planned interventions.

The result of this analysis is a model of the eight cognitive behaviours of the ATM worksystem expressed over the period of the task. Cognitive costs can be derived from this model by applying the following simplifying assumptions. First, costs are atomised, wherein a cost is separately associated with each instance of expressed behaviour. Second, a common cost ‘unit’ is attributed to each such instance. Two different but complementary kinds of assessment of behavioural cognitive costs are possible. A cumulative assessment describes the cognitive costs associated with each class of behaviour over the complete task, based on the total number of expressed instances of this class of behaviour. A continuous assessment describes the cognitive costs associated with each class of behaviour over each interval. The metric used in both forms of assessment is simply the number of instances of expressed behaviour in a specific class.
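Both assessments reduce to counting instances of expressed behaviour, one cost unit per instance, as in the sketch below. The behaviour labels and their grouping into superordinate classes follow Section 3.2; the data layout is an assumption.

```python
from collections import Counter, defaultdict

MONITORING = {"generate current event", "generate current vector",
              "generate goal vector", "evaluate current vector"}
PLANNING = {"generate planned vector", "evaluate planned vector",
            "generate planned instruction"}
CONTROLLING = {"execute planned intervention"}

def cumulative_costs(behaviours: list) -> Counter:
    """Cumulative assessment: total cost units per class of behaviour over
    the whole task. behaviours is a list of (time_s, behaviour_class)."""
    return Counter(behaviour for _, behaviour in behaviours)

def continuous_costs(behaviours: list, interval_s: float = 300.0) -> dict:
    """Continuous assessment: cost units per superordinate class
    (monitoring, planning, controlling) in each sampling interval."""
    costs = defaultdict(Counter)
    for time_s, behaviour in behaviours:
        interval = int(time_s // interval_s)
        if behaviour in MONITORING:
            costs[interval]["monitoring"] += 1
        elif behaviour in PLANNING:
            costs[interval]["planning"] += 1
        elif behaviour in CONTROLLING:
            costs[interval]["controlling"] += 1
    return dict(costs)
```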

An example of the cumulative assessment of the controller’s behavioural costs is given in Figure 6. The figure presents the behavioural costs of each class of controller behaviour exhibited during the traffic management task.

Figure 6. Cumulative assessment of cognitive costs for each class of ATM worksystem behaviour

Figure 7. Continuous assessment of cognitive behavioural costs.

Examining the variation across categories, the costs of generating goal vectors were less than any other category. The costs of generating a representation of the current event, and the costs of generating planned interventions, were greater than any other category. Other categories of behaviour incurred seemingly equivalent levels of cost. In terms of the superordinate categories of behaviour, the cognitive costs of planning appear equivalent to those of monitoring and controlling.

An example of the continuous assessment of the same controller’s behavioural costs is given in Figure 7. It is an assessment of all classes of cost over each sampling interval (300 seconds) of the task. For simplicity, this assessment is presented as the costs of the superordinate classes of behaviour of monitoring, planning and controlling over each interval. Again, the assessment is produced directly from the number of expressed instances of each class of worksystem behaviour. The continuous assessment includes the average across all costs over each interval.

The continuous assessment suggests that costs rose from the first five minute interval of the task to reach a maximum in the third interval. Because all the aircraft had arrived on the sector by the third interval in the task, the increase in cognitive behavioural costs might be interpreted as the effect of traffic density increases. However, since costs then fall to a minimum in the fifth interval, this interpretation is implausible. Rather, the effect is due to an increase then decrease in monitoring and planning costs as the controller monitored the entry of each aircraft and generated a plan. Although the plan might later be modified, planning behaviours would predominate in the first part of the task. The plan would later be executed by the worksystem’s controlling behaviours, and indeed, Figure 7 indicates that the cognitive costs of controlling behaviours predominated over both monitoring and planning costs in the final interval of the task.

The simplifying assumptions adopted in this analysis of cognitive costs need to be independently validated before the technique could be exploited more generally. They can be seen as an example of the approximation which Norman associates with Cognitive Engineering, and which allows tractable formulations of complex problems. As an assessment of cognitive costs based on a model of cognitive behaviour, the analysis contrasts with current methods for assessment of mental workload applied to the ATM task, methods which include concurrent self-assessment by controllers on a four point scale, and other assessments based on observations of the number and state of flight strips in use on the sector suite. Within the primary aim of this paper, the analysis exemplifies the incorporation of cognitive costs within the formulation of the cognitive design problem of ATM.

4. Using the problem formulation in cognitive design

Taken together, the models of the ATM domain and ATM worksystem provide a formulation of the cognitive design problem of Air Traffic Management. The domain model describes the work of air traffic management in terms of objects and relations, attributes and states, goals and task quality (goal achievement). The worksystem model describes the system that performs the work of air traffic management, in terms of structures, processes and the costs of work. The models have been illustrated with data captured from a simulated ATM system, wherein controller subjects performed the simulated management task with a computational traffic model.

In the case of the simulated system, the data indicate a worksystem which achieves an insufficient level of traffic management quality and incurs an undesirable level of cognitive cost. The assessment of ATMQ(fl) for all controllers indicated, for example, an inconsistent management of traffic safety (Figure 3). The assessment of ATMQ(int) for Controller 3 indicated, for example, a declining management of progress over the period of the task, and a sub-optimal trade-off between management of progress and of fuel use (Figure 4). Cognitive costs associated with specific categories of behaviour having a level significantly higher than average might also be considered undesirable, such as the category of generating a planned intervention (Figure 6). These data express the requirement for a revised worksystem able to achieve an acceptable trade-off (Norman, 1986) between task quality and cognitive costs.

Because it expresses this cognitive requirement, the problem formulation has the potential to contribute to the specification of requirements for worksystems. Cognitive requirements should be seen as separate from, and complementary to, software systems requirements. Both kinds of requirement must be met in the design of software-intensive worksystems. Such an approach would mark a shift from standard treatments of software systems development (Sommerville, 1996) wherein users’ tasks and capabilities are interpreted and re-expressed as ‘non-functional’ requirements of the user interface of the software system.

As well as supporting the specification of requirements, the formulation of the ATM cognitive design problem may also be expected to support the design of worksystems. We might, for example, consider how the problem formulation can contribute to the design of an electronic flight progress strip, earlier described as a focal issue in the development of a more effective ATM system. The problem formulation provides a network within which the flight progress strip can be understood in terms of what it is, and how it is used. First, the domain model allows analysis of the flight strip as a representation. For example, each paper flight strip represents a specific airtraffic event of an aircraft passing a particular beacon. It also represents for reference purposes the preceding and the following such events. The printed information on the strip describes the goal attributes of this airtraffic event in terms of desired height, speed and heading. The controller’s annotations of the strip describe both instructions issued and planned instructions. Hence the strip provides a representation of PASHT attributes of the given event. The strip does not represent event vectors, or their task attributes. The worksystem model tells us that the controller must construct the current, goal and planned event vectors, and their attributes, from the PASHT level representation on the strip. These examples indicate how the problem formulation can begin to be used to describe the flight strip and to reason about how the strip is used.

The problem formulation supports the process of evaluation, including the formative evaluation of specific design defects. For example, Controller3 achieved a poor management of safety (QSM) over the period of the task (see Figure 4) due to three interventions made some 1250 seconds into the task. The domain model indicated that the first of the three instructions was for one aircraft to climb above and behind another aircraft, leading to a separation infringement. The worksystem model, constructed from the post-task protocol analysis, described the plans that led to this misjudgement.

To conclude at a discipline level, the problem formulation presented in this paper can be viewed more generally in terms of the claimed emergence of cognitive engineering. Dowell and Long (1998) have identified design exemplars as a critical entity in the discipline matrix of cognitive engineering. An exemplar is a problem formulation and its solution. Exemplars exemplify the use of cognitive engineering knowledge in solving problems of cognitive design; and they serve as cases for reasoning about new problems. Craft practices of cognitive design, by contrast, use demonstrators and ‘design classics’ as their exemplars, a role occupied, for example, by the Macintosh graphical user interface. The exemplars of cognitive engineering must instead be abstractions: they must be formulations of design problems and solutions. The formulation in this paper of the ATM cognitive design problem is an attempt to better understand and advance the construction of exemplars for cognitive engineering.

Acknowledgement

This work was conducted at the Ergonomics and HCI Unit, University College London. I am indebted to Professor John Long for his critical contributions.

References

Checkland P., 1981. Systems thinking, systems practice. John Wiley and Sons: Chichester.

Debenard S., Vanderhaegen F. and Millot P., 1992. An experimental investigation of dynamic allocation of tasks between air traffic controller and AI system. In Proc. of the 5th symposium ‘Analysis, Design and Evaluation of Man-Machine Systems’, The Hague, The Netherlands, June 9-11.

Dowell J. and Long J.B., 1998. Conception of the cognitive engineering design problem. Ergonomics, 41, 2, pp 126-139.

Dowell J., Salter I. and Zekrullahi S., 1994. A domain analysis of air traffic management work can be used to rationalise interface design issues. In Cockton G., Draper S. and Weir G. (ed.s), People and Computers IX, CUP.

Field A., 1985. International Air Traffic Control. Pergamon: Oxford.

Harper R.R., Hughes J.A. and Shapiro D.Z., 1991. Harmonious working and CSCW: computer technology and air traffic control. In Bowers J.M. and Benford S.D. (ed.s), Studies in Computer Supported Cooperative Work: Theory, Practice and Design. North Holland: Amsterdam.

Hayes-Roth B. and Hayes-Roth F., (1979). A cognitive model of planning. Cognitive Science, 3, pp 275-310.

Hollnagel E. and Woods D.D., (1983). Cognitive systems engineering: new wine in new bottles. International Journal of Man-Machine Studies, 18, pp 583-600.

Hopkin V. D., 1971. Conflicting criteria in evaluating air traffic control systems. Ergonomics , 14, 5, pp 557-564.

Howes A. & Young R.M. 1997, The role of cognitive architecture in modeling the user: Soar’s learning mechanism, Human Computer Interaction, Vol. 12 No. 4, pp 311- 343

Hutchins E., (1994). Cognition in the wild. MIT Press: Cambridge, Mass.

Kanafani A., 1986. The analysis of hazards and the hazards of analysis: reflections on air traffic safety management. Accident Analysis and Prevention, 18, 5, pp 403-416.

Keeney R.L., 1993. Value focussed thinking: a path to creative decision making. Harvard University Press: Cambridge, MA.

Long J.B. and Dowell J., 1989. Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering . In Sutcliffe A. and Macaulay L. (ed.s). People and Computers V. Cambridge University Press, Cambridge.

Marr D., (1982). Vision. W.H. Freeman and Co: New York.

Norman D.A., (1986). Cognitive engineering. In Norman D.A. and Draper S.W., (ed.s)User Centred System Design. Erlbaum: Hillsdale, NJ. pp 31 – 61.

Payne S.J., 1991. Interface problems and interface resources. In Carroll J.M. (ed.), Designing Interaction. Cambridge University Press: Cambridge.

Rasmussen J., (1986). Information processing and human-machine interaction: an approach to cognitive engineering. North Holland: New York.

Ratcliffe S., 1985. Automation in air traffic management. Journal of Navigation, 38, 3, pp 405-412.

Rouse W. B., (1980). Systems engineering models of human machine interaction. Elsevier: North Holland.

Shepard T., Dean T., Powley W. and Akl Y., (1991). A conflict prediction algorithm using intent information. In Proceedings of the Air Traffic Control Association Annual Conference, 1991.

Sommerville, I., 1996, Software Engineering. Addison Wesley: New York.

Sperandio J. C., 1978. The regulation of working methods as a function of workload
among air traffic controllers. Ergonomics, 21, 3, pp 195-202.

Vera A.H. and Simon H.A., 1993, Situated action: a symbolic interpretation.
Cognitive Science, 17, pp 7-48.

Whitfield D. and Jackson A., (1982). The air traffic controller’s picture as an example of a mental model. In Proceedings of the IFAC conference on analysis, design and evaluation of man-machine systems. Baden-Baden, Germany, 1982. HMSO: London.

Woods D.D. and Roth E.M., (1988). Cognitive systems engineering. In Helander M. (ed.) Handbook of Human Computer Interaction. Elsevier: North-Holland.

Appendix 1. Functions for computing ATMQ(fl): the air traffic management qualities for completed flights.

This rule means that if, at a given air traffic event, two aircraft are on a collision course and are less than a safe separation apart (300 seconds), then a penalty is immediately given, commensurate with a ‘near miss’ condition. When aircraft are on a collision course but a long way apart, safety is assessed as a function of closing time and projected time of complete flight. The form of function which this rule supplies is such that QSM is optimal when a value of zero is returned, meaning that at no time was the aircraft in separation conflict or on a course leading to a conflict, no matter how far apart. The value becomes increasingly negative when conflict courses are instructed, and sharply so (as given by constant C) when those courses occur with less than a specified track and vertical separation.
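
Since the original function is not reproduced in this text, the following Python sketch is only an assumed illustration of the rule described above: zero when no conflict course exists, a sharp ‘near miss’ penalty (scaled by the constant C) when the closing time falls below the 300-second separation, and an increasingly negative value otherwise as a function of closing time and projected flight time. The constant values and the exact functional form are assumptions.

```python
SAFE_SEPARATION_S = 300.0   # safe separation expressed as closing time, per the rule above
C = 10.0                    # 'near miss' penalty constant (illustrative value only)

def qsm_for_event(on_collision_course: bool,
                  closing_time_s: float,
                  projected_flight_time_s: float) -> float:
    """Non-positive safety quality for one air-traffic event; zero is optimal."""
    if not on_collision_course:
        return 0.0                          # no conflict course: optimal safety
    if closing_time_s < SAFE_SEPARATION_S:
        return -C                           # 'near miss': immediate, sharp penalty
    # Conflict course but far apart: penalty grows as the conflict lies closer
    # within the flight's projected remaining time (an assumed functional form).
    projected = max(projected_flight_time_s, closing_time_s, 1.0)
    return -(1.0 - closing_time_s / projected)
```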


The forms of function of the unit-less indices provided by these ratios are such that, in each case, quality of management is optimal when a zero value is returned, that is to say, when actual state and goal state are coincident. QPM and QFM are greater than zero when the respective actual states are better than the goal states, and less than zero when they are worse (it is possible for actual values of fuel consumed or flight time to be less than their goal values). The difference is expressed as a proportion of the difference that would have arisen had there been no interventions by the ATM worksystem over the scenario. In this way, the added value of the worksystem’s interventions is indicated.

The values of QXM become increasingly negative from zero with the difference between the actual exit height and the goal exit height. The difference is again expressed as a proportion of the difference that would have arisen had no ATM worksystem interventions been made: in that case the aircraft would have left the sector at its entry height.

The values of QMM range from +0.3, when the actual number of manoeuvres is less than the goal number, to zero when actual and goal are equal, and become slowly more negative as the number of manoeuvres rises above the goal number.

The constants in the formulae for QPM, QXM and QFM are included to reduce the ‘order effect’ distortions that arise when small differences occur in the numerator or denominator. These constants are determined by numerical iteration so as to produce a negligible change in the general shape of the functions.
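
The following Python sketch illustrates the general form of these indices as described above; the paper’s actual formulae and constants are not reproduced here, so the expressions and values below are assumptions that only mirror the stated qualitative behaviour.

```python
# Assumed illustration of the ratio indices described in this appendix.
EPS = 1.0  # stand-in for the constants that damp distortions from small differences

def quality_index(goal: float, actual: float, no_intervention: float) -> float:
    """QPM/QFM-style index: zero when actual equals goal, positive when actual is
    better than goal, scaled by the difference that would have arisen had the
    worksystem made no interventions."""
    return (goal - actual) / (abs(goal - no_intervention) + EPS)

def qmm(goal_manoeuvres: int, actual_manoeuvres: int) -> float:
    """Manoeuvre index: +0.3 below the goal number, zero at the goal, and slowly
    more negative above it (the slope used here is an assumed value)."""
    if actual_manoeuvres < goal_manoeuvres:
        return 0.3
    return -0.1 * (actual_manoeuvres - goal_manoeuvres)

# QXM would follow the same ratio form, but negative-only, with the aircraft's
# entry height standing in for the no-intervention exit height.
```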

Appendix 2. Functions for computing ATMQ(int): the air traffic management qualities for each controller intervention.

ATMQ(int) can be determined at any given time from the relationship between the previous state, the state following the intervention, and the desired state. For QPM, QFM and QXM, these states are final states projected over the remainder of the flight, on the assumption that no further intervention will be made.

where n = number of aircraft on the sector at the time of the intervention

QXM is computed from the final event within a vector, since it is a closure-type task attribute. Safety is a continuous attribute, and QSM for each intervention is therefore as already computed for ATMQ(fl), as given in Appendix 1.
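
As a companion to Appendix 1, the sketch below shows one assumed reading of how a projected quality might be combined over the n aircraft on the sector at the time of an intervention; the paper’s own formula is not reproduced in this text, so the simple averaging used here is an assumption.

```python
from typing import Sequence

def atmq_int(per_aircraft_quality: Sequence[float]) -> float:
    """Combine a projected quality (e.g. QPM, QFM or QXM) for one intervention
    over the n aircraft present on the sector when the intervention was made.
    Averaging is an assumed combination rule."""
    n = len(per_aircraft_quality)
    return sum(per_aircraft_quality) / n if n else 0.0
```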

 


Conception of the Cognitive Engineering design problem

John Dowell
Centre for HCI Design, City University, Northampton Square, London. EC1V 0HB

John Long
Ergonomics and HCI Unit, University College London, 26 Bedford Way, London. WC1H 0AP, UK.

Cognitive design, as the design of cognitive work and cognitive tools, is predominantly a craft practice which currently depends on the experience and insight of the designer. However the emergence of a discipline of Cognitive Engineering promises a more effective alternative practice, one which turns on the prescription of solutions to cognitive design problems. In this paper, we first examine the requirements for advancing Cognitive Engineering as a discipline. In particular, we identify the need for a conception which would provide the concepts necessary for explicitly formulating cognitive design problems. A proposal for such a conception is then presented.

1. Discipline of Cognitive Engineering

1.1. Evolution of Cognitive Design

A recurrent assumption about technological progress is that it derives from, or is propelled by, the application of scientific theory. Design is seen principally as an activity which translates scientific theory into useful artifacts. As such, design does not possess its own knowledge, other than perhaps as the derivative of a purer scientific knowledge. Yet close examination (Layton, 1974; Vincenti, 1993) shows this view to be contradicted by the facts. A more accurate analysis suggests that technology disciplines acquire and develop their own knowledge, which enables them to solve their design problems (Long and Dowell, 1996).

The analysis of “technology as knowledge” (Layton, 1974) recognises the variety of forms of technological knowledge, ranging from tacit ‘know how’ and ‘know what’, based on personal experience, to validated engineering principles. Consider the evolution of a new technology. New technologies invariably emerge from the “inspired tinkering” (Landes, 1969) of a few who see a direct route between innovation and exploitation. As an industry is established, ad hoc innovation is supplanted by more methodical practices through which the experience of prior problems is codified and re-used. Design is institutionalised as a craft discipline which supports the cumulation and sharing of techniques and lessons learnt. The knowledge accumulated is only marginally, or indirectly, derivative of scientific theory. In the case of computing technology, for example, Shaw has observed: “Computer science has contributed some relevant theory but practice proceeds largely independently of this organised knowledge (Shaw, 1990)”.

This same observation can be made of cognitive design, the activity of designing cognitive work and cognitive tools (including interactive computational tools). To date, the seminal successes in cognitive design have been principally the result of inspired innovation. The graphical user interface arose from the careful application of experience cast as design heuristics, for example, “Communicate through metaphors” (Johnson, Roberts, Verplank, Irby, Beard and Mackey, 1989). The spreadsheet is another example. More recent advances in “cognitive technologies”, such as those in groupware, dynamic visualisation techniques, and multimedia, are no different in arising essentially through craft practice based on innovation, experience and prior developments. Nevertheless, in the wake of these advances, a craft discipline has been established which supports the cumulation and sharing of knowledge of cognitive design.

However the history of technological disciplines also indicates that continued progress depends on the evolution of a corpus of validated theory to support them (Hoare, 1981; Shaw, 1990). Craft disciplines give way to engineering disciplines: personal experiential knowledge is replaced by design principles; ‘invent and test’ practices (that is to say, trial-and-error) are replaced by ‘specify then implement’ practices. Critically, design principles appear not to be acquired by translation of scientific theories. Rather, they are developed through the validation of knowledge about design problems and how to solve them.
The evolution of an engineering discipline is a visible requirement for progress in cognitive design. The requirement is apparent in at least three respects. First, cognitive design needs to improve its integration in systems development practices, and to ensure it has a greater influence in the early development life of products. Second, cognitive design needs to improve the reliability of its contributions to design, providing a greater assurance of the effectiveness of cognitive work and tools. Third, cognitive design needs to improve its learning process so that knowledge accumulated in successful designs can be made available to support solutions to new design problems. For at least these reasons, cognitive design must advance towards an engineering discipline. This paper is addressed to the evolution of such a discipline, a discipline of Cognitive Engineering.

1.2. Emergence of Cognitive Engineering

The idea of a discipline of Cognitive Engineering has been advocated consistently for more than a decade (Hollnagel and Woods, 1983; Norman, 1986; Rasmussen, Pejtersen, and Goodstein, 1994; Woods, 1994). Norman has described Cognitive Engineering as a discipline which has yet to be constructed but whose promise is to transform cognitive design by supplying the “principles that get the design to a pretty good state the first time around (Norman, 1986)”. The aims of Cognitive Engineering are, first, “to understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles of design”, and second, “to devise systems that are pleasant to use”. The critical phenomena of Cognitive Engineering include tasks, user action, user conceptual models and system image. The critical methods of Cognitive Engineering include approximation, and treating design as a series of trade-offs, including giving different priorities to design decisions (Norman, 1986).

Woods (1994) describes Cognitive Engineering as an approach to the interaction and cooperation of people and technology. Significantly, it is not to be taken as an applied Cognitive Science, seeking to apply computational theories of mind to the design of systems of cognitive work. Rather, Cognitive Engineering is challenged to develop its own theoretical base. Further, “Cognitive systems are distributed over multiple agents, both people and machines” (Woods, 1994) which cooperatively perform cognitive work. Hence the unit of analysis must be the joint and distributed cognitive system. The question which Cognitive Engineering addresses is how to maximise the overall performance of this joint system. Woods and Roth (1988) state that this question is not answered simply through amassing ever more powerful technology; they contrast such a technology-driven approach with a problem-driven approach wherein the “requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver”. Yet whether such an approach may be developed remains an open question: whether designers might be provided with the “concepts and techniques to determine what will be useful, or are we condemned to simply build what can be practically built and wait for the judgement of experience?” Woods and Roth re-state this as ultimately a question of whether “principle-driven design is possible”.

1.3. Discipline matrix of Cognitive Engineering

Cognitive Engineering is clearly an emerging discipline whose nucleus has been in research aiming to support cognitive design. The breadth and variety of its activity has continued to grow from its inception and the question now arises as to how the evolution of this discipline can be channelled and hastened. It is here that reference to Kuhn’s analysis of paradigms in the (physical and biological) sciences may offer guidance (Kuhn, 1970). Specifically, Kuhn identifies the principal elements of a ‘discipline matrix’ by which a discipline emerges and evolves. We might similarly interpret the necessary elements of the ‘discipline matrix’ of Cognitive Engineering.

The first element described by Kuhn is a “shared commitment to models” which enables a discipline to recognise its scope, or ontology. (Kuhn gives the example of a commitment to the model of heat conceived as the kinetic energy of the constituent parts of masses). For Cognitive Engineering, we may interpret this requirement as the need to acquire a conception of the nature and scope of cognitive design problems. Similarly, as Carroll and Campbell have argued, “the appropriate ontology, the right problems and the right ways of looking at them … have to be in place for hard science to develop (Carroll and Campbell, 1986)”. Features of a conception for Cognitive Engineering are already apparent, for example, in Wood’s assertion that the unit of analysis must be the distributed cognitive system.

A second element of the disciplinary matrix is “values” which guide the solution to problems. Kuhn gives the example of the importance which science attaches to prediction. Cognitive Engineering also needs to establish its values; an example is the value attached to design prescription: “(getting) the design to a pretty good state the first time around (Norman, 1986)”.

A third element is “symbolic generalisations” which function both as laws and definitions for solving problems. Kuhn gives the example of Ohm’s Law which specifies the lawful relationships between the concepts of resistance, current and voltage. For Cognitive Engineering, we may interpret this requirement as the need for engineering principles which express the relations between concepts and which enable design prescription. The need for engineering principles is one which has been recognised by both Norman and by Woods.
The final element of the disciplinary matrix is “exemplars” which are instances of problems and their solutions. Exemplars work by exemplifying the use of models, values and symbolic generalisations, and they support reasoning about similarity relations with new and unsolved problems. Kuhn gives the example of the application of Newton’s second law to predicting the motion of the simple pendulum. (Note, Newton’s second law embodies the concept of inertia established in the model of mechanics which commences the Principia). Cognitive Engineering too must acquire exemplars, but here those exemplars are instances of solutions to cognitive design problems, together with the design practices which produced those solutions. Such design exemplars must illustrate the application of the conception, values and design principles and must allow designers to view new cognitive design problems as similar to problems already solved.

1.4. Requirements for a conception

If this analysis of the discipline matrix of Cognitive Engineering is correct, then it is also apparent that the necessary elements substantially remain to be constructed. None are particularly apparent in the craft-like discipline of Human Factors which, for example, does not possess engineering principles, the heuristics it possesses being either ‘rules of thumb’ derived from experience or guidelines derived informally from psychological theories and findings.

This paper is concerned with the requirement for a conception of cognitive design. As later explained, we believe this is the element of the Cognitive Engineering matrix which can and should be established first. The current absence of a conception of cognitive design is well recognised; for example, Barnard and Harrison (1989) called for an “integrating framework …. that situates action in the context of work …. and relates system states to cognitive states”, a call which still remains unanswered. However it would be wrong to suggest that currently there is no available conception of cognitive design. Rather, there are many alternative and conflicting conceptions, most being informal and partial. Hollnagel (1991) was able to characterise three broad kinds of conception: the computer as ‘interlocutor’, with cognitive work seen as a form of conversation with cognitive tools; the “human centred” conception, wherein cognitive work is understood in terms of the user’s experience of the world and its mediation by tools; and the ‘systems understanding’ in which the worker and tools constitute a socio-technical system acting in a world. The last form of conception most clearly conforms with Woods’ requirements for Cognitive Engineering, as detailed above.

Previously we have proposed a conception of the cognitive design problem (Dowell and Long, 1989; see also, Long and Dowell, 1989) intended to contribute to the discipline matrix of Cognitive Engineering. That proposal is re-stated in revised form below.

2 Conception of the Cognitive Engineering design problem

Cognitive design concerns the problems of designing effective cognitive work, and the tools with which we perform that work. Our conception of the general problem of Cognitive Engineering is formulated over concepts of cognitive work and tools, and the need to prescribe effective solutions to the cognitive design problems they present. The concepts are highlighted on first reference. A glossary appears at the end of the paper.

Cognitive work is performed by worksystems which use knowledge to produce intended changes in environments, or domains. Worksystems consist of both human activity and the tools which are used in that activity (Mumford, 1995). Domains are organised around specific goals and contain both possibilities and constraints. For example, the domain of Air Traffic Management is defined by the goals of getting aircraft to their destinations safely, on time, and with a minimum of fuel use, etc. This domain has possibilities, such as vacant flight levels and the climbing abilities of different aircraft; it also has constraints, such as rules about the legal separation of aircraft. Cognitive work occurs when a particular worksystem uses knowledge to intentionally realise the possibilities in a particular domain to achieve goals. The air traffic controllers, for example, use their knowledge of individual flights, and of standard routes through some airspace, to instruct aircraft to maintain separations and best flight tracks. In this way, the controllers act intentionally to provide a desired level of safety and ‘expedition’ to all air traffic.

Cognitive tools support the use of knowledge in cognitive work. Those tools provide representations of domains, processes for transforming those representations, and a means of expressing those transformations in the domains (Simon, 1969). The radar and other devices in the Air Traffic Controller’s suite, for example, provide representations which enable the controller to reason about the state of the domain, such as aircraft proximities, and to transform those representations, including issuing instructions to pilots, so expressing the controller’s activity in the air traffic management domain. The controller’s tools embed their designers’ intention of helping the controller achieve their goals. In spite of the way we may often casually describe what we are doing, it is never the case that our real intention is one of using a tool. Rather, our intention is to do ‘something’ with the tool. The difficulty we have in describing exactly what that something is stems from the fact that the domains in which we perform cognitive work are often virtual worlds, far removed from physical objects (for instance, computer-mediated foreign exchange dealing).

The worksystem clearly forms a dualism with its domain: it therefore makes no sense to consider one in isolation from the other (Neisser, 1987). If the worksystem is well adapted to its domain, it will reflect the goals, regularities and complexities in the domain (Simon, 1969). It follows that the fundamental unit of analysis in cognitive design must be the worksystem whose agents are joined by the common intention of performing work in the domain (see also Rasmussen and Vicente, 1990; Woods, 1994). Within the worksystem, human activity is said to be intentional; the behaviour of tools is said to be intended.
The following sections outline a conception of cognitive work informed by systems design theory (e.g., Simon, 1969; Checkland, 1981), ecological systems theory (e.g., Neisser, 1987), cognitive science (e.g., Winograd and Flores, 1986) and Cognitive Engineering theory (e.g., Woods, 1994). It provides a related set of concepts of the worksystem as a system of human and device agents which use knowledge to perform work in a domain.

2.1 Domains of cognitive work

The domains of cognitive work are abstractions of the ‘real world’ which describe the goals, possibilities and constraints of the environment of the worksystem. Beltracchi (1987; see Rasmussen and Vicente, 1990), for example, used the Rankine cycle to describe the environment of process controllers. However, for most domains, such formal models and theories are not available, even for ubiquitous domains such as document production. Further, such theories do not provide explicit or complete abstractions of the goals, possibilities and constraints for the decision-making worksystem. For example, the Rankine cycle leaves implicit the goal of optimising energy production (and the sub-goals of cycle efficiency, etc.), and is incomplete with regard to the variables of the process (e.g., compressor pressure) which might be modified. The conception must therefore provide concepts for expressing the goals, possibilities and constraints for particular instances of domains of cognitive work.

Domains can be conceptualised in terms of objects identified by their attributes. Attributes emerge at different levels of a hierarchy of complexity within which they are related (energy cycle efficiency and feedwater temperature, for one example; the safety of a set of air traffic and the separations of individual aircraft, for another). Attributes have states (or values) and may exhibit an affordance for change. Desirable states of attributes we recognise as goals, for instance, specific separations between aircraft, and specific levels of safety of the air traffic being managed. Work occurs when the attribute states of objects are changed by the behaviours of a worksystem whose intention it is to achieve goals. However, work does not always result in all goals being achieved all of the time, and the difference between the goals and the actual state changes achieved is expressed as task quality.
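
The following short Python sketch, offered only as an illustration and not drawn from the paper, restates these concepts: objects with attributes, attribute states and goals, and task quality as the residual difference between goal and achieved state.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Attribute:
    state: float          # current value of the attribute
    goal: float           # desired value (the goal state)
    affords_change: bool  # whether the worksystem can act on it

@dataclass
class DomainObject:
    name: str
    attributes: Dict[str, Attribute]

def task_quality(obj: DomainObject) -> Dict[str, float]:
    """Per-attribute difference between goal and achieved state; zero is ideal."""
    return {name: a.goal - a.state
            for name, a in obj.attributes.items() if a.affords_change}

# e.g. an aircraft whose 'separation' attribute has a goal set by the legal
# separation rules; any shortfall in the achieved state contributes to task quality.
aircraft = DomainObject("BA123", {"separation": Attribute(state=4.0, goal=5.0, affords_change=True)})
print(task_quality(aircraft))   # {'separation': 1.0}
```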

The worksystem has a boundary enclosing all user and device behaviours whose intention is to achieve the same goals in a given domain. Critically, it is only by defining the domain that the boundary of the worksystem can be established: users may exhibit many contiguous behaviours, and only by specifying the domain of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified. Hence, the boundary may enclose the behaviours of more than one device as, for example, when a user is working simultaneously with electronic mail and bibliographic services provided over a network. By the same token, the worksystem boundary may also include more than one user as, for example, in the case of the air traffic controller and the control chief making decisions with the same radar displays.

The centrality of the task domain has not always been accepted in cognitive design research, with significant theoretical consequences. Consider the GOMS model (Card, Moran and Newell, 1983). Within this model, goals refer to states of “the user’s cognitive structure” referenced to the user interface; actions (methods) are lower level decompositions of goals. Hence a seminal theory in cognitive design leaves us unable to distinguish in kind between our goals and the behaviours by which we seek to achieve those goals.

2.2 Worksystem as cognitive structures and behaviours

Worksystems have both structures and behaviours. The structures of the worksystem are its component capabilities which, through coupling with the domain, give rise to behaviour. Behaviours are the activation (see Just and Carpenter, 1992; also Hoc, 1990) of structures and ultimately produce the changes in a domain which we recognise as work being performed.

Consider the structures and behaviours of a text editor. A text editor is a computer program for writing, reading and storing text. Text is a domain object and is both real and virtual. At a low level of description, usually invisible to the user, text appears as data files stored in a distinct code. At a higher level, text consists of information and knowledge stored in a format which the user may choose. Text objects have attributes, such as character fonts at one extreme and the quality of prose at the other. Generally, the domain is represented by the text editor only partially and only at low and intermediate levels. The program is a set of structures, including functions, such as formatting commands, as well as menus, icons and windows. In simple text editors, the program is a fixed, invariant structure; more sophisticated editors allow the user to modify the structure – users can choose which functions are included in the program, which are presented on the menus, and the parameters of the processes they specify. These structures are activated in the behaviours of the text editor when text is created, revised and stored. Higher level editor behaviours would include browsing and creating tables of contents through interaction with the user. With these behaviours, text which has themes, style and grammar is created by users.
As this example indicates, structures consist of representations (e.g., for storing text) and processes (e.g., text editing processes). Behaviours (e.g., creating and editing text) are exhibited through activating structures when processes (e.g., functions) transform representations (e.g., text). Behaviours are the processing of representations.
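
A toy Python sketch, again illustrative rather than taken from the paper, can make the structure/behaviour distinction concrete: the stored text is a representation, the editing functions are processes, and a behaviour is the activation of those structures when a process transforms the representation.

```python
from typing import List

class TextEditor:
    """Structures: a representation (stored text) plus processes that transform it."""
    def __init__(self) -> None:
        self.lines: List[str] = []        # representation: the stored text

    def insert_line(self, line: str) -> None:       # process
        self.lines.append(line)

    def uppercase_line(self, index: int) -> None:   # process
        self.lines[index] = self.lines[index].upper()

# Behaviour: the processing of the representation, e.g. creating and then revising text.
editor = TextEditor()
editor.insert_line("cognitive tools support the use of knowledge")
editor.uppercase_line(0)
```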

2.3 Cognitive structures and behaviours of the user

Users too can be conceptualised in terms of structures and behaviours by limiting our concern for the person to a cognitive agent performing work. The user’s cognitive behaviours are the processing of representations. So, perception is a process where a representation of the domain, often mediated by tools, is created. Reasoning is a process where one representation is transformed into another representation. Each successive transformation is accomplished by a process that operates on associated representations. The user’s cognitive behaviours are both abstract (i.e., mental) and physical. Mental behaviours include perceiving, knowing, reasoning and remembering; physical behaviours include looking and acting. So, the physical behaviour of looking might have entailed the mental behaviours of reasoning and remembering, that is, why and where to look. These behaviours are related in that mental behaviours generally determine, and are expressed by, the user’s physical behaviours. A user similarly possesses cognitive structures, an architecture of processes and representations containing knowledge of the domain and of the worksystem, including the tools and other agents with which the user interacts.

Propositions, schema, mental models and images are all proposals for the morphology of representations of knowledge. The organisation of the memory system, associative and inductive mechanisms of learning, and constraints on how information can be represented (such as innate grammatical principles) have all been proposed as aspects of cognition and its structural substrates.

However, such theories established in Cognitive Science may not, in fact, have any direct relevance for the user models needed for designing cognitive work. To assume otherwise would be to conform with the view of (cognitive) design as an applied (cognitive) science, a view which we rejected at the beginning of this paper. Simply, the computational theory of mind is not concerned with how the symbols manipulated by cognition have a meaning external to the processes of manipulation, and therefore how they are grounded in the goals, constraints and possibilities of task domains (Hutchins, 1994; McShane, Dockrell and Wells, 1992). As a consequence, it is very likely the case that many theories presented by Cognitive Science to explain the manipulation of symbols cannot themselves be grounded in particular domains (see also Vicente,1990).

It is rather the case that Cognitive Engineering must develop its own models of the user as cognitive agent. In this development, the ecology of user cognition with the domain must be a fundamental assumption, with models of user cognition reflecting the nature of the domains in which cognitive work is performed. Such an assumption underpins the validity of models in Cognitive Engineering: “If we do not have a good account of the information that perceivers are actually using, our hypothetical models of their information processing are almost sure to be wrong. If we do have such an account, however, such models may turn out to be almost unnecessary”(Neisser, 1987).

2.4 Worksystem as hierarchy

The behaviours of the worksystem emerge at hierarchical levels where each level subsumes the underlying levels. For example, searching a bibliographic database for a report subsumes formulating a database query and perhaps iteratively revising the query on the basis of the results obtained. These behaviours themselves subsume recalling features of the report being sought and interpreting the organisation of the database being accessed.
The hierarchy of behaviours ultimately can be divided into abstract and physical levels. Abstract behaviours are generally the extraction, storage, transformation and communication of information. They represent and process information concerning: domain objects and their attributes, attribute relations and attribute states, and goals. Physical behaviours express abstract behaviours through action. Because they support behaviours at many levels, structures must also exist at commensurate levels.
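
The bibliographic-search example above can be written out, purely as an illustration, as a nested structure in which each level of behaviour subsumes those beneath it, bottoming out in physical action; the decomposition shown is an assumption, not an analysis from the paper.

```python
# Assumed decomposition of the worked example: abstract behaviours subsume
# lower-level abstract and physical behaviours.
behaviour_hierarchy = {
    "search bibliographic database for a report": {
        "formulate database query": {
            "recall features of the report being sought": {},  # abstract (mental)
            "type the query terms": {},                        # physical expression
        },
        "revise query on the basis of results": {
            "interpret the organisation of the database": {},  # abstract (mental)
            "edit the query terms": {},                        # physical expression
        },
    }
}
```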

The hierarchy of worksystem behaviours reflects the hierarchy of complexity in the domain. The worksystem must therefore have behaviours at different levels of abstraction equivalent to the levels at which goals are identified in the domain. Hence a complete description of the behaviours of an authoring worksystem, for example, must describe not only the keystroke level behaviours relating to the goal of manipulating characters on a page, but it must also describe the abstract behaviours of composition which relate to the goals of creating prose intended to convey meaning. Traditional task analyses describe normative task performance in terms of temporal sequences of overt user behaviours. Such descriptions cannot capture the variability in the tasks of users who work in complex, open domains. Here, user behaviour will be strongly determined by the initial conditions in the domain and by disturbances from external sources (Vicente, 1990). In complex domains, the same task can be performed with the same degree of effectiveness in quite different ways. Traditional task analyses cannot explain the ‘intelligence’ in behaviour because they do not have recourse to a description of the abstract and mental behaviours which are expressed in physical behaviours.

The hierarchy of worksystem behaviours is distributed over the agents and tools of the worksystem (i.e., its structures). It is definitional of systems (being ‘greater than the sum of their parts’) that they are composed from sub-systems where “the several components of any complex system will perform particular sub-functions that contribute to the overall function” (Simon, 1969). The functional relations, or “mutual influence” (Ashby, 1956), between the agents and between the agents and tools of the worksystem are interactions between behaviours. These interactions fundamentally determine the overall worksystem behaviours, rather than the behaviours of individual agents and tools alone. The user interface is the combination of structures of agents and tools supporting specific interacting behaviours (see Card, Moran and Newell, 1983). Norman (1986) explains that the technological structures of the user interface are changed through design, whilst the user’s cognitive structures of the user interface are changed through experience and training.

2.5 Costs of cognitive work

Work performed by the worksystem will always incur resource costs which may be structural or behavioural. Structural costs will always occur in providing the structures of the worksystem. Behavioural costs will always occur in using structures to perform work.

Human structural costs are always incurred in learning to perform cognitive work and to use cognitive tools. They are the costs of developing and maintaining the user’s knowledge and cognitive skills through education, training and gaining experience. The notion of learnability refers generally to the level of structural resource costs demanded of the user.
Human behavioural costs are always incurred in performing cognitive work. They are both physical and mental. Physical costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and scrutinising a monitor; they may be generally expressed as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be generally recognised as mental workload. Behavioural cognitive costs are evidenced in fatigue, stress and frustration. The notion of usability refers generally to the level of behavioural resource costs demanded of the user.

2.6 Worksystem performance

The performance of a worksystem relates to its achievement of goals, expressed as task quality, and to the resource costs expended. Critically then, the behaviour of the worksystem is distinguished from its performance, in the same way that ‘how the system does what it does’ can be distinguished from ‘how well it does it’ (see also: Rouse, 1981; Dasgupta, 1991).

This concept of performance ultimately supports the evaluation of worksystems. For example, by relating task quality to resource costs we are able to distinguish between two different designs of cognitive tool which, whilst enabling the same goals to be achieved, demand different levels of the user’s resource costs. The different performances of the two worksystems which embody the tools would therefore be discriminated. Similarly, consider the implications of this concept of performance for the concern with user error: it is not enough for user behaviours simply to be error-free; although eliminating errorful behaviours may contribute to the best performance possible, that performance may still be less than desired. On the other hand, although user behaviours may be errorful, a worksystem may still achieve a desirable performance. Optimal human behaviour uses a minimum of resource costs in achieving goals. However, optimality can only be determined categorically against worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it.

This concept of performance allows us to recognise an economics of performance. Within this economy, structural and behavioural costs may be traded-off both within and between the agents of the worksystem, and those costs may be traded off also with task quality. Users may invest structural costs in training the cognitive structures needed to perform a specific task, with a consequent reduction in the behavioural costs of performing that task. Users may expend additional behavioural costs in their work to compensate for the reduced structural costs invested in the under-development of their cognitive tools.
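
The sketch below, an assumed illustration in Python rather than a formula from the paper, shows how this economy can be expressed: two designs achieve the same task quality while demanding different structural and behavioural costs, and so differ in performance.

```python
from dataclasses import dataclass

@dataclass
class Performance:
    task_quality: float       # zero taken as ideal, following the concepts above
    structural_costs: float   # e.g. training needed to acquire cognitive structures
    behavioural_costs: float  # e.g. workload incurred while performing the task

    def total_costs(self) -> float:
        return self.structural_costs + self.behavioural_costs

# Same goals achieved, different distribution of costs:
design_a = Performance(task_quality=0.0, structural_costs=2.0, behavioural_costs=8.0)
design_b = Performance(task_quality=0.0, structural_costs=6.0, behavioural_costs=3.0)

# design_b trades higher structural costs (more training) for lower behavioural
# costs (less workload); the two worksystems therefore differ in performance.
```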

The economics of worksystem performance are illustrated by Sperandio’s observation of air traffic controllers at Orly control tower (Sperandio 1978). Sperandio observed that as the amount of traffic increased, the controllers would switch control strategies in response to increasing workload. Rather than treating each aircraft separately, the controllers would treat a number of following aircraft as a chain on a common route. This strategy would ensure that safety for each aircraft was still maintained, but sacrificed consideration of time keeping, fuel usage, and other expedition goals. This observation of the controllers’ activity can be understood as the controller modifying their (generic) behaviours in response to the state of the domain as traffic increases. In effect, the controllers are trading-off their resource costs, that is, limiting their workload, against less critical aspects of task quality. The global effect of modifying their behaviour is a qualitative change in worksystem performance. Recent work in modelling air traffic management (Lemoine and Hoc, 1996) aims to dynamically re-distribute cognitive work between controllers and tools in order to stabilise task quality and controller resource costs, and therefore to stabilise worksystem performance.

2.7 Cognitive design problems

Engineering disciplines apply validated models and principles to prescribe solutions to problems of design. How then should we conceive of the design problems which Cognitive Engineering is expected to solve? It is commonplace for cognitive design to be described as a ‘problem solving activity’, but such descriptions invariably fail to say what might be the nature and form of the problem being solved. Where such reference is made, it is usually in domain specific terms, and a remarkable variety of cognitive design problems is currently presented, ranging from the design of teaching software for schools to the design of remote surgery. A recent exception can be found in Newman and Lamming (1995). Yet the ability to acquire knowledge which is valid from one problem to the next requires an ability to abstract what is general from those two problems. We presume that instances of cognitive design problems each embody some general form of design problem and further, that they are capable of explicit formulation. The following proposes that general form.

Cognitive work can be conceptualised in terms of a worksystem and a domain and their respective concepts. In performing work, the worksystem achieves goals through transformations in the domain, and it also incurs resource costs (Figure 1).

The aim of design is therefore ‘to specify worksystems which achieve a desired level of performance in given domains’.

Figure 1. Worksystem and a domain

More formally, we can express the general design problem of Cognitive Engineering as follows:

Specify then implement the cognitive structures and behaviours of a worksystem (W) which performs work in a given domain (D) to a desired level of performance (P), expressed in terms of task quality (ΣQ) and cognitive user costs (ΣKU).

An example of such a cognitive design problem formulated in these terms might refer to: the requirement for specifying then implementing the representations and processes as the knowledge of an air traffic management worksystem which is required to manage air traffic of a given density with a specified level of safety and expedition and within an acceptable level of costs to the controllers. This problem expression would of necessity need to be supported by related models of the air traffic management worksystem and domain (see Dowell, in prep).

By its reference to design practice as ‘specify then implement’, this expression of the general cognitive design problem is equivalent to the design problems of other engineering disciplines; it contrasts with the trial and error practices of craft design. However, the relationship between the general cognitive design problem and the design problems addressed by other engineering disciplines associated with the design of cognitive tools, such as Software Engineering and ‘Hardware Engineering’, is not explicitly specified. Nevertheless, it is implied that those other engineering disciplines address the design of the internal behaviours and structures of cognitive tools embedded in the worksystem, with concern for the resource costs of those tools.

3. Prospect of Cognitive Engineering principles

The deficiencies of current cognitive design practices have prompted our investigation of Cognitive Engineering as an alternative form of discipline. Our analysis has focused on the disciplinary matrix of Cognitive Engineering consisting of a conception, values, design principles and exemplars. The analysis assumes that Cognitive Engineering can make good ‘the deficiencies’. First, the integration of cognitive design in systems development would be improved because Cognitive Engineering principles would enable the formulation of cognitive design problems and the early prescription of design solutions. Second, the efficacy of cognitive design would be improved because Cognitive Engineering principles would provide the guarantee so lacking in cognitive design which relies on experiential knowledge. Third, the efficiency of cognitive design would be improved through design exemplars related to principles supporting the re-use of knowledge. Fourth, the progress of cognitive design as a discipline would be improved through the cumulation of knowledge in the form of conception, design principles and exemplars.

However, we observe that these elements of the disciplinary matrix required by Cognitive Engineering remain to be established. And since not all are likely to be established at the same time, the question arises as to which might be constructed first. A conception for Cognitive Engineering is a pre-requisite for formulating engineering principles. It supplies the concepts and their relations which express the general problem of cognitive design and which would be embodied in Cognitive Engineering principles.

To this end, we have proposed a conception for Cognitive Engineering in this paper, one which we contend is appropriate for supporting the formulation of Cognitive Engineering principles. The conception for Cognitive Engineering is a broad view of the Cognitive Engineering general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem.

To conclude, it might be claimed that the craft nature of current cognitive design practices is dictated by the nature of the problem they address. In other words, the indeterminism and complexity of the problem of designing cognitive systems (the softness of the problem) might be claimed to preclude the application of prescriptive knowledge. We believe this claim fails to appreciate that the current absence of prescriptive design principles may rather be symptomatic of the early stage of the discipline’s development. The softness of the problem needs to be independently established. Cognitive design problems are, to some extent, hard: human behaviour in cognitive work is deterministic to some useful degree, and sufficiently so to support, to a useful degree, the design of interactive worksystems.

The extent to which Cognitive Engineering principles might be realisable in practice remains to be seen. It is not supposed that the development of effective systems will never require craft skills in some form, and engineering principles are not incompatible with craft knowledge. Yet the potential of Cognitive Engineering principles for the effectiveness of the discipline demands serious consideration. The conception presented in this paper is intended to contribute towards the process of formulating such principles.

Acknowledgement

We acknowledge the critical contributions to this work of our colleagues, past and present, at University College London. John Dowell and John Long hold a research grant in Cognitive Engineering from the Economic and Social Research Council.

References

Ashby W. R., (1956). An introduction to cybernetics. Methuen: London.

Barnard P. and Harrison M., (1989). Integrating cognitive and system models in
human computer interaction. In: Sutcliffe A. and Macaulay L. (ed.s). People and Computers V. Proceedings of the Fifth Conference of the BCS HCI SIG, Nottingham 5-8 September 1989. Cambridge University Press, Cambridge.

Beltracchi L., (1987). A direct manipulation interface for water-based Rankine cycle heat engines. IEEE Transactions on Systems, Man and Cybernetics, SMC-17, pp 478-487.

Card, S. K., Moran, T., Newell, A., (1983). The Psychology of Human Computer Interaction. Erlbaum: New Jersey.

Carroll J.M., and Campbell R. L., 1986, Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, 2, 227-249.

Checkland P., (1981). Systems thinking, systems practice. John Wiley and Sons: Chichester.

Dasgupta, S., (1991). Design theory and computer science. Cambridge University Press: Cambridge.

Dowell J. and Long J.B., (1989). Towards a conception for an engineering discipline of human factors. Ergonomics, 32, 11, pp 1513-1535.

Dowell J., (in prep). The design problem of Air Traffic Management as an exemplar for Cognitive Engineering.

Hoare C.A.R. , 1981. Professionalism. Computer Bulletin, September 1981.

Hoc J.M., (1990). Planning and understanding: an introduction. In Falzon P. (ed.), Cognitive Ergonomics: Understanding Learning and Designing Human Computer Interaction. Academic Press: London.

Hollnagel E. and Woods D.D., (1983). Cognitive systems engineering: new wine in
new bottles. International Journal of Man-Machine Studies, 18, pp 583-600.

Hollnagel E., (1991). The phenotype of erroneous actions: implications for HCI
design. In Alty J. and Weir G. (ed.s), Human-computer interaction and complex
systems. Academic Press: London.

Hutchins E., (1994). Cognition in the wild. MIT Press: Cambridge, Mass.

Johnson J., Roberts T., Verplank W., Irby C., Beard M. and Mackey K., (1989). The
Xerox Star: a retrospective. IEEE Computer, Sept, 1989, pp 11-29.

Just M.A. and Carpenter P.A., 1992 A capacity theory of comprehension: individual
differences in working memory, Psychological Review, 99, 1, 122-149.

Kuhn T.S., (1970). The structure of scientific revolutions. 2nd edition. University of
Chicago press: Chicago.

Landes D.S., (1969). The unbound Prometheus. Cambridge University Press: Cambridge.

Layton E., (1974). Technology as knowledge. Technology and Culture, 15, pp 31-41.

Lemoine M.P. and Hoc J.M., (1996) Multi-level human machine cooperation in air
traffic control: an experimental evaluation. In Canas J., Green T.R.G. and Warren C.P (ed.s)Proceedings of ECCE-8. Eighth European Conference on Cognitive Ergonomics. Granada, 8-12 Sept, 1996.

Lenorovitz, D.R. and Phillips, M.D., (1987). Human factors requirements engineering for air traffic control systems. In Salvendy, G. (ed.) Handbook of Human Factors. Wiley, London. 1987.

Long J.B. and Dowell J., (1989). Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering . Published in: Sutcliffe A. and Macaulay L. (ed.s). People and Computers V. Cambridge University Press, Cambridge.

Long J.B. and Dowell J., (1996). Cognitive engineering human computer interactions. The Psychologist, 9, pp 313-317.

McShane J., Dockrell J. and Wells A., (1992). Psychology and cognitive science. The Psychologist, 5, pp 252-255.

Mumford E., 1995. Effective requirements analysis and systems design: the ETHICS method. Macmillan.

Neisser U., 1987. From direct perception to conceptual structure. In U. Neisser, Concepts and conceptual development: ecological and intellectual factors in categorisation, CUP .

Newman W. and Lamming M., 1995. Interactive System Design. Addison-Wesley.

Norman D.A., (1986). Cognitive engineering. In Norman D.A. and Draper S.W.,
(ed.s)User Centred System Design. Erlbaum: Hillsdale, NJ.

Phillips M.D. and Melville B.E., (1988). Analyzing controller tasks to define air
traffic control system automation requirements. In Proceedings of the conference on human error avoidance techniques, Society of Automotive Engineers. Warrendale: Penn.. pp 37-44.

Phillips M.D. and Tischer K., (1984). Operations concept formulation for next generation air traffic control systems. In Shackel B. (ed.), Interact ’84, Proceedings of the first IFIP conference on Human-Computer Interaction. Elsevier Science B.V.: Amsterdam. pp 895-900.

Rasmussen J. and Vicente K., (1990). Ecological interfaces: a technological imperative in high tech systems? International Journal of Human Computer Interaction, 2 (2) pp 93-111.

Rasmussen J., Pejtersen A., and Goodstein L., (1994) Cognitive Systems Engineering. New York: John Wiley and Sons.

Rouse W.B., (1980). Systems engineering models of human machine interaction. Elsevier: North Holland.

Shaw M., (1990) Prospects for an engineering discipline of software. IEEE Software, November 1990.

Simon H.A., (1969). The sciences of the artificial. MIT Press: Cambridge Mass..

Sperandio, J.C., (1978). The regulation of working methods as a function of
workload among air traffic controllers. Ergonomics, 21, 3, pp 195-202.

Vicente K., (1990). A few implications of an ecological approach to human factors. Human Factors Society Bulletin, 33, 11, pp 1-4.

Vincenti W.G., (1993). What engineers know and how they know it. Johns Hopkins University Press: Baltimore.

Winograd T. and Flores F., (1986). Understanding computers and cognition. Addison Wesley: Mass..

Woods D.D. and Roth E.M., (1988). Cognitive systems engineering. In Helander M. (ed.) Handbook of Human Computer Interaction. Elsevier: North-Holland.

Woods D.D., (1994). Observations from studying cognitive systems in context. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (Keynote address).
