
1984/1985 Vinai Kumar

Date of MSc 1984/85

Project Title

Evaluating the Usability of Laptop Computers (Sponsored by British Telecom, Ipswich)

Pre-MSc Background:

B.E. Mechanical Engineering (University of Roorkee, Roorkee, India)

Design Training programme at the National Institute of Design (NID), Ahmedabad, India

Pre-MSc View of HCI/Cognitive Ergonomics:

Between 1981 and 1984, I studied (Classical) Ergonomics on my own and applied it in my teaching at the National Institute of Design (NID), Ahmedabad, India. I had no knowledge of HCI or Cognitive Ergonomics at this time.

Post-MSc View of HCI/Cognitive Ergonomics:

In my view, the inputs in HCI and Cognitive Ergonomics were ahead of their time in 1984-85. Cognitive Ergonomics immensely helped me to understand the fallibility of human decision-making and action-taking in complex and dynamic real-life situations. These inputs later led me to develop a human-centred systems approach to problem solving in business environments. This is an easy-to-apply approach, useful for engineers, designers and business managers.

Subsequent-to-MSc View of HCI/Cognitive Ergonomics

I strongly feel that Cognitive Ergonomics is a discipline which must be taught to all major decision makers in Education, Industry, Business and Government. This will enable decision makers and managers to understand why so many man-made systems fail or do not perform as desired and expected. Problem solving with an understanding of the cognitive processes of humans in a target context will help to evolve and implement more workable and sustainable solutions.

Additional Reflections

Educational frameworks in engineering and business management are too narrowly focused on technology and selling respectively. In real life, however, any user or consumer would ideally like a product or service that is well integrated across aesthetics, usability, technical performance and affordability. A collaborative and interdisciplinary problem-solving approach is the key to designing and managing appropriate value for a user. Cognitive science, ergonomics and systems thinking (which are rarely taught to professional problem solvers) are essential components of a holistic and integrative problem-solving process.

Professor John Long influenced me in a very significant way by leading me into the realm of cognitive science and ergonomics. This was a life-changing phase, particularly in the 1980s, when HCI and Cognitive Ergonomics were at a nascent stage (there were no books on these subjects at that time!). Back in India, these terms were totally alien even to those working in ergonomics and design. I sincerely thank Prof. John for always encouraging me, and for appreciating my work during the programme at UCL. Although I would have liked to work further, and at a deeper level, in Cognitive Ergonomics and Systems Thinking, I was still able to contribute to the development of a new paradigm for the Industrial Design process at both the educational and professional levels. A large body of students and working designers benefited from this new way of looking at design (which had usually been aesthetics- and style-focused). In the 1990s, a significant number of my design students, inspired by my inputs in design projects, joined and influenced the usability and user experience design initiatives of the Indian IT industry.

Most of my thoughts, beliefs and professional work are and will continue to be based on cognitive ergonomics and systems thinking in design and management.

 


A Preliminary Model of the Planning and Control of the Combined Response to Disaster

 

Becky Hill & John Long 

Ergonomics & HCI Unit, University College London,

26 Bedford Way, London WC1H 0AP, England.

Tel: +44 (0)171 387 7050.

Fax: +44 (0)171 580 1100.

E-mail: b.hill@ucl.ac.uk.

ABSTRACT 

The emergency management combined response system is a planning and control system, set up to manage disasters. This paper presents a preliminary model of the planning and control of the combined response system and its domain. The model is based on a framework for modelling planning and control for multiple task work and on data from a training scenario. The application of the framework has extended its scope to the new domain of emergency management combined response. In addition, the preliminary model is used to identify tasks and behaviours of the combined response system which highlight issues of co-ordination.

Keywords 

Planning and control, multiple tasks, emergency management

INTRODUCTION 

This paper presents an informal analytic assessment of a model of the emergency management combined response (EMCR) worksystem with respect to data from a training scenario. The model was derived by applying a framework for modelling planning and control in multiple task work (PCMT) to the data from an emergency management training scenario. This framework has previously been applied to develop models of three office administration planning and control domains: Medical Reception (Hill, Long, Smith and Whitefield, 1995); Secretarial Office Administration (Smith, Hill, Long and Whitefield, 1992); and Solicitors' Legal Work. The intention here was to extend the scope of the PCMT framework by using it to model a domain other than office administration. The first section of the paper characterises the domain to be modelled, identifying issues with the current design of the EMCR worksystem. The following section outlines the PCMT framework. Next, the data collection, training scenario and example data are described. The model of PCMT-EMCR is then presented. Using the model to identify overlap between worksystem behaviours is exemplified. Last, issues related to the use of this framework for modelling the domain of EMCR are identified, along with plans for future work.

EMCR WORKSYSTEM 

The planning and control system which is set up for emergency response to disaster is that of the ‘combined’ response (EMCR). The EMCR worksystem has a command and control organisation with a three-tier structure. Different agencies use this structure to organise their own planning behaviours, so that they co-ordinate effectively with one another. The three levels of command and control are operational, tactical and strategic. At each level, the agencies have their own commander for co-ordinating the response. At the strategic level, these commanders make up a ‘senior co-ordinating group’. The operational response is carried out by each agency, concentrating on its specific roles (tasks) within its areas of responsibility. For example, the Fire Service fights fires. The tactical response determines the priority in allocating resources, for example additional fire engines. It also plans and co-ordinates the overall response, obtaining other resources as required. The strategic co-ordinating group formulates the overall policy within which the response to a major incident is made. This EMCR worksystem has been specified to ensure better co-ordination of the different agencies involved in the response. Co-ordination in this context is defined as the ‘harmonious integration of the expertise of all the agencies involved with the object of effectively and efficiently bringing the incident to a successful conclusion’ (Emergency Planning College document, 1995). The principal agencies involved are the Police, Fire Service and Ambulance Service. The primary objectives for all the emergency services are: ‘to save life, prevent escalation of the disaster, to relieve suffering, to facilitate investigation of the incident, safeguard the environment, protect property and restore normality’ (Home Office Publication, 1994).

However, a succession of public enquiry documents analysing various disasters have identified co-ordination problems within EMCR. The Home Office Emergency Planning Research Group has also identified problems relating to co-ordination within EMCR (see acknowledgements).

Each emergency service has its own major incident plan. Each plan specifies a set of roles/tasks that need to be supported when responding to a disaster, for example, evacuation, setting up a casualty clearing station, etc. Some of the tasks, at the most generic level of description, are the same for the three emergency services; for example, the Police and the Fire Service both support evacuation tasks. Most of the behaviours which support these tasks for the two services are different, but some are the same. Sometimes the behaviours overlap between the different services when they are supporting different tasks. The Fire Service may be attempting to set up an inner cordon to contain a hazardous scene; the Ambulance Service may be attempting to access the same scene to locate casualties. It is proposed here that, when behaviours overlap, co-ordination between the services may become a problem. (Behaviours do not necessarily have to be the same to overlap.)

In developing the present preliminary model of EMCR with respect to a training scenario, the intention is to identify such overlapping behaviours, to better support identification of the co-ordination problems raised by the system.

FRAMEWORK 

A framework was developed for modelling the planning and control of multiple task work (PCMT) in office administration. The PCMT framework makes a fundamental distinction between an interactive worksystem, comprising one or more users and computers/devices, and its domain of application, constituting the work carried out by the worksystem. This distinction allows for an expression of the effectiveness (performance) with which the work is carried out, conceptualised as a function of two factors: the quality of the task (i.e. whether the desired goals have been achieved), and resource costs (i.e. the resources required by the worksystem to accomplish the work) (Dowell and Long 1989).

In the framework, a domain of application (or work domain e.g. the disaster) is described in terms of objects, which may be abstract or physical. Objects are defined by their attributes, which have values. The attribute values of an object may be related to the attribute values of one or more other objects. An object at any time is determined by the values of its attributes. The worksystem performs work by changing the value of domain objects (i.e. by transforming their attribute values) to their desired values, as specified by the work goal.
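The object/attribute/value description above can be illustrated with a minimal sketch. This is our own illustration, not part of the framework's formal notation; the names `DomainObject`, `work_remaining` and the attribute values are assumptions chosen to match the paper's 'lives object' example:

```python
from dataclasses import dataclass

@dataclass
class DomainObject:
    """A domain object is defined by its attributes; each attribute has a value."""
    name: str
    attributes: dict

# The work goal specifies desired attribute values; the worksystem performs
# work by transforming current values towards those desired values.
lives = DomainObject("lives", {"survivor": "not rescued"})
goal = {"survivor": "rescued"}

def work_remaining(obj, goal):
    """Return the attributes whose current value differs from the goal value."""
    return {a: v for a, v in goal.items() if obj.attributes.get(a) != v}

print(work_remaining(lives, goal))        # work still to be done
lives.attributes["survivor"] = "rescued"  # the worksystem performs work
print(work_remaining(lives, goal))        # no work remaining
```

On this reading, performance can be assessed by comparing actual attribute values against the goal, while the resources expended in the transformation contribute to the resource-cost factor described above.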

The framework defines a number of worksystem structures for the planning and control of multiple task work. These structures are expressed at both abstract and physical levels of description. First, the framework describes the worksystem’s abstract cognitive structures. These structures comprise four processes (planning, controlling, perceiving and executing), and two representational structures (plans and knowledge-of-tasks). The four processes support the behaviours of planning, control, perception and execution respectively. A complete description for this set of structures, and the associated rationale, is to be found elsewhere (Smith, Hill, Long and Whitefield, 1992).

At the first (abstract) level of description, Plans are specifications of required transformations of domain objects and/or of required behaviours (to achieve goals). They may be partial (in the sense that they may specify only some of the behaviours or transformations). They may also be general (in the sense that some behaviours or transformations may be specified only generally, and not at a level which can be directly executed). Planning behaviours thus specify the required transformations and/or behaviours to support domain object transformations.

Perception and execution behaviours are, respectively, those whereby the worksystem acquires information about the domain objects and those whereby it carries out work which changes the value of the attributes for those objects as desired. Information about domain objects from perception behaviours is expressed in the knowledge-of-tasks representation. Control behaviours entail deciding which behaviour to carry out next, both within and between tasks.
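The interplay of the four processes can be read as a cycle in which control decides which behaviour runs next, drawing on plans and knowledge-of-tasks. The following is a hypothetical sketch of that reading, not the framework's formal specification; the function names and the simple 'plan before executing' control policy are illustrative assumptions:

```python
knowledge_of_tasks = {}  # representation built up by perception behaviours
plans = {}               # representation built up by planning behaviours

def perceive(domain_info):
    """Perception: acquire information about domain objects."""
    knowledge_of_tasks.update(domain_info)

def plan(task):
    """Planning: specify (possibly partially and generally) the behaviours
    and/or transformations required to achieve a task's goals."""
    plans[task] = ["specify behaviours for " + task]

def execute(task):
    """Execution: carry out work that changes domain object attribute values."""
    return "executed: " + task

def control(pending_tasks):
    """Control: decide which behaviour to carry out next, within and
    between tasks."""
    done = []
    for task in pending_tasks:
        if task not in plans:
            plan(task)  # an unplanned task is planned before it is executed
        done.append(execute(task))
    return done

perceive({"scene": "uncontained"})
print(control(["set up inner cordon"]))
```

A real worksystem would interleave these processes continuously; the sketch only shows the dependency of control on the two representational structures.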

The second level of description of planning and control structures is physical, wherein the framework describes the distribution of the abstract cognitive structures across the physically separate user and devices of particular worksystems.

This framework was chosen to model the combined response worksystem because:

(i) the EMCR worksystem is a planning and control system;

(ii) the work involves multiple tasks; and

(iii) there is a need to identify the tasks and behaviours of the worksystem in relation to the work, to better identify co-ordination problems within the EMCR worksystem.

DATA COLLECTION 

Data were collected by means of a training scenario which took place at the Home Office Emergency Planning Training College. The trainees were members of the emergency services and local authority emergency planning officers. The exercise required the trainees to describe their response to the disaster scenario from initial response to restoration of normality. Data were collected by transcribing the descriptions given by the trainees, and through presentations given by the trainees about their response.

There are various phases in the response to the disaster scenario, each of which can be clearly defined, and which involve different worksystem configurations and different worksystem behaviours. In this paper, only one worksystem configuration and its behaviours are modelled: that of the initial response phase to the scenario.

Scenario 

‘At 9.30am on a weekday during school term time, a tanker train en route from a refinery to an airport fuel depot is derailed whilst passing over a railway bridge. The railway bridge carries the railway over the main access road to a town from east to west. The main access road bisects the town in a north/south direction. There are market stalls set up on the roadway on either side of the bridge. At the time of the accident, a tourist bus carrying 45 foreign tourists is passing beneath the railway bridge. One of the tank cars is ruptured during the derailment and aviation fuel flows down the sides of the embankment onto the roadway. Flammable vapours from the fuel have been ignited by an open gas burner from a catering caravan. The explosion has created severe structural damage in a 30 metre radius and moderate structural damage in a 100 metre radius. At least 50 people have been killed, including some of the foreign tourists; many people have received burns, and many people are trapped. A number have also been contaminated with aviation fuel. The leaking fuel has run down the road and is entering the canal and watercourses at the bottom of the incline. There is evidence to believe that vandalism may be responsible for the derailment.’

DESCRIPTION OF THE INITIAL COMBINED RESPONSE DATA 

A major incident was declared within 15 minutes of assessing the situation. Thus, all the emergency services initiated their major incident plans. Each service has a set of defined roles or tasks specified in their individual plans. For this scenario, examples of initial Police roles/tasks were:

setting up and manning an outer cordon around the site (a larger area than the scene)

preserving and managing the scene

logging all personnel entering the outer cordon

establishing routes and rendezvous points and marshalling areas for all the emergency services

Examples of the initial Fire Service roles/tasks were:

set up an inner cordon

control the fire

carry out immediate rescue

stem the flow of the hazardous substance

monitor the health and safety of all inside the inner cordon

Examples of the initial Ambulance Service roles/tasks were:

set up a casualty clearing point

set up a triage point

get medics to the scene

help with evacuation of bedridden people

For each of the tasks the personnel required were identified by, and are shown in, the model.

PCMT-EMCR MODEL 

The model describes the EMCR worksystem and its domain for the initial response scenario data. The initial response in this scenario had no strategic level of command; thus, the strategic-level physical worksystem users and devices are represented in the model only for completeness, and no further description is offered. Also, the operational physical users and devices change rapidly over time; the model represents only those entities present for the initial response. The local authority response has not been modelled.

EMCR Domain 

Based on the PCMT framework, the EMCR domain is expressed as those objects whose transformation constitutes the work of the EMCR worksystem.

In the model, the domain is conceptualised as having a single disaster object comprising other abstract objects, such as lives, property, environment etc. The disaster object and its component sub-objects are abstract. The sub-objects of the domain are based on the identified primary objectives of EMCR (see earlier). The domain in addition contains physical objects.

The work carried out by the EMCR worksystem is thus the transformation of the ‘disaster object’s’ attribute values. Each task carried out by the EMCR worksystem transforms the attribute values of the disaster object. For example, one attribute is stability, which has values along a continuum (i.e. very unstable, slightly unstable, unstable, nearly stable, moderately stable, stable). The work of the EMCR worksystem is to transform this attribute to a desired level of stability. In order to transform this attribute, the attributes of the component sub-objects must be transformed. This attribute value is changed by manipulating the values of the attributes of the other objects of the domain, for example, transforming the ‘lives object’ attribute survivor from not rescued to rescued. The transformation of the abstract object attributes results from the manipulation of the physical domain objects by the worksystem. Attributes may be affordant or dispositional. Affordant attributes are transformed by the worksystem; their transformation constitutes the work performed. Dispositional attributes are relevant to the work (they need to be perceived by the worksystem, but are not changed by the worksystem). For example, for the ‘lives object’, the attribute survivor has a value of mobility injured or mobility uninjured. This value needs to be perceived by the EMCR worksystem, so that the Ambulance Service knows there is a casualty to be transported. However, the worksystem does not change the injured state of the survivor. In Figure 1, all the starred attributes are dispositional.
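The affordant/dispositional distinction can be captured in a short sketch. This is our own hypothetical illustration of the distinction, not the model's notation; the class `Attribute` and the attribute names are assumptions chosen to echo the ‘lives object’ example:

```python
class Attribute:
    """A domain object attribute: a value, plus a flag marking whether the
    attribute is dispositional (perceived by the worksystem but never
    changed by it) rather than affordant (transformable by the worksystem)."""
    def __init__(self, value, dispositional=False):
        self.value = value
        self.dispositional = dispositional

lives = {
    "survivor": Attribute("not rescued"),                  # affordant
    "mobility": Attribute("injured", dispositional=True),  # dispositional
}

def transform(obj, name, new_value):
    """The worksystem performs work by transforming affordant attributes only."""
    attr = obj[name]
    if attr.dispositional:
        raise ValueError(name + " is dispositional: perceived, not transformed")
    attr.value = new_value

transform(lives, "survivor", "rescued")  # legitimate work
print(lives["survivor"].value)
```

Attempting to transform the dispositional `mobility` attribute raises an error in this sketch, mirroring the point that the worksystem perceives, but does not change, the injured state of a survivor.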

Figure 1 shows the physical domain objects (but not their attributes) in the model. Included are all the objects identified from the scenario data. The objects include those associated with the example which follows.

EMCR Worksystem 

The model of the EMCR worksystem (see Figure 1) shows the cognitive structures of the PCMT framework. The cognitive structures in Figure 1 embody the planning and control behaviours described earlier. The framework comprises four process structures (planning, controlling, perceiving and executing) and two representation structures (plans and knowledge-of-tasks). For EMCR, these cognitive structures were identified as follows:

A perceiving process which acquires information about property, lives and other domain objects, such as their risk status. This information appears in the knowledge-of-tasks representation. A task in the EMCR will be different for the different agencies. A task for the Police could be setting up a casualty clearance point. A task for the Fire Service could be making sure all personnel are safe within the inner cordon.

An executing process which transforms domain object attribute values. For example, moving injured survivors away from the scene transforms the ‘lives object’ attribute survivor from not rescued to rescued.

A controlling process which decides which of the other processes should be carried out next based on the plans and knowledge-of-tasks. For example the controlling process might direct the executing process to perform next the execution behaviours of evacuating people (based on the perceiving process that identified where people were, and the knowledge-of-tasks information that people in this situation are at risk), rather than directing the planning process to specify how to preserve the scene. (Thus, there is prioritisation of the tasks carried out by EMCR.)

A planning process, which constructs plans based on knowledge-of-tasks. For example using information from knowledge-of-tasks that the scene needs to be preserved for investigation, the planning process constructs a plan of the sequence of behaviours to carry out this task.

The Plans representation structure embodies the different types of plan used in the combined response.

A knowledge-of-tasks structure, which represents knowledge of relevant aspects of the work domain. For example, the type of hazardous substance.

The physical worksystem comprises the Police and their devices, the Fire Service and their devices, the Ambulance Service and their devices (plus other agencies and their devices which may be involved). The framework allows the construction of alternative models of the distribution of cognitive structures across the user and devices. Thus, it supports reasoning about allocation of function. In the case of the EMCR worksystem, planning and control are distributed across the different levels of the worksystem hierarchy. In the model, the distribution of the structures across the physical worksystem is not shown. However, these physical worksystem structures support its behaviours. In the example which follows, the worksystem behaviours are identified for a selection of data from the training scenario to illustrate this distribution across the worksystem.

RELATIONSHIP BETWEEN THE DOMAIN AND THE WORKSYSTEM 

This section describes an example of two tasks being carried out by the combined response system, in terms of the planning, control, perception and execution behaviours and the transformations these behaviours perform in the domain. This description helps to identify potential conflicts within the combined response system behaviours which may cause co-ordination problems. It also highlights some issues related to using the PCMT framework for modelling EMCR, which will be discussed in the final section.

The Fire Service operational commander carries out perception behaviours that update his knowledge-of-tasks with the information that there are structurally damaged buildings, fires and leaking hazardous fuels. He then carries out control behaviours that direct him to consult the major incident plan. According to the plan, the Fire Service is responsible for setting up an inner safety cordon when there are hazards and dangers at the scene. It also maintains the safety of the emergency service personnel within this cordon. Based on this plan, the operational commander then carries out control behaviours that direct him to consult the operational plan for setting up of a cordon. The operational plan gives guidance for cordon set up and regulations.

The operational commander then carries out control behaviours that direct him to carry out planning, based on the operational plan and the knowledge-of-tasks. The planning specifies how the inner cordon should be set up and what the regulations are for entering it. The operational commander then carries out control behaviours that direct him to carry out the execution behaviour of setting up the cordon. This execution behaviour is carried out by the operational personnel setting up the cordon and maintaining the specified safety regulations. It is the manipulation of the physical objects that transforms the abstract ‘disaster scene object’s’ attribute scene from uncontained to contained.

The operational commander then carries out control behaviours that direct him to inform the tactical incident officer of the inner cordon set up. The tactical incident officer thus carries out perception behaviours which update his knowledge-of-tasks about the inner cordon set up. The tactical incident officer then carries out control behaviours that direct him to consult his plan to assess the resources required for the set up. He then carries out planning behaviours to specify the resources required for this task.

At the same time, the operational Ambulance senior officer is carrying out perception behaviours that update his knowledge-of-tasks with the information that there are a number of casualties at the scene. He then carries out control behaviours that direct him to consult his major incident plan. According to the plan, casualties must be located and then either treated at the scene and/or transported to hospital. He then carries out control behaviours that direct him to carry out planning behaviours to specify in the operational plan what personnel are required and how to access the casualties. This plan then directs him to carry out control behaviours to direct the execution behaviours of accessing, treating and transporting casualties. These execution behaviours are carried out by the Ambulance operational personnel. It is the manipulation of the physical objects which transforms the abstract ‘lives object’ attribute survivor from not transported to transported.

However, the Ambulance operational senior officer has not carried out perception behaviours that update his knowledge-of-tasks that the scene is now contained. Therefore, when the Ambulance personnel attempt to carry out their execution behaviours, they do not fulfil the proper safety requirements which would allow them to enter the inner cordon. Therefore, the execution behaviours of transporting casualties cannot be carried out. The primary objective of EMCR is to save life. Here, there is an overlap of behaviours which is hindering this objective. Also, if the Ambulance Service is not allowed to carry out its execution behaviours, then, in trying to increase the saving of life, the Fire Service must carry out rescue execution behaviours to move the casualties to the edge of the inner cordon. These rescue execution behaviours will decrease the resources available for carrying out the execution behaviours of controlling the hazard, thus decreasing the effectiveness of the response to the secondary objective of preventing escalation of the disaster.

To attempt to rectify this situation, the Ambulance senior officer informs the Ambulance incident officer of the inner cordon regulations. Thus, the Ambulance incident officer carries out perception behaviours that update his knowledge-of-tasks about the inner cordon regulations. He then carries out control behaviours that direct him to carry out planning behaviours to specify in the plan the required equipment for all resources directed to the scene. Until these regularised resources arrive, the Ambulance incident officer needs to communicate with the Fire Service incident officer to try and negotiate access for the operational personnel. The Fire Service incident officer should have informed the Ambulance incident officer of the regulations, to obviate the problem of co-ordination.

CONCLUSIONS 

This paper has described the application of the PCMT framework to modelling the domain of emergency management combined response (EMCR). A preliminary model has been developed, based on data from a training scenario. This final section discusses issues related to using the PCMT framework for modelling this domain, and the use of the model for identifying overlapping behaviours that highlight co-ordination issues within EMCR.

The PCMT framework is for use in planning and control for multiple task domains. EMCR is such a domain, but there are differences between this domain and the other domains already modelled. First, EMCR has a changing worksystem. The PCMT representation does not currently represent a changing worksystem; thus, the preliminary model represents only one phase of the response. Second, EMCR is multiple task but, unlike the previous domains studied, there are not multiple objects at the highest level (i.e. multiple disaster objects); rather, there are multiple sub-objects, such as multiple lives objects. Third, EMCR is made up of multiple agents within a complex three-tier command structure. The PCMT framework has so far only modelled domains with a single level of operation. Thus, interactions between the different horizontal layers and different vertical layers of the system are difficult to describe in terms of the present framework. The difficulty can be appreciated in the example, where the different commanders liaise at the same horizontal level, and information flows between vertical levels.

However, the model provides a description of the system and its domain that can be used to identify overlapping behaviours, albeit for a simple example and in a limited context. The allocation of the abstract structures across the physical worksystem can be inferred from the description of the physical behaviours, which can then be used to identify possible reconfigurations of the worksystem to support more effective co-ordination.

Future work will develop more complete models of EMCR in response to a disaster scenario and in so doing extend the PCMT framework to accommodate the issues described earlier.

ACKNOWLEDGEMENTS 

This work is part-funded by the Home Office Emergency Planning Research Group under the EPSRC CASE scheme.

Thanks are due to the Home Office Emergency Planning College for their infinite co-operation with the data collection.

REFERENCES 

Dowell, J. and Long, J. (1989) Towards a conception for an engineering discipline of human factors. Ergonomics, 32, 1513-1536.

Hill, B., Long, J.B., Smith, W. and Whitefield, A.D. (1995) A model of medical reception – the planning and control of multiple task work, Applied Cognitive Psychology, 9, 81-114.

Smith, M.W., Hill, B., Long, J.B. and Whitefield, A.D. (1992) Modelling the Relationship Between Planning, Control, Perception and Execution Behaviours in Interactive Worksystems. In D. Diaper, M. Harrison and A. Monk (Eds), People and Computers VII: Proceedings of HCI ’92. Cambridge University Press.

Home Office (1994) Dealing with Disaster, 2nd Edition. HMSO, London.

Emergency Planning College document (1995) Management Levels in Response to a Major Incident: The Principles of Command and Control.

Figure 1 The PCMT-EMCR Model



3.4 Long and Dowell (1989) – HCI Engineering Knowledge – Short Version

Short Version

Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering 

John Long and John Dowell 

Ergonomics Unit, University College London, 26 Bedford Way, London WC1H 0AP.

Published in: People and Computers V. Sutcliffe, A. and Macaulay, L. (eds.). Cambridge University Press, Cambridge. Proceedings of the Fifth Conference of the BCS HCI SIG, Nottingham, 5-8 September 1989.

……..First, consideration of disciplines in general suggests their complete definition can be summarised as: ‘knowledge, practices and a general problem having a particular scope, where knowledge supports practices seeking solutions to the general problem’……….

Contents  1. Introduction 2. A Framework for Conceptions of the HCI Discipline 3. Three Conceptions of the Discipline of HCI 4. Summary and Conclusions

1. Introduction 

The main theme of HCI ’89 is ‘the theory and practice of HCI’. …….. For example, what is HCI? What is HCI practice? What theory supports HCI practice? How well does HCI theory support HCI practice? 1.1. Alternative Interpretations of the Theme …….. Some would claim HCI theory as explanatory laws, others as design principles. Some would claim HCI theory as directly supporting HCI practice, others as indirectly providing support. Some would claim HCI theory as effectively supporting HCI practice, whilst others may claim such support as non-existent. …….. Answers to different questions may also be mutually exclusive; for example, HCI as engineering would likely exclude HCI theory as explanatory laws, and HCI practice as ‘trial and error’. And moreover, answers to some questions may constrain the answers to other questions; for example, types of HCI theory, perhaps design principles, may constrain the type of practice, perhaps as ‘specify and implement’.

1.2. Alternative Conceptions of HCI: the Requirement for a Framework

1.3. Aims .

2. A Framework for Conceptions of the HCI Discipline 

2.1. On the Nature of Disciplines

Most definitions assume three primary characteristics of disciplines: knowledge; practice; and a general problem. All definitions of disciplines make reference to discipline knowledge as the product of research or, more generally, of a field of study. Knowledge can be public (ultimately formal) or private (ultimately experiential). It may assume a number of forms; for example, it may be tacit, formal, experiential, or codified – as in theories, laws and principles, etc. It may also be maintained in a number of ways; for example, it may be expressed in journals or learning systems, or it may only be embodied in procedures and tools. All disciplines would appear to have knowledge as a component (for example, scientific discipline knowledge, engineering discipline knowledge, medical discipline knowledge, etc). Knowledge, therefore, is a necessary characteristic of a discipline. …….. suggest the definition of a discipline as: ‘the use of knowledge to support practices seeking solutions to a general problem having a particular scope’.

2.2. Of Humans Interacting with Computers

2.3. The Framework for Conceptions of the HCI Discipline

Hence, we may express a framework for conceptions of the discipline of HCI as: ‘the use of HCI knowledge to support practices seeking solutions to the general problem of HCI of designing humans and computers interacting to perform work effectively’.
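As a reading aid only, the framework's parts (knowledge, practices, a general problem with a particular scope) can be sketched as a small data structure. All field names and example values below are illustrative assumptions, not the authors' notation.

```python
from dataclasses import dataclass

@dataclass
class Discipline:
    """A discipline: knowledge supporting practices that seek
    solutions to a general problem with a particular scope."""
    name: str
    knowledge: list        # e.g. heuristics, guidelines, principles
    practices: list        # e.g. 'implement and test', 'specify then implement'
    general_problem: str
    scope: list

# The HCI framework instantiated with the paper's informal wording:
hci = Discipline(
    name="HCI",
    knowledge=["heuristics", "guidelines", "principles"],
    practices=["implement and test", "specify then implement"],
    general_problem=("designing humans and computers interacting "
                     "to perform work effectively"),
    scope=["humans", "computers", "work"],
)
```

The sketch is only meant to make the three-part definition concrete; nothing in the paper commits to this representation.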

3. Three Conceptions of the Discipline of HCI 

3.1. Conception of HCI as a Craft Discipline

Craft disciplines solve the general problems they address by practices of implementation and evaluation. Their practices are supported by knowledge typically in the form of heuristics; heuristics are implicit (as in the procedures of good practice) and informal (as in the advice provided by one craftsperson to another). Craft knowledge is acquired by practice and example, and so is experiential; it is neither explicit nor formal. …….. HCI craft knowledge, supporting practice, is maintained by practice itself. ……..

The first explanation of this – and one that may at first appear paradoxical – is that the (public) knowledge possessed by HCI as a craft discipline is not operational. That is to say, because it is either implicit or informal, it cannot be directly applied by those who are not associated with the generation of the heuristics or exposed to their use. If the heuristics are implicit in practice, they can be applied by others only by means of example practice. If the heuristics are informal, they can be applied only with the help of guidance from a successful practitioner (or by additional, but unvalidated, reasoning by the user).   ……..If craft knowledge is not operational, then it is unlikely to be testable –……..there is no guarantee that practice applying HCI craft knowledge will have the consequences intended (guarantees cannot be provided if testing is precluded). There is no guarantee that its application to designing humans and computers interacting will result in their performing work effectively……….

Thus, with respect to the guarantee that knowledge applied by practice will solve the general HCI problem, the HCI craft discipline fails to be effective. If craft knowledge is not testable, then neither is it likely to be generalisable ……To be clear, if being operational demands that (public) discipline knowledge can be directly applied by others than those who generated the knowledge, then being general demands that the knowledge be guaranteed to be appropriate in instances other than those in which it was generated. Yet, the knowledge possessed by HCI as a craft discipline applies only to those problems already addressed by its practice, that is, in the instances in which it was generated. ……..

Thus, with respect to the generality of its knowledge, the HCI craft discipline fails to be effective. It (Craft) is ineffective because its knowledge is neither operational (except in practice itself), nor generalisable, nor guaranteed to achieve its intended effect – except as the continued success of its practice and its continued use by successful craftspeople.

3.2. Conception of HCI as an Applied Science Discipline

The discipline of science uses scientific knowledge (in the form of theories, models, laws, truth propositions, hypotheses, etc.) to support the scientific practice. …….. Scientific knowledge is explicit and formal, operational, testable and generalisable. It is therefore refutable (if not provable; Popper [1959]). An applied science discipline is one which recruits scientific knowledge to the practice of solving its general problem – a design problem. HCI as an applied science discipline uses scientific knowledge as an aid to addressing the general problem of designing humans and computers interacting to perform work effectively. …….

First, its science knowledge cannot be applied directly, not – as in the case of craft knowledge – because it is implicit or informal, but because the knowledge is not prescriptive; it is only explanatory and predictive. Its scope is not that of the general problem of design. Second, the guidelines based on the science knowledge, which are prescriptive rather than predictive, are not defined, operationalised, tested or generalised with respect to desired effective performance. Their selection and application in any system would be a matter of heuristics (and so, paradoxically, of good practice). Third, the application of guidelines based on science knowledge does not guarantee the consequences intended, that is, effective performance. …….

HCI as an applied science discipline, however, differs in two important respects from HCI as a craft discipline. First, science knowledge is explicit and formal, and so supports reasoning about the derivation of guidelines, their solution and application (although one might have to be a discipline specialist to do so). Second, science knowledge (of encoding specificity, for example) would be expected to be more correct, coherent and complete than …….. It (Applied Science) fails to be effective principally because its knowledge is not directly applicable, and because the guidelines based on its knowledge are neither generalisable nor guaranteed to achieve their intended effect.

3.3. Conception of HCI as an Engineering Discipline

The discipline of engineering may characteristically solve its general problem (of design) by the specification of designs before their implementation. It is able to do so because of the prescriptive nature of its discipline knowledge supporting those practices – knowledge formulated as engineering principles. Further, its practices are characterised by their aim of ‘design for performance’.

Engineering principles may enable designs to be prescriptively specified for artefacts or systems which, when implemented, demonstrate a prescribed and assured performance. ……. The conception of HCI engineering principles assumes the possibility of a codified, general and testable formulation of HCI discipline knowledge which might be prescriptively applied to designing humans and computers interacting to perform work effectively. Such principles would be unequivocally formal and operational. Indeed, their operational capability would derive directly from their formality, including the formality of their concepts – …….

…….. The abstracted form of those principles is visible. An HF engineering principle would take as input a performance requirement of the interactive worksystem, and a specified behaviour of the computer, and prescribe the necessary interacting behaviours ……..

First, HCI engineering principles would be generalisable knowledge. Hence, application of principles to solving each new design problem could be direct and efficient with regard to costs incurred. The discipline would be effective. Second, HCI engineering principles would be operational, and so their application would be specifiable. …….. Because they would be operational, they would be testable, and their reliability and generality could be specified.

4. Summary and Conclusions

This paper has developed the Conference theme of ‘the theory and practice of HCI’. ……..

Although all conceptions of HCI as a discipline necessarily include the notion of practice (albeit of different types), the concept of theory is more readily associated with HCI as an applied science discipline, because scientific knowledge in its most correct, coherent and complete form is typically expressed as theories.

Craft knowledge is more typically expressed as heuristics. Engineering knowledge is more typically expressed as principles. ……. Although all three conceptions address the general problem of HCI, they differ concerning the knowledge recruited to solve the problem. Craft recruits heuristics; applied science recruits theories expressed as guidelines; and engineering recruits principles. …….

The different types of knowledge and the different types of practice have important consequences for the effectiveness of any discipline of HCI. Heuristics are easy to generate, but offer no guarantee that the design solution will exhibit the properties of performance desired. Scientific theories are difficult and costly to generate, and the guidelines derived from them (like heuristics) offer no final guarantee concerning performance. Engineering principles would offer guarantees, but are predicted to be difficult…….
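The contrast the conclusions draw between the three knowledge types can be summarised in a small table. The boolean summary below is a hedged paraphrase of Sections 3.1-3.3 in code form, with field names invented for illustration, not the paper's own notation.

```python
# Each conception's characteristic knowledge form, and whether that
# knowledge is prescriptive and would guarantee desired performance
# (a loose paraphrase of Sections 3.1-3.3; keys are assumptions).
conceptions = {
    "craft":           {"knowledge": "heuristics", "prescriptive": False, "guarantee": False},
    "applied science": {"knowledge": "guidelines", "prescriptive": False, "guarantee": False},
    "engineering":     {"knowledge": "principles", "prescriptive": True,  "guarantee": True},
}

# On this reading, only the engineering conception would offer a
# performance guarantee, at the predicted cost of harder generation.
guaranteed = sorted(c for c, p in conceptions.items() if p["guarantee"])
```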

…….suggest that the initial creation of discipline knowledge, whether heuristics, guidelines or principles, in all cases requires a reflexive cognitive act involving intuition and reason. Thus, contrary to common assumption, the craft, applied science, and engineering conceptions of the discipline of HCI are similarly reflexive with regard to the general design problem. The initial generation of albeit different discipline knowledges requires in each case the reflexive cognitive act of reason and intuition. ……..

However, there is a case for mutual support of conceptions and it is presented here as a final conclusion. The case is based on the claim made earlier that the creation of discipline knowledge of each conception of HCI requires a reflexive cognitive act of reason and intuition. If the claim is accepted, the reflexive cognitive act of one conception might be usefully but indirectly informed by the discipline knowledge of another.

References 

See full version of the paper, referenced in 3.5.

2.4 Dowell and Long (1989) – HCI Engineering Design Problem – Short Version

Towards a Conception for an Engineering Discipline of Human Factors 

John Dowell and John Long

Ergonomics Unit, University College London, 26 Bedford Way, London WC1H 0AP.

Abstract

This paper concerns one possible response of Human Factors to the need for better user interactions with computer-based systems. The paper is in two parts. Part I examines the potential for Human Factors to formulate engineering principles. A basic pre-requisite for realising that potential is a conception of the general design problem addressed by Human Factors. The problem is expressed informally as: ‘to design human interactions with computers for effective working’. A conception would provide the set of related concepts which both expressed the general design problem more formally, and which might be embodied in engineering principles. Part II of the paper proposes such a conception and illustrates its concepts. It is offered as an initial and speculative step towards a conception for an engineering discipline of Human Factors.

In P. Barber and J. Laws (eds.), Special Issue on Cognitive Ergonomics, Ergonomics, 1989, vol. 32, no. 11, pp. 1513-1535.

Part 1. Requirement for Human Factors as an Engineering Discipline of Human-Computer Interaction

1.1 Introduction; 1.2 Characterisation of the human factors discipline; 1.3 State of the human factors art; 1.4 Human factors engineering; 1.5 The requirement for an engineering conception of human factors.

1.1. Introduction

…….. This paper is concerned to develop one possible response of HF to the need for better human-computer interactions. It is in two parts. Part I examines the potential for HF to formulate HF engineering principles for supporting its better response. Pre-requisite to the realisation of that potential, it concludes, is a conception of the general design problem it addresses.

Part II of the paper is a proposal for such a conception. …….. However, a pre-requisite for the formulation of any engineering principle is a conception. A conception is a unitary (and consensus) view of a general design problem; its power lies in the coherence and completeness of its definition of the concepts which can express that problem. Engineering principles are articulated in terms of those concepts. Hence, the requirement for a conception for the HF discipline is concluded (Section 1.5.). ……..

1.2. Characterisation of the Human Factors Discipline ……..

Most definitions of disciplines assume three primary characteristics: a general problem; practices, providing solutions to that problem; and knowledge, supporting those practices. This characterisation presupposes classes of general problem corresponding with types of discipline. For example, one class of general problem is that of the general design problem and includes the design of artefacts (of bridges, for example) and the design of ‘states of the world’ (of public administration, for example).

Engineering and craft disciplines address general design problems. Further consideration also suggests that any general problem has the necessary property of a scope, delimiting the province of concern of the associated discipline. Hence disciplines may also be distinguished from each other; for example, the engineering disciplines of Electrical and Mechanical Engineering are distinguished by their respective scopes of electrical and mechanical artefacts. ……..

The scope of the HCI general design problem includes: humans, both as individuals, as groups, and as social organisations; computers, both as programmable machines, stand-alone and networked, and as functionally embedded devices within machines; and work, both with regard to individuals and the organisations in which it occurs (Long, 1989). For example, the general design problem of HCI includes the problems of designing the effective use of navigation systems by aircrew on flight-decks, and the effective use of word-processors by secretaries in offices.

1.3. State of the Human Factors Art ……..

Figure 1 …….. ……..

1.4. Human Factors Engineering Principles

…….. The ‘design’ disciplines are ranged according to the ‘hardness’ or ‘softness’ of their respective general design problems. …….. However, for present purposes, hard and soft problems will be generally distinguished by their determinism, that is, by the need for design solutions to be determinate.

A Classification Space for ‘Design’ Disciplines

A discipline’s practices construct solutions to its general design problem. Consideration of disciplines indicates much variation in their use of specification as a practice in constructing solutions. …….. This variation, however, appears not to be dependent on variations in the hardness of the general design problems. Rather, disciplines appear to differ in the completeness with which they specify solutions to their respective general design problems before implementation occurs. …….. ‘Specify then implement’, therefore, and ‘implement and test’ would appear to represent the extremes of a dimension by which disciplines may be distinguished by their practices. It is a dimension of the completeness with which they specify design solutions.

Taken together, the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines such as Electrical Engineering and Graphic Design. The space is shown in Figure 2, including, for illustrative purposes, the speculative location of SE.

1.5. The Requirement for an Engineering Conception for Human Factors

……..

Part 2. Conception for an Engineering Discipline of Human Factors

2.1 Conception of the human factors general design problem; 2.2 Conception of work and the user; 2.2.1 Objects and their attributes; 2.2.2 Attributes and levels of complexity; 2.2.3 Relations between attributes; 2.2.4 Attribute states and affordance; 2.2.5 Organisations, domains (of application) and the requirement for attribute state changes; 2.2.6 Goals; 2.2.7 Quality; 2.2.8 Work and the user; 2.3 Conception of the interactive worksystem and the user; 2.3.1 Interactive worksystems; 2.3.2 The user as a system of mental and physical human behaviours; 2.3.3 Human-computer interaction; 2.3.4 On-line and off-line behaviours; 2.3.5 Human structures and the user; 2.3.6 Resource costs and the user; 2.4 Conception of performance of the interactive worksystem and the user; 2.5 Conclusions and the prospect for Human Factors engineering principles

2.1. Conception of the Human Factors General Design Problem

The conception for the (super-ordinate) engineering discipline of HCI asserts a fundamental distinction between behavioural systems which perform work, and a world in which work originates, is performed and has its consequences. Specifically conceptualised are interactive worksystems consisting of human and computer behaviours together performing work. It is work evidenced in a world of physical and informational objects disclosed as domains of application. The distinction between worksystems and domains of application is represented schematically in Figure 3.  Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed. The concern of an engineering HCI discipline would be the design of interactive worksystems for performance.

More precisely, its concern would be the design of behaviours constituting a worksystem {S} whose actual performance (PA) conformed with some desired performance (PD). And to design {S} would require the design of human behaviours {U} interacting with computer behaviours {C}. Hence, the conception of the general design problem of an engineering discipline of HCI is expressed as:

Specify then implement {U} and {C}, such that {U} interacting with {C} = {S} as PA = PD, where PD = fn. {QD, KD}

QD expresses the desired quality of the products of work within the given domain of application; KD expresses the acceptable (i.e., desired) costs incurred by the worksystem, i.e., by both human and computer.
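As a reading aid, the design-problem expression can be sketched in code. The conformance rule used below (achieved quality at least the desired quality, incurred costs within the acceptable costs) and all numeric values are illustrative assumptions, since the paper leaves ‘fn.’ unspecified.

```python
def desired_performance(q_d: float, k_d: float) -> tuple:
    """P_D = fn.{Q_D, K_D} -- here simply taken as the pair (Q_D, K_D)."""
    return (q_d, k_d)

def conforms(p_a: tuple, p_d: tuple) -> bool:
    """An assumed reading of 'P_A conforms with P_D': achieved quality
    meets the desired quality and costs stay within acceptable costs."""
    q_a, k_a = p_a
    q_d, k_d = p_d
    return q_a >= q_d and k_a <= k_d

# Illustrative numbers only: a worksystem {S} = {U} interacting with {C}
# whose actual performance is compared against the desired performance.
p_d = desired_performance(q_d=0.9, k_d=100.0)
ok = conforms((0.95, 80.0), p_d)      # meets quality, within cost
bad = conforms((0.95, 120.0), p_d)    # exceeds acceptable cost
```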

The problem, when expressed as one of ‘specify then implement’ designs of interactive worksystems, is equivalent to the general design problems characteristic of other engineering disciplines (see Section 1.4.). The interactive worksystem can be distinguished as two separate, but interacting, sub-systems, that is, a system of human behaviours interacting with a system of computer behaviours. The human behaviours may be treated as a behavioural system in their own right, but one interacting with the system of computer behaviours to perform work. It follows that the general design problem of HCI may be decomposed with regard to its scope (with respect to the human and computer behavioural sub-systems), giving two related problems. Decomposition with regard to the human behaviours gives the general design problem of the HF discipline as: Specify then implement {U}, such that {U} interacting with {C} = {S} as PA = PD. The general design problem of HF, then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (PD).

The following sections elaborate the conceptualisation of human behaviours (the user, or users) with regard to the work they perform, the interactive worksystem in which they are constituted, and performance.

2.2. Conception of Work and the User

The conception for HF identifies a world in which work originates, is performed and has its consequences. This section presents the concepts by which work and its relations with the user are expressed.

2.2.1 Objects and their attributes

Work occurs in a world consisting of objects and arises in the intersection of organisations and (computer) technology. Objects may be abstract as well as physical, and are characterised by their attributes. Abstract attributes of objects are attributes of information and knowledge. Physical attributes are attributes of energy and matter. Letters (i.e., correspondence) are objects; their abstract attributes support the communication of messages etc; their physical attributes support the visual/verbal representation of information via language.

2.2.2 Attributes and levels of complexity

The different attributes of an object may emerge at different levels within a hierarchy of levels of complexity (see Checkland, 1981). For example, characters and their configuration on a page are physical attributes of the object ‘a letter’ which emerge at one level of complexity; the message of the letter is an abstract attribute which emerges at a higher level of complexity. Objects are described at different levels of description commensurate with their levels of complexity. However, at a high level of description, separate objects may no longer be differentiated. For example, the object ‘income tax return’ and the object ‘personal letter’ are both ‘correspondence’ objects at a higher level of description. Lower levels of description distinguish their respective attributes of content, intended correspondent etc. In this way, attributes of an object described at one level of description completely re-represent those described at a lower level.
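The collapsing of distinct objects into one class at a higher level of description can be sketched as a toy classifier. The attribute values and the function below are illustrative assumptions built on the correspondence example, not part of the conception itself.

```python
# Two objects described at a lower level by the attributes that
# distinguish them (content, intended correspondent, etc.):
lower_level = {
    "income tax return": {"content": "tax declaration", "correspondent": "tax office"},
    "personal letter":   {"content": "personal news",   "correspondent": "acquaintance"},
}

def higher_level(obj: str) -> str:
    """At a higher level of description, both objects are no longer
    differentiated: each is a 'correspondence' object."""
    return "correspondence" if obj in lower_level else obj

# Every lower-level object maps to the single higher-level class:
classes = {higher_level(o) for o in lower_level}
```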

2.2.3 Relations between attributes  

Attributes of objects are related, and in two ways. First, attributes at different levels of complexity are related. As indicated earlier, those at one level are completely subsumed in those at a higher level. In particular, abstract attributes will occur at higher levels of complexity than physical attributes and will subsume those lower-level physical attributes. For example, the abstract attributes of an object ‘message’ concerning the representation of its content by language subsume the lower-level physical attributes, such as the font of the characters expressing the language. As an alternative example, an industrial process, such as a steel rolling process in a foundry, is an object whose abstract attributes will include the process’s efficiency. Efficiency subsumes physical attributes of the process – its power consumption, rate of output, dimensions of the output (the rolled steel), etc – emerging at a lower level of complexity. Second, attributes of objects are related within levels of complexity. There is a dependency between the attributes of an object emerging within the same level of complexity. For example, the attributes of the industrial process of power consumption and rate of output emerge at the same level and are inter-dependent.

2.2.4 Attribute states and affordance

At any point or event in the history of an object, each of its attributes is conceptualised as having a state. Further, those states may change. For example, the content and characters (attributes) of a letter (object) may change state: the content with respect to meaning and grammar etc; its characters with respect to size and font etc. Objects exhibit an affordance for transformation, engendered by their attributes’ potential for state change (see Gibson, 1977). Affordance is generally pluralistic in the sense that there may be many, or even, infinite transformations of objects, according to the potential changes of state of their attributes. Attributes’ relations are such that state changes of one attribute may also manifest state changes in related attributes, whether within the same level of complexity, or across different levels of complexity. For example, changing the rate of output of an industrial process (lower level attribute) will change both its power consumption (same level attribute) and its efficiency (higher level attribute).
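A minimal sketch of the steel-rolling example, in which a state change of one attribute manifests state changes in related attributes, both within and across levels of complexity. The dependency formula between rate and power is invented purely for illustration.

```python
class Process:
    """Toy model of the steel rolling process as an object whose
    attributes have states that can change."""

    def __init__(self, rate: float, power: float):
        self.rate = rate      # rate of output (lower-level attribute)
        self.power = power    # power consumption (same-level attribute)

    @property
    def efficiency(self) -> float:
        # higher-level attribute subsuming the lower-level ones
        return self.rate / self.power

    def set_rate(self, rate: float) -> None:
        # a state change of one attribute manifests state changes in
        # related attributes; the linear coupling here is an assumption
        self.rate = rate
        self.power = 50.0 + 2.0 * rate

p = Process(rate=10.0, power=70.0)
before = p.efficiency
p.set_rate(20.0)
# power (same level) and efficiency (higher level) both changed
```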

2.2.5 Organisations, domains (of application) and the requirement for attribute state changes

A domain of application may be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’). The object ‘book’ may be associated with the domain of typesetting (state changes of its layout attributes) and with the domain of authorship (state changes of its textual content). In principle, a domain may have any level of generality, for example, the writing of letters and the writing of a particular sort of letter. Organisations are conceptualised as having domains as their operational province and as requiring the realisation of the affordance of objects. It is a requirement satisfied through work. Work is evidenced in the state changes of attributes by which an object is intentionally transformed: it produces transforms, that is, objects whose attributes have an intended state. For example, ‘completing a tax return’ and ‘writing to an acquaintance’ each have a ‘letter’ as their transform, where those letters are objects whose attributes (their content, format and status, for example) have an intended state. Further editing of those letters would produce additional state changes, and therein, new transforms.

2.2.6 Goals

Organisations express their requirement for the transformation of objects through specifying goals. A product goal specifies a required transform – a required realisation of the affordance of an object. In expressing the required transformation of an object, a product goal will generally suppose necessary state changes of many attributes. The requirement of each attribute state change can be expressed as a task goal, deriving from the product goal. So, for example, the product goal demanding transformation of a letter to make its message more courteous would be expressed by task goals possibly requiring state changes of semantic attributes of the propositional structure of the text, and of syntactic attributes of the grammatical structure.

Hence, a product goal can be re-expressed as a task goal structure, a hierarchical structure expressing the relations between task goals, for example, their sequences. In the case of the computer-controlled steel rolling process, the process is an object whose transformation is required by a foundry organisation and expressed by a product goal. For example, the product goal may specify the elimination of deviations of the process from a desired efficiency. As indicated earlier, efficiency will at least subsume the process’s attributes of power consumption, rate of output, dimensions of the output (the rolled steel), etc. As also indicated earlier, those attributes will be inter-dependent such that state changes of one will produce state changes in the others – for example, changes in rate of output will also change the power consumption and the efficiency of the process.

In this way, the product goal (of correcting deviations from the desired efficiency) supposes the related task goals (of setting power consumption, rate of output, dimensions of the output, etc). Hence, the product goal can be expressed as a task goal structure, and task goals within it will be assigned to the operator monitoring the process.

2.2.7 Quality

The transformation of an object demanded by a product goal will generally be of a multiplicity of attribute state changes – both within and across levels of complexity. Consequently, there may be alternative transforms which would satisfy a product goal – letters with different styles, for example – where those different transforms exhibit differing compromises between attribute state changes of the object. By the same measure, there may also be transforms which will be at variance with the product goal. The concept of quality (Q) describes the variance of an actual transform with that specified by a product goal. It enables all possible outcomes of work to be equated and evaluated.
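Quality as the variance of an actual transform from the product-goal-specified transform can be sketched as a simple score over attribute states. The attribute names follow the letter example; the scoring rule (fraction of goal-specified states achieved) is an illustrative assumption, not the paper's definition of Q.

```python
# Goal-specified and actual attribute states of the 'letter' transform
# (values invented for illustration):
goal_state   = {"content": "courteous", "format": "A4", "status": "final"}
actual_state = {"content": "courteous", "format": "A4", "status": "draft"}

def quality(actual: dict, goal: dict) -> float:
    """Assumed measure of Q: the fraction of goal-specified attribute
    states the actual transform achieves (1.0 = no variance)."""
    met = sum(actual.get(attr) == state for attr, state in goal.items())
    return met / len(goal)

q = quality(actual_state, goal_state)   # 2 of 3 attribute states match
```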

2.2.8 Work and the user

Conception of the domain then, is of objects, characterised by their attributes, and exhibiting an affordance arising from the potential changes of state of those attributes. By specifying product goals, organisations express their requirement for transforms – objects with specific attribute states. Transforms are produced through work, which occurs only in the conjunction of objects affording transformation and systems capable of producing a transformation. From product goals derive a structure of related task goals which can be assigned either to the human or to the computer (or both) within an associated worksystem. The task goals assigned to the human are those which motivate the human’s behaviours. The actual state changes (and therein transforms) which those behaviours produce may or may not be those specified by task and product goals, a difference expressed by the concept of quality.

Taken together, the concepts presented in this section support the HF conception’s expression of work as relating to the user. The following section presents the concepts expressing the interactive worksystem as relating to the user.

2.3. Conception of the Interactive Worksystem and the User.

The conception for HF identifies interactive worksystems consisting of human and computer behaviours together performing work. This section presents the concepts by which interactive worksystems and the user are expressed.

2.3.1 Interactive worksystems

Humans are able to conceptualise goals, and their corresponding behaviours are said to be intentional (or purposeful). Computers, and machines more generally, are designed to achieve goals, and their corresponding behaviours are said to be intended (or purposive). Human behaviour is teleological; machine behaviour is teleonomic (Checkland, 1981). An interactive worksystem (‘worksystem’) is a behavioural system distinguished by a boundary enclosing all human and computer behaviours whose purpose is to achieve and satisfy a common goal. For example, the behaviours of a secretary and wordprocessor whose purpose is to produce letters constitute a worksystem.

Critically, it is only by identifying that common goal that the boundary of the worksystem can be established: entities, and more so humans, may exhibit a range of contiguous behaviours, and only by specifying the goals of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified. Worksystems transform objects by producing state changes in the abstract and physical attributes of those objects (see Section 2.2). The secretary and wordprocessor may transform the object ‘correspondence’ by changing both the attributes of its meaning and the attributes of its layout.

More generally, a worksystem may transform an object through state changes produced in related attributes. An operator monitoring a computer-controlled industrial process may change the efficiency of the process through changing its rate of output.

The behaviours of the human and computer are conceptualised as behavioural sub-systems of the worksystem – sub-systems which interact. The human behavioural sub-system is here more appropriately termed the user. Behaviour may be loosely understood as ‘what the human does’, in contrast with ‘what is done’ (i.e. attribute state changes in a domain). More precisely, the user is conceptualised as: a system of distinct and related human behaviours, identifiable as the sequence of states of a person interacting with a computer to perform work, and corresponding with a purposeful (intentional) transformation of objects in a domain (see also Ashby, 1956). Although possible at many levels, the user must at least be expressed at a level commensurate with the level of description of the transformation of objects in the domain. For example, a secretary interacting with an electronic mailing facility is a user whose behaviours include receiving and replying to messages. An operator interacting with a computer-controlled milling machine is a user whose behaviours include planning the tool path to produce a component of specified geometry and tolerance.

2.3.2 The user as a system of mental and physical behaviours

The behaviours constituting a worksystem are both physical and abstract. Abstract behaviours are generally the acquisition, storage, and transformation of information. They represent and process information concerning at least: domain objects and their attributes, attribute relations and attribute states, and the transformations required by goals. Physical behaviours are related to, and express, abstract behaviours. Accordingly, the user is conceptualised as a system of both mental (abstract) and overt (physical) behaviours which extend a mutual influence – they are related. In particular, they are related within an assumed hierarchy of behaviour types (and their control) wherein mental behaviours generally determine, and are expressed by, overt behaviours. Mental behaviours may transform (abstract) domain objects represented in cognition, or express through overt behaviour plans for transforming domain objects.

So, for example, the operator working in the control room of the foundry has the product goal of maintaining a desired condition of the computer-controlled steel rolling process. The operator attends to the computer (whose behaviours include the transmission of information about the process). Hence, the operator acquires a representation of the current condition of the process by collating the information displayed by the computer and assessing it by comparison with the condition specified by the product goal. The operator’s acquisition, collation and assessment are each distinct mental behaviours, conceptualised as representing and processing information. The operator reasons about the attribute state changes necessary to eliminate any discrepancy between current and desired conditions of the process, that is, the set of related changes which will produce the required transformation of the process.
That decision is expressed in the set of instructions issued to the computer through overt behaviour – making keystrokes, for example.

The user is conceptualised as having cognitive, conative and affective aspects. The cognitive aspects of the user are those of their knowing, reasoning and remembering, etc; the conative aspects are those of their acting, trying and persevering, etc; and the affective aspects are those of their being patient, caring, and assured, etc. Both mental and overt human behaviours are conceptualised as having these three aspects.

2.3.3 Human-computer Interaction

Although the human and computer behaviours may be treated as separable sub-systems of the worksystem, those sub-systems extend a “mutual influence”, or interaction, whose configuration principally determines the worksystem (Ashby, 1956). Interaction is conceptualised as: the mutual influence of the user (i.e., the human behaviours) and the computer behaviours associated within an interactive worksystem. Hence, the user {U} and computer behaviours {C} constituting a worksystem {S} were expressed in the general design problem of HF (Section 2.1) as: {U} interacting with {C} = {S}. Interaction of the human and computer behaviours is the fundamental determinant of the worksystem, rather than their individual behaviours per se. For example, the behaviours of an operator interact with the behaviours of a computer-controlled milling machine. The operator’s behaviours, perhaps in programming the tool path, influence the behaviours of the machine; the behaviours of the machine, perhaps the run-out of its tool path, influence the selection behaviour of the operator. The configuration of their interaction – the inspection that the machine allows the operator, the tool path control that the operator allows the machine – determines the worksystem that the operator and machine behaviours constitute in their planning and execution of the machining work.
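The mutual influence of the two behavioural sub-systems can be sketched in code. This is a minimal illustrative model, not part of the paper: the class and function names (`BehaviouralSubsystem`, `worksystem_trace`) and the rule-table representation of behaviour are hypothetical assumptions, made only to show how each sub-system's behaviour becomes the input to the other's.

```python
# Illustrative sketch (assumed names throughout): the worksystem {S}
# modelled as user behaviours {U} interacting with computer behaviours {C}.

class BehaviouralSubsystem:
    """A system of behaviours: maps an observed behaviour of the other
    sub-system to a responding behaviour of its own."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # observed behaviour -> responding behaviour

    def respond(self, observed):
        return self.rules.get(observed, "idle")

def worksystem_trace(user, computer, first_behaviour, steps):
    """Trace the mutual influence: each sub-system's behaviour is the
    input to the other's, configuring the worksystem."""
    trace, behaviour, actor = [], first_behaviour, user
    for _ in range(steps):
        trace.append((actor.name, behaviour))
        actor = computer if actor is user else user
        behaviour = actor.respond(behaviour)
    return trace

# The operator/milling-machine example: the operator's programming
# influences the machine's tool path; the tool path's run-out influences
# the operator's revision behaviour.
operator = BehaviouralSubsystem("user", {"tool_path_runout": "revise_program"})
machine = BehaviouralSubsystem("computer", {"write_program": "tool_path_runout",
                                            "revise_program": "cut_to_tolerance"})
print(worksystem_trace(operator, machine, "write_program", 4))
```

The point of the sketch is only that neither sub-system's behaviour stream can be described in isolation: the trace alternates between the two, as the conception's notion of interaction requires.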

The assignment of task goals, then, to either the human or the computer delimits the user and therein configures the interaction. For example, the replacement of a mis-spelled word in a document is a product goal which can be expressed as a task goal structure of necessary and related attribute state changes. In particular, the text field for the correctly spelled word demands an attribute state change in the text spacing of the document. Specifying that state change may be a task goal assigned to the user, as in interaction with the behaviours of early text editor designs, or it may be a task goal assigned to the computer, as in interaction with the ‘wrap-round’ behaviours of contemporary wordprocessor designs. The assignment of the task goal of specification configures the interaction of the human and computer behaviours in each case; it delimits the user.

2.3.4 On-line and off-line behaviours

The user may include both on-line and off-line human behaviours: on-line behaviours are associated with the computer’s representation of the domain; off-line behaviours are associated with non-computer representations of the domain, or with the domain itself. As an illustration of the distinction, consider the example of an interactive worksystem consisting of behaviours of a secretary and a wordprocessor and required to produce a paper-based copy of a dictated letter stored on audio tape. The product goal of the worksystem here requires the transformation of the physical representation of the letter from one medium to another, that is, from tape to paper. From the product goal derive the task goals relating to required attribute state changes of the letter. Certain of those task goals will be assigned to the secretary. The secretary’s off-line behaviours include listening to, and assimilating, the dictated letter, so acquiring a representation of the domain directly. By contrast, the secretary’s on-line behaviours include specifying the representation by the computer of the transposed content of the letter in a desired visual/verbal format of stored physical symbols. On-line and off-line human behaviours are a particular case of the ‘internal’ interactions between a human’s behaviours as, for example, when the secretary’s typing interacts with memorisations of successive segments of the dictated letter.

2.3.5 Human structures and behaviour

Conceptualisation of the user as a system of human behaviours needs to be extended to the structures supporting behaviour. Whereas human behaviours may be loosely understood as ‘what the human does’, the structures supporting them can be understood as ‘how they are able to do what they do’ (see Marr, 1982; Wilden, 1980). There is a one-to-many mapping between a human’s structures and the behaviours they might support: the same structures may support many different behaviours. In co-extensively enabling behaviours at each level, structures must exist at commensurate levels. The human structural architecture is both physical and mental, providing the capability for a human’s overt and mental behaviours. It provides a representation of domain information as symbols (physical and abstract) and concepts, and the processes available for the transformation of those representations. It provides an abstract structure for expressing information as mental behaviour. It provides a physical structure for expressing information as physical behaviour. Physical human structure is neural, bio-mechanical and physiological. Mental structure consists of representational schemes and processes. Corresponding with the behaviours it supports and enables, human structure has cognitive, conative and affective aspects. The cognitive aspects of human structures include information and knowledge – that is, symbolic and conceptual representations – of the domain, of the computer and of the person themselves, and they include the ability to reason. The conative aspects of human structures motivate the implementation of behaviour and its perseverance in pursuing task goals. The affective aspects of human structures include the personality and temperament which respond to and support behaviour.

To illustrate the conceptualisation of mental structure, consider the example of structure supporting an operator’s behaviours in the foundry control room. Physical structure supports perception of the steel rolling process and the execution of corrective control actions to the process through the computer input devices. Mental structures support the acquisition, memorisation and transformation of information about the steel rolling process. The knowledge which the operator has of the process and of the computer supports the collation, assessment and reasoning about corrective control actions to be executed. The limits of human structure determine the limits of the behaviours it might support. Such structural limits include those of: intellectual ability; knowledge of the domain and the computer; memory and attentional capacities; patience; perseverance; dexterity; and visual acuity, etc. The structural limits on behaviour may become particularly apparent when one part of the structure (channel capacity, perhaps) is required to support concurrent behaviours, perhaps simultaneous visual attending and reasoning behaviours. The user, then, is ‘resource’ limited by the co-extensive human structure. The behavioural limits of the human determined by structure are not only difficult to define with any kind of completeness, they will also be variable, because that structure can change in a number of respects. A person may undergo self-determined changes in response to the domain – as expressed in learning phenomena, acquiring new knowledge of the domain, of the computer, and indeed of themselves, to better support behaviour.

Also, human structure degrades with the expenditure of resources in behaviour, as evidenced in the phenomena of mental and physical fatigue. It may also change in response to motivating or de-motivating influences of the organisation which maintains the worksystem. It must be emphasised that the structure supporting the user is independent of the structure supporting the computer behaviours. Neither structure can make any incursion into the other, and neither can directly support the behaviours of the other. (Indeed this separability of structures is a pre-condition for expressing the worksystem as two interacting behavioural sub-systems.) Although the structures may change in response to each other, they are not, unlike the behaviours they support, interactive; they are not included within the worksystem. The combination of structures of both human and computer supporting their interacting behaviours is conceptualised as the user interface.

2.3.6 Resource costs of the user

Work performed by interactive worksystems always incurs resource costs. Given the separability of the human and the computer behaviours, certain resource costs are associated directly with the user and distinguished as structural human costs and behavioural human costs. Structural human costs are the costs of the human structures co-extensive with the user. Such costs are incurred in developing and maintaining human skills and knowledge. More specifically, structural human costs are incurred in training and educating people, so developing in them the structures which will enable their behaviours necessary for effective working. Training and educating may augment or modify existing structures, provide the person with entirely novel structures, or perhaps even reduce existing structures. Structural human costs will be incurred in each case and will frequently be borne by the organisation. An example of structural human costs might be the costs of training a secretary in the particular style of layout required for an organisation’s correspondence with its clients, and in the operation of the computer by which that layout style can be created.

Structural human costs may be differentiated as cognitive, conative and affective structural costs of the user. Cognitive structural costs express the costs of developing the knowledge and reasoning abilities of people, and their ability to formulate and express novel plans in their overt behaviour, as necessary for effective working. Conative structural costs express the costs of developing the activity, stamina and persistence of people as necessary for effective working. Affective structural costs express the costs of developing in people their patience, care and assurance as necessary for effective working. Behavioural human costs are the resource costs incurred by the user (i.e. by human behaviours) in recruiting human structures to perform work. They are both physical and mental resource costs. Physical behavioural costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and of attending to a screen display; they may be expressed without differentiation as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be expressed without differentiation as mental workload. Mental behavioural costs are ultimately manifest as physical behavioural costs.

When differentiated, mental and physical behavioural costs are conceptualised as the cognitive, conative and affective behavioural costs of the user. Cognitive behavioural costs relate to both the mental representing and processing of information, and the demands made on the individual’s extant knowledge, as well as the physical expression thereof in the formulation and expression of a novel plan. Conative behavioural costs relate to the repeated mental and physical actions and effort required by the formulation and expression of the novel plan. Affective behavioural costs relate to the emotional aspects of the mental and physical behaviours required in the formulation and expression of the novel plan. Behavioural human costs are evidenced in human fatigue, stress and frustration; they are costs borne directly by the individual.
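The taxonomy of human resource costs set out above – structural versus behavioural, each differentiated into cognitive, conative and affective components – can be summarised in a small data-structure sketch. The class name, field names and numeric values below are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the resource-cost taxonomy: human costs split
# into structural costs (developing and maintaining structures, e.g.
# training) and behavioural costs (incurred by behaviours recruiting
# those structures), each with cognitive/conative/affective components.

from dataclasses import dataclass

@dataclass
class HumanCosts:
    structural: dict    # e.g. {"cognitive": .., "conative": .., "affective": ..}
    behavioural: dict   # same differentiation; mental + physical workload

    def total(self):
        # Total human resource cost borne across organisation and individual.
        return sum(self.structural.values()) + sum(self.behavioural.values())

# The secretary example: training in layout style and computer operation
# incurs structural costs; producing correspondence incurs behavioural costs.
# All numbers are arbitrary illustrative units.
secretary = HumanCosts(
    structural={"cognitive": 40, "conative": 10, "affective": 5},
    behavioural={"cognitive": 8, "conative": 4, "affective": 2},
)
print(secretary.total())  # 69
```

The split matters because, as Section 2.4 argues, structural and behavioural costs can be traded off: higher training (structural) costs may buy lower workload (behavioural) costs.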

2.4. Conception of Performance of the Interactive Worksystem and the User

In asserting the general design problem of HF (Section 2.1.), it was reasoned that: “Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs incurred by it. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.” This statement followed from the distinction between interactive worksystems performing work, and the work they perform. Subsequent elaboration upon this distinction enables reconsideration of the concept of performance, and examination of its central importance within the conception for HF. Because the factors which constitute this engineering concept of performance (i.e. the quality and costs of work) are determined by behaviour, a concordance is assumed between the behaviours of worksystems and their performance: behaviour determines performance (see Ashby, 1956; Rouse, 1980).

The quality of work performed by interactive worksystems is conceptualised as the actual transformation of objects with regard to the transformation demanded by product goals. The costs of work are conceptualised as the resource costs incurred by the worksystem, and are separately attributed to the human and the computer. Specifically, the resource costs incurred by the human are differentiated as: structural human costs – the costs of establishing and maintaining the structure supporting behaviour; and behavioural human costs – the costs of the behaviour recruiting structure to its own support. Structural and behavioural human costs were further differentiated as cognitive, conative and affective costs. A desired performance of an interactive worksystem may be conceptualised. Such a desired performance might either be absolute, or relative as in a comparative performance to be matched or improved upon. Accordingly, criteria expressing desired performance may either specify categorical gross resource costs and quality, or they may specify critical instances of those factors to be matched or improved upon. Discriminating the user’s performance within the performance of the interactive worksystem would require the separate assimilation of human resource costs and the achievement of the desired attribute state changes demanded by assigned task goals.

Further assertions concerning the user arise from the conceptualisation of worksystem performance.

First, the conception of performance is able to distinguish the quality of a transform from the effectiveness of the worksystem which produces it. This distinction is essential, as two worksystems might be capable of producing the same transform, yet if one were to incur a greater resource cost than the other, its effectiveness would be the lesser of the two.

Second, given the concordance of behaviour with performance, optimal human (and equally, computer) behaviours may be conceived as those which incur a minimum of resource costs in producing a given transform. Optimal human behaviour would minimise the resource costs incurred in producing a transform of given quality (Q). However, that optimality may only be categorically determined with regard to worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it (Pd). To be more specific, it is not sufficient for human behaviours simply to be error-free. Although the elimination of errorful human behaviours may contribute to the best performance possible of a given worksystem, that performance may still be less than the desired performance (see Section 1.4, where the possibility of expressing the desired performance of a system or artifact by an absolute value is associated with the hardness of the design problem). Conversely, although human behaviours may be errorful, a worksystem may still support a desired performance.

Third, the common measures of human ‘performance’ – errors and time – are related in this conceptualisation of performance. Errors are behaviours which increase the resource costs incurred in producing a given transform, or which reduce the quality of the transform, or both. The duration of human behaviours may (very generally) be associated with increases in behavioural user costs.

Fourth, structural and behavioural human costs may be traded-off in performance. More sophisticated human structures supporting the user, that is, the knowledge and skills of experienced and trained people, will incur high (structural) costs to develop, but enable more efficient behaviours – and therein, reduced behavioural costs.

Fifth, resource costs incurred by the human and the computer may be traded-off in performance. A user can sustain a level of performance of the worksystem by optimising behaviours to compensate for the poor behaviours of the computer (and vice versa), i.e., behavioural costs of the user and computer are traded-off. This is of particular concern for HF as the ability of humans to adapt their behaviours to compensate for poor computer-based systems often obscures the low effectiveness of worksystems.
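The assertions above – that equal transforms with unequal costs yield unequal effectiveness, and that human and computer costs may be traded off in performance – can be illustrated with a toy calculation. The ratio form of effectiveness and all numeric values here are assumptions for illustration only; the paper does not commit to any particular formula.

```python
# Illustrative sketch (assumed formula and units): performance assimilates
# the quality of the transform and the resource costs incurred by the
# worksystem, attributed separately to human and computer.

def effectiveness(quality, human_costs, computer_costs):
    """Higher transform quality and lower total resource costs give higher
    effectiveness. The ratio form is purely an illustrative assumption."""
    return quality / (human_costs + computer_costs)

# Three worksystems producing a transform of the same quality Q:
q = 100.0
system_a = effectiveness(q, human_costs=20.0, computer_costs=5.0)
system_b = effectiveness(q, human_costs=5.0, computer_costs=20.0)
system_c = effectiveness(q, human_costs=30.0, computer_costs=20.0)

# Human and computer costs trade off: A and B are equally effective.
assert system_a == system_b
# Same transform at greater total cost: C is the less effective worksystem.
assert system_c < system_a
```

The trade-off case is the troublesome one for HF: a user absorbing extra behavioural costs can hold worksystem performance constant, masking a poorly behaving computer.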

This completes the conception for HF. From the initial assertion of the general design problem of HF, the concepts that were invoked in its formal expression have subsequently been defined and elaborated, and their coherence established.

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

A conception for HF is a pre-requisite for the formulation of HF engineering principles. It is the concepts and their relations which express the HF general design problem and which would be embodied in HF engineering principles. The form of a conception for HF was proposed in Part II. Originating in a conception for an engineering discipline of HCI (Dowell and Long, 1988a), the conception for HF is postulated as appropriate for supporting the formulation of HF engineering principles. The conception for HF is a broad view of the HF general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem.

References  

See complete version of the paper in 2.5.

1.4 Long and Dowell (1989) – HCI Engineering Discipline – Short Version

Conceptions of the Discipline of HCI: Craft; Applied Science, and Engineering

John Long and John Dowell Ergonomics Unit, University College London, 26 Bedford Way, London. WC1H 0AP.

Pre-print: In: Sutcliffe, A. and Macaulay, L. (eds.) People and Computers V: Proceedings of the Fifth Conference of the British Computer Society (pp. 9-32). Cambridge University Press: Cambridge, UK. http://discovery.ucl.ac.uk/15292/

….. First, consideration of disciplines in general suggests their complete definition can be summarised as: ‘knowledge, practices and a general problem having a particular scope, where knowledge supports practices seeking solutions to the general problem’. Second, the scope of the general problem of HCI is defined by reference to humans, computers, and the work they perform. Third, by intersecting these two definitions, a framework is proposed ……

The framework expresses the essential characteristics of the HCI discipline, and can be summarised as: ‘the use of HCI knowledge to support practices seeking solutions to the general problem of HCI’……

Contents
1. Introduction
1.1. Alternative Interpretations of the Theme
1.2. Alternative Conceptions of HCI: the Requirement for a Framework
1.3. Aims
2. A Framework for Conceptions of the HCI Discipline
2.1. On the Nature of Disciplines
2.2. Of Humans Interacting with Computers
2.3. The Framework for Conceptions of the HCI Discipline
3. Three Conceptions of the Discipline of HCI
3.1. Conception of HCI as a Craft Discipline
3.2. Conception of HCI as an Applied Science Discipline
3.3. Conception of HCI as an Engineering Discipline
4. Summary and Conclusions

1. Introduction 

1.1. Alternative Interpretations of the Theme……..

1.2. Alternative Conceptions of HCI: the Requirement for a Framework …….. A conception of the HCI discipline offers a unitary view; its value lies in the coherence and completeness with which it enables understanding of the discipline, how the discipline operates, and its effectiveness……..  A suitable structure for this purpose would be a framework identifying the essential characteristics of the HCI discipline……..

1.3. Aims …..  the aims of this paper are as follows: (i) to propose a framework for conceptions of the HCI discipline

2. A Framework for Conceptions of the HCI Discipline

Two prerequisites of a framework for conceptions of the HCI discipline are assumed. The first is a definition of disciplines appropriate for the expression of HCI. The second is a definition of the province of concern of the HCI discipline, which, whilst broad enough to include all disparate aspects, enables the discipline’s boundaries to be identified. Each of these prerequisites will be addressed in turn (Sections 2.1. and 2.2.). From them is derived a framework for conceptions of the HCI discipline (Section 2.3.). Source material for the framework is to be found in (Dowell & Long [1988]; Dowell & Long [manuscript submitted for publication]; and Long [1989]).

2.1. On the Nature of Disciplines

Most definitions assume three primary characteristics of disciplines: knowledge; practice; and a general problem. All definitions of disciplines make reference to discipline knowledge as the product of research or more generally of a field of study…….. All disciplines would appear to have knowledge as a component (for example, scientific discipline knowledge, engineering discipline knowledge, medical discipline knowledge, etc). Knowledge, therefore, is a necessary characteristic of a discipline. Consideration of different disciplines suggests that practice is also a necessary characteristic of a discipline. Further, a discipline’s knowledge is used by its practices to solve a general (discipline) problem. …….. The discipline of engineering includes the engineering practice addressing the general (engineering) problem of design. …….. Practice, therefore, and the general (discipline) problem which it uses knowledge to solve, are also necessary characteristics of a discipline. Clearly, disciplines are here being distinguished by the general (discipline) problem they address. …….. Yet consideration also suggests those general (discipline) problems each have the necessary property of a scope. Decomposition of a general (discipline) problem with regard to its scope exposes (subsumed) general problems of particular scopes……. 

Two basic properties of disciplines are therefore concluded. One is the property of the scope of a general discipline problem. The other is the possibility of division of a discipline into sub-disciplines by decomposition of its general discipline problem. Taken together, the three necessary characteristics of a discipline (and the two basic properties additionally concluded), suggest the definition of a discipline as: ‘the use of knowledge to support practices seeking solutions to a general problem having a particular scope’. It is represented schematically in Figure 1. This definition will be used subsequently to express HCI.

2.2. Of Humans Interacting with Computers

The second prerequisite of a framework for conceptions of the HCI discipline is a definition of the scope of the general problem addressed by the discipline. In delimiting the province of concern of the HCI discipline, such a definition might assure the completeness of any one conception (see Section 1.2.). HCI concerns humans and computers interacting to perform work. ……. Further, since both organisations and individuals have requirements for the effectiveness with which work is performed, also implicated is the optimisation of all aspects of the interactions supporting effectiveness. Taken together, these implications suggest a definition of the scope of the general (discipline) problem of HCI. It is expressed, in summary, as ‘humans and computers interacting to perform work effectively’; it is represented schematically in Figure 2. This definition, in conjunction with the general definition of disciplines, will now enable expression of a framework for conceptions of the HCI discipline.

2.3. The Framework for Conceptions of the HCI Discipline

….. Given the definition of its scope (above), and the preceding definition of disciplines, the general problem addressed by the discipline of HCI is asserted as: ‘the design of humans and computers interacting to perform work effectively’. It is a general (discipline) problem of design: its ultimate product is designs. The practices of the HCI discipline seek solutions to this general problem, for example: in the construction of computer hardware and software; in the selection and training of humans to use computers; in aspects of the management of work, etc. HCI discipline knowledge supports the practices that provide such solutions. …..
Hence, we may express a framework for conceptions of the discipline of HCI as: ‘the use of HCI knowledge to support practices seeking solutions to the general problem of HCI of designing humans and computers interacting to perform work effectively’. …… Importantly, the framework supposes the nature of the effectiveness of the HCI discipline itself. There are two apparent components of this effectiveness. The first is the success with which its practices solve the general problem of designing humans and computers interacting to perform work effectively. It may be understood to be synonymous with ‘product quality’. The second component of effectiveness of the discipline is the resource costs incurred in solving the general problem to a given degree of success – costs incurred by both the acquisition and application of knowledge. It may be understood to be synonymous with ‘production costs’.

Figure 3. Framework for Conceptions of the Discipline of HCI ……

3. Three Conceptions of the Discipline of HCI

A review of the literature was undertaken to identify alternative conceptions of HCI, that is, conceptions of the use of knowledge to support practices solving the general problem of the design of humans and computers interacting to perform work effectively. The review identified three such conceptions. They are HCI as a craft discipline; as an applied scientific discipline; and as an engineering discipline……..

3.1. Conception of HCI as a Craft Discipline …….. Figure 4 Conception of HCI as a Craft Discipline ……..

3.2. Conception of HCI as an Applied Science Discipline …….. Figure 5 Conception of HCI as an Applied Science Discipline ……..

3.3. Conception of HCI as an Engineering Discipline

The discipline of engineering may characteristically solve its general problem (of design) by the specification of designs before their implementation. It is able to do so because of the prescriptive nature of its discipline knowledge supporting those practices – knowledge formulated as engineering principles. Further, its practices are characterised by their aim of ‘design for performance’. Engineering principles may enable designs to be prescriptively specified for artefacts or systems which, when implemented, demonstrate a prescribed and assured performance. And further, engineering disciplines may solve their general problem by exploiting a decompositional approach to design. Designs specified at a general level of description may be systematically decomposed until their specification is possible at a level of description of their complete implementation. Engineering principles may assure each level of specification as a representation of the previous level. This Section summarises the conception (schematically represented in Figure 6) and attempts to indicate the effectiveness of such a discipline. The conception of HCI engineering principles assumes the possibility of a codified, general and testable formulation of HCI discipline knowledge which might be prescriptively applied to designing humans and computers interacting to perform work effectively. ……..

…….. The concepts described enable the expression of the general problem addressed by an engineering discipline of HCI as: specify then implement user behaviour {U} and computer behaviour {C}, such that {U} interacting with {C} = {S}. An HF engineering principle would take as input a performance requirement of the interactive worksystem, and a specified behaviour of the computer, and prescribe the necessary interacting behaviour of the user.

4. Summary and Conclusions

…….. The proposal made here is that the general problem of HCI is the design of humans and computers interacting to perform work effectively. The qualification of the general problem as ‘design’, and the addition to the scope of that problem of ‘…. to perform work effectively’, has important consequences for the different conceptions of HCI….. ….. The different types of knowledge and the different types of practice have important consequences for the effectiveness of any discipline of HCI……

References

For References – see the full version of the paper 1.5.

 

1995/96: Kieran Duignan

 

Date of MSc: 1995/96

 

MSc Project Title:

Using Soft Systems Methodology to Elicit User Requirements for Adapting a Socio-technical System

 

Pre-MSc Background:

Counselling, career guidance, training research/development, sales

 

Pre-MSc View of HCI/Cognitive Ergonomics:

As an emergent domain in Cognitive Psychology

 

Post-MSc View of HCI/Cognitive Ergonomics:

As a buoyantly emergent domain in Cognitive Psychology with potential both for improving organisational performance and for being abused by managers inclined to indulge the dark side of their power.

 

Subsequent-to-MSc View of HCI/Cognitive Ergonomics:

As a domain in social, economic and organisational behaviour with enormous potential for improvement and for exploitation.

 

Additional Reflections:

Teaching and academic supervision on the MSc (Ergonomics) at University College London in 1995/6 introduced me to a robust approach to applied research, although it took me quite some time to figure out how to apply this approach in the sphere of work in which I have since concentrated: Safety Psychology and Ergonomics.

The course was a positively transformative experience beyond my expectations, by virtue of the quality of teaching and generous supervision, formal and informal, in the ‘learning community,’ facilitated by so many inspiring staff.

Inclusiveness remains another marked feature of the course for me: 15 years before The Equality Act 2010, diversity of several kinds was evident in the student group. I still smile when I recall the acute stress I experienced after I created a portrait with twelve flags for an end-of-term celebration and found myself very sharply challenged by a normally placid and amiable fellow student, who strongly objected that the portrait included the national flag of one Middle Eastern country while omitting that of his own. I quickly amended the portrait to include a thirteenth flag.

For the information (and amusement) of students from other years, the end of term celebration in question included the presentation to Professor Long of a certificate, which read as follows:

The Tri-cycles and HIC Unit 95/96 bestow on

Professor John Long

Master of the House

A certificate of Special Merit for his intellectual leadership and linguistic prowess –

That surpasseth all understanding.

 

The last phrase might ring a bell or two for many students. John assures me that the certificate, now framed, continues to have pride of place in his kitchen.

 


Towards a Conception for an Engineering Discipline of Human Factors

 

John Dowell and John Long

Ergonomics Unit, University College London, 

26, Bedford Way, London. WC1H 0AP. 

Abstract  This paper concerns one possible response of Human Factors to the need for better user-interactions with computer-based systems. The paper is in two parts. Part I examines the potential for Human Factors to formulate engineering principles. A basic pre-requisite for realising that potential is a conception of the general design problem addressed by Human Factors. The problem is expressed informally as: ‘to design human interactions with computers for effective working’. A conception would provide the set of related concepts which both expressed the general design problem more formally, and which might be embodied in engineering principles. Part II of the paper proposes such a conception and illustrates its concepts. It is offered as an initial and speculative step towards a conception for an engineering discipline of Human Factors. In P. Barber and J. Laws (eds.), Special Issue on Cognitive Ergonomics, Ergonomics, 1989, vol. 32, no. 11, pp. 1513-1535.

Part I. Requirement for Human Factors as an Engineering Discipline of Human-Computer Interaction

1.1 Introduction;

1.2 Characterization of the human factors discipline;

1.3 State of the human factors art;

1.4 Human factors engineering;

1.5 The requirement for an engineering conception of human factors.

 

1.1 Introduction

Advances in computer technology continue to raise expectations for the effectiveness of its applications. No longer is it sufficient for computer-based systems simply ‘to work’, but rather, their contribution to the success of the organisations utilising them is now under scrutiny (Didner, 1988). Consequently, views of organisational effectiveness must be extended to take account of the (often unacceptable) demands made on people interacting with computers to perform work, and the needs of those people. Any technical support for such views must be similarly extended (Cooley, 1980).

With recognition of the importance of ‘human-computer interactions’ as a determinant of effectiveness (Long, Hammond, Barnard, and Morton, 1983), Cognitive Ergonomics is emerging as a new and specialist activity of Ergonomics or Human Factors (HF). Throughout this paper, HF is to be understood as a discipline which includes Cognitive Ergonomics, but only as it addresses human-computer interactions. This usage is contrasted with HF as a discipline which more generally addresses human-machine interactions. HF seeks to support the development of more effective computer-based systems. However, it has yet to prove itself in this respect, and moreover, the adequacy of the HF response to the need for better human-computer interactions is of concern. For it continues to be the case that interactions result from relatively ad hoc design activities to which may be attributed, at least in part, the frequent ineffectiveness of systems (Thimbleby, 1984).

This paper is concerned to develop one possible response of HF to the need for better human-computer interactions. It is in two parts. Part I examines the potential for HF to formulate HF engineering principles for supporting its better response. Pre-requisite to the realisation of that potential, it concludes, is a conception of the general design problem it addresses. Part II of the paper is a proposal for such a conception.

The structure of the paper is as follows. Part I first presents a characterisation of HF (Section 1.2) with regard to: the general design problem it addresses; its practices providing solutions to that problem; and its knowledge supporting those practices. The characterisation identifies the relations of HF with Software Engineering (SE) and with the super-ordinate discipline of Human-Computer Interaction (HCI). The characterisation supports both the assessment of contemporary HF and the arguments for the requirement of an engineering HF discipline.

Assessment of contemporary HF (Section 1.3.) concludes that its practices are predominantly those of a craft. Shortcomings of those practices are exposed which indict the absence of support from appropriate formal discipline knowledge. This absence prompts the question as to what might be the formal knowledge which HF could develop, and what might be the process of its formulation. By comparing the HF general design problem with other, better understood, general design problems, and by identifying the formal knowledge possessed by the corresponding disciplines, the potential for HF engineering principles is suggested (Section 1.4.).

However, a pre-requisite for the formulation of any engineering principle is a conception. A conception is a unitary (and consensus) view of a general design problem; its power lies in the coherence and completeness of its definition of the concepts which can express that problem. Engineering principles are articulated in terms of those concepts. Hence, the requirement for a conception for the HF discipline is concluded (Section 1.5.).

If HF is to be a discipline of the superordinate discipline of HCI, then the origin of a ‘conception for HF’ needs to be in a conception for the discipline of HCI itself. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline of HCI presented in Part II.

 

1.2. Characterisation of the Human Factors Discipline

HF seeks to support systems development through the systematic and reasoned design of human-computer interactions. As an endeavour, however, HF is still in its infancy, seeking to establish its identity and its proper contribution to systems development. For example, there is little consensus on how the role of HF in systems development is, or should be, configured with the role of SE (Walsh, Lim, Long, and Carver, 1988). A characterisation of the HF discipline is needed to clarify our understanding of both its current form and any conceivable future form. A framework supporting such a characterisation is summarised below (following Long and Dowell, 1989).

Most definitions of disciplines assume three primary characteristics: a general problem; practices, providing solutions to that problem; and knowledge, supporting those practices. This characterisation presupposes classes of general problem corresponding with types of discipline. For example, one class of general problem is that of the general design problem1 and includes the design of artefacts (of bridges, for example) and the design of ‘states of the world’ (of public administration, for example). Engineering and craft disciplines address general design problems.

Further consideration also suggests that any general problem has the necessary property of a scope, delimiting the province of concern of the associated discipline. Hence disciplines may also be distinguished from each other; for example, the engineering disciplines of Electrical and Mechanical Engineering are distinguished by their respective scopes of electrical and mechanical artefacts. So, knowledge possessed by Electrical Engineering supports its practices solving the general design problem of designing electrical artefacts (for example, Kirchhoff’s Laws would support the analysis of branch currents for a given network design for an amplifier’s power supply).

Although rudimentary, this framework can be used to provide a characterisation of the HF discipline. It also allows a distinction to be made between the disciplines of HF and SE. First, however, it is required that the super-ordinate discipline of HCI be postulated. Thus, HCI is a discipline addressing a general design problem expressed informally as: ‘to design human-computer interactions for effective working’. The scope of the HCI general design problem includes: humans, both as individuals, as groups, and as social organisations; computers, both as programmable machines, stand-alone and networked, and as functionally embedded devices within machines; and work, both with regard to individuals and the organisations in which it occurs (Long, 1989). For example, the general design problem of HCI includes the problems of designing the effective use of navigation systems by aircrew on flight-decks, and the effective use of wordprocessors by secretaries in offices.

The general design problem of HCI can be decomposed into two general design problems, each having a particular scope. Whilst subsumed within the general design problem of HCI, these two general design problems are expressed informally as: ‘to design human interactions with computers for effective working’; and ‘to design computer interactions with humans for effective working’.

Each general design problem can be associated with a different discipline of the superordinate discipline of HCI. HF addresses the former, SE addresses the latter. With different – though complementary – aims, both disciplines address the design of human-computer interactions for effective working. The HF discipline concerns the physical and mental aspects of the human interacting with the computer. The SE discipline concerns the physical and software aspects of the computer interacting with the human.

The practices of HF and SE are the activities providing solutions to their respective general design problems and are supported by their respective discipline knowledge. Figure 1 shows schematically this characterisation of HF as a sub-discipline of HCI (following Long and Dowell, 1989). The following section employs the characterisation to evaluate contemporary HF.

 

1.3. State of the Human Factors Art

It would be difficult to reject the claim that the contemporary HF discipline has the character of a craft (at times even of a technocratic art). Its practices can justifiably be described as a highly refined form of design by ‘trial and error’ (Long and Dowell, 1989). Characteristic of a craft, the execution and success of its practices in systems development depends principally on the expertise, guided intuition and accumulated experience which the practitioner brings to bear on the design problem.

It is also claimed that HF will always be a craft: that ultimately only the mind itself has the capability for reasoning about mental states, and for solving the under-specified and complex problem of designing user-interactions (see Carey, 1989); that only the designer’s mind can usefully infer the motivations underlying purposeful human behaviour, or make subjective assessments of the elegance or aesthetics of a computer interface (Bornat and Thimbleby, 1989).

The dogma of HF as necessarily a craft, whose knowledge may only be the accrued experience of its practitioners, is nowhere presented rationally. Notions of the indeterminism, or the unpredictability, of human behaviour are raised simply as a gesture. Since the dogma has support, it needs to be challenged to establish the extent to which it is correct, or to which it compels a misguided and counter-productive doctrine (see also, Carroll and Campbell, 1986).

Current HF practices exhibit four primary deficiencies which prompt the need to identify alternative forms for HF. First, HF practices are in general poorly integrated into systems development practices, nullifying the influence they might otherwise exert. Developers make implicit and explicit decisions with implications for user-interactions throughout the development process, typically without involving HF specialists. At an early stage of design, HF may offer only advice – advice which may all too easily be ignored and so not implemented. Its main contribution to the development of user-interactive systems is the evaluations it provides. Yet these are too often relegated to the closing stages of development programmes, where they can only suggest minor enhancements to completed designs because of the prohibitive costs of even modest re-implementations (Walsh et al., 1988).

Second, HF practices have a suspect efficacy. Their contribution to improving product quality in any instance remains highly variable. Because there is no guarantee that experience of one development programme is appropriate or complete in its recruitment to another, re-application of that experience cannot be assured of repeated success (Long and Dowell, 1989).

Third, HF practices are inefficient. Each development of a system requires the solving of new problems by implementation then testing. There is no formal structure within which experience accumulated in the successful development of previous systems can be recruited to support solutions to the new problems, except through the memory and intuitions of the designer. These may not be shared by others, except indirectly (for example, through the formulation of heuristics), and so experience may be lost and may have to be re-acquired (Long and Dowell, 1989).

The guidance may be direct – by the designer’s familiarity with psychological theory and practice, or may be indirect by means of guidelines derived from psychological findings. In both cases, the guidance can offer only advice which must be implemented then tested to assess its effectiveness. Since the general scientific problem is the explanation and prediction of phenomena, and not the design of artifacts, the guidance cannot be directly embodied in design specifications which offer a guarantee with respect to the effectiveness of the implemented design. It is not being claimed here that the application of psychology directly or indirectly cannot contribute to better practice or to better designs, only that a practice supported in such a manner remains a craft, because its practice is by implementation then test, that is, by trial and error (see also Long and Dowell, 1989).

Fourth, there are insufficient signs of systematic and intentional progress which will alleviate the three deficiencies of HF practices cited above. The lack of progress is particularly noticeable when HF is compared with the similarly nascent discipline of SE (Gries, 1981; Morgan, Shorter and Tainsh, 1988).

These four deficiencies are endemic to the craft nature of contemporary HF practice. They indict the tacit HF discipline knowledge consisting of accumulated experience embodied in procedures, even where that experience has been influenced by guidance offered by the science of psychology (see earlier footnote). Because the knowledge is tacit (i.e., implicit or informal), it cannot be operationalised, and hence the role of HF in systems development cannot be planned as would be necessary for the proper integration of the knowledge. Without being operationalised, its knowledge cannot be tested, and so the efficacy of the practices it supports cannot be guaranteed. Without being tested, its knowledge cannot be generalised for new applications and so the practices it can support will be inefficient. Without being operationalised, testable, and general, the knowledge cannot be developed in any structured way as required for supporting the systematic and intentional progress of the HF discipline.

It would be incorrect to assume the current absence of formality of HF knowledge to be a necessary response to the indeterminism of human behaviour. Both tacit discipline knowledge and ‘trial and error’ practices may simply be symptomatic of the early stage of development of the discipline1. The extent to which human behaviour is deterministic for the purposes of designing interactive computer-based systems needs to be independently established. Only then might it be known if HF discipline knowledge could be formal. Section 1.4. considers what form that knowledge might take, and Section 1.5. considers what might be the process of its formulation.

 

1.4. Human Factors Engineering Principles

HF has been viewed earlier (Section 1.2.) as comparable to other disciplines which address general design problems: for example, Civil Engineering and Health Administration. The nature of the formal knowledge of a future HF discipline might, then, be suggested by examining such disciplines. The general design problems of different disciplines, however, must first be related to their characteristic practices, in order to relate the knowledge supporting those practices. The establishment of this relationship follows.

The ‘design’ disciplines are ranged according to the ‘hardness’ or ‘softness’ of their respective general design problems. ‘Hard’ and ‘soft’ may have various meanings in this context. For example, hard design problems may be understood as those which include criteria for their ‘optimal’ solution (Checkland, 1981). In contrast, soft design problems are those which do not include such criteria. Any solution is assessed as ‘better or worse’ relative to other solutions. Alternatively, the hardness of a problem may be distinguished by its level of description, or the formality of the knowledge available for its specification (Carroll and Campbell, 1986). However, here hard and soft problems will be generally distinguished by their determinism for the purpose, that is, by the need for design solutions to be determinate. In this distinction between problems is implicated: the proliferation of variables expressed in a problem and their relations; the changes of variables and their relations, both with regard to their values and their number; and more generally, complexity, where it includes factors other than those identified. The variables implicated in the HF general design problem are principally those of human behaviours and structures.

A discipline’s practices construct solutions to its general design problem. Consideration of disciplines indicates much variation in their use of specification as a practice in constructing solutions. This variation, however, appears not to be dependent on variations in the hardness of the general design problems. Rather, disciplines appear to differ in the completeness with which they specify solutions to their respective general design problems before implementation occurs. At one extreme, some disciplines specify solutions completely before implementation: their practices may be described as ‘specify then implement’ (an example might be Electrical Engineering). At the other extreme, disciplines appear not to specify their solutions at all before implementing them: their practices may be described as ‘implement and test’ (an example might be Graphic Design). Other disciplines, such as SE, appear characteristically to specify solutions partially before implementing them: their practices may be described as ‘specify and implement’. ‘Specify then implement’, therefore, and ‘implement and test’, would appear to represent the extremes of a dimension by which disciplines may be distinguished by their practices. It is a dimension of the completeness with which they specify design solutions.

1 Such was the history of many disciplines: the origin of modern-day Production Engineering, for example, was a nineteenth-century set of craft practices and tacit knowledge.

Taken together, the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines such as Electrical Engineering and Graphic Design. The space is shown in Figure 2, including for illustrative purposes, the speculative location of SE.

Two conclusions are prompted by Figure 2. First, a general relation may be apparent between the hardness of a general design problem and the realisable completeness with which its solutions might be specified. In particular, a boundary condition is likely to be present beyond which more complete solutions could not be specified for a problem of given hardness. The shaded area of Figure 2 is intended to indicate this condition, termed the ‘Boundary of Determinism’ – because it derives from the determinism of the phenomena implicated in the general design problem. It suggests that whilst very soft problems may only be solved by ‘implement and test’ practices, hard problems may be solved by ‘specify then implement’ practices.

Second, it is concluded from Figure 2 that the actual completeness with which solutions to a general design problem are specified, and the realisable completeness, might be at variance. Accordingly, there may be different possible forms of the same discipline – each form addressing the same problem but with characteristically different practices. With reference to HF then, the contemporary discipline, a craft, will characteristically solve the HF general design problem mainly by ‘implementation and testing’. If solutions are specified at all, they will be incomplete before being implemented. Yet depending on the hardness of the HF general design problem, the realisable completeness of specified solutions may be greater, and a future form of the discipline, with practices more characteristically those of ‘specify then implement’, may be possible. For illustrative purposes, those different forms of the HF discipline are located speculatively in the figure.

Whilst the realisable completeness with which a discipline may specify design solutions is governed by the hardness of the general design problem, the actual completeness with which it does so is governed by the formality of the knowledge it possesses. Consideration of the traditional engineering disciplines supports this assertion. Their modern-day practices are characteristically those of ‘specify then implement’, yet historically, their antecedents were ‘specify and implement’ practices, and earlier still – ‘implement and test’ practices. For example, the early steam engine preceded formal knowledge of thermodynamics and was constructed by ‘implementation and testing’. Yet designs of thermodynamic machines are now relatively completely specified before being implemented, a practice supported by formal knowledge. Such progress, then, has been marked by the increasing formality of knowledge. It is also in spite of the increasing complexity of new technology – an increase which might only have served to make the general design problem more soft, and the boundary of determinism more constraining. The dimension of the formality of a discipline’s knowledge – ranging from experience to principles – is shown in Figure 2 and completes the classification space for design disciplines.

It should be clear from Figure 2 that there exists no pre-ordained relationship between the formality of a discipline’s knowledge and the hardness of its general design problem. In particular, the practices of a (craft) discipline supported by experience – that is, by informal knowledge – may address a hard problem. But also, within the boundary of determinism, that discipline could acquire formal knowledge to support specification as a design practice.

In Section 1.3, four deficiencies of the contemporary HF discipline were identified. The absence of formal discipline knowledge was proposed to account for these deficiencies. The present section has been concerned to examine the potential for HF to develop a more formal discipline knowledge. The potential would appear to be governed by the hardness of the HF general design problem, that is, by the determinism of the human behaviours which it implicates, at least with respect to any solution of that problem. And clearly, human behaviour is, in some respects and to some degree, deterministic. For example, drivers’ behaviour on the roads is determined, at least within the limits required by a particular design solution, by traffic system protocols. A training syllabus determines, within the limits required by a particular solution, the behaviour of the trainees – both in terms of learning strategies and the level of training required. Hence, formal HF knowledge is to some degree attainable. At the very least, it cannot be excluded that the model for that formal knowledge is the knowledge possessed by the established engineering disciplines.

Generally, the established engineering disciplines possess formal knowledge: a corpus of operationalised, tested, and generalised principles. Those principles are prescriptive, enabling the complete specification of design solutions before those designs are implemented (see Dowell and Long, 1988b). This theme of prescription in design is central to the thesis offered here.

Engineering principles can be substantive or methodological (see Checkland, 1981; Pirsig, 1974). Methodological Principles prescribe the methods for solving a general design problem optimally. For example, methodological principles might prescribe the representations of designs specified at a general level of description and procedures for systematically decomposing those representations until complete specification is possible at a level of description of immediate design implementation (Hubka, Andreason and Eder, 1988). Methodological principles would assure each lower level of specification as being a complete representation of an immediately higher level.

Substantive Principles prescribe the features and properties of artefacts, or systems, that will constitute an optimal solution to a general design problem. As a simple example, a substantive principle deriving from Kirchhoff’s Laws might be one which would specify the physical structure of a network design (sources, resistances and their nodes, etc.) whose behaviour (e.g., distribution of current) would constitute an optimal solution to a design problem concerning an amplifier’s power supply.
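To make the notion of a substantive principle concrete, the branch-current calculation mentioned above can be sketched in code. The circuit, its component values, and the function below are hypothetical illustrations, not taken from the paper: a two-loop resistor network analysed by Kirchhoff’s voltage law, the kind of deterministic calculation that lets an electrical design be specified, with assured behaviour, before it is implemented.

```python
# Illustrative sketch (not from the paper): the kind of prescriptive
# calculation a substantive principle based on Kirchhoff's Laws supports.
# Hypothetical two-loop network: a voltage source v drives R1, which
# feeds R2 and R3 in parallel to ground.

def mesh_currents(v, r1, r2, r3):
    """Solve Kirchhoff's voltage law for the two mesh currents.

    Loop 1 (source, R1, R2): v = i1*(r1 + r2) - i2*r2
    Loop 2 (R2, R3):         0 = -i1*r2 + i2*(r2 + r3)
    The 2x2 linear system is solved by Cramer's rule.
    """
    a11, a12 = r1 + r2, -r2
    a21, a22 = -r2, r2 + r3
    det = a11 * a22 - a12 * a21
    i1 = (v * a22 - a12 * 0.0) / det  # current through R1
    i2 = (a11 * 0.0 - a21 * v) / det  # current through R3
    return i1, i2

i1, i2 = mesh_currents(v=12.0, r1=100.0, r2=200.0, r3=300.0)
# Cross-check by series/parallel reduction: R2 || R3 = 120 ohms,
# so i1 = 12 / (100 + 120) amps, and i2 = i1 * R2 / (R2 + R3).
print(round(i1, 6), round(i2, 6))
```

The point of the sketch is the contrast drawn in Section 1.4: because the network’s behaviour is deterministic, the currents are known, and guaranteed, before any hardware exists – ‘specify then implement’ rather than ‘implement and test’.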

 

1.5. The Requirement for an Engineering Conception for Human Factors

The contemporary HF discipline does not possess either methodological or substantive engineering principles. The heuristics it possesses are either ‘rules of thumb’ derived from experience or guidelines derived from psychological theories and findings. Neither guidelines nor rules of thumb offer assurance of their efficacy in any given instance, and particularly with regard to the effectiveness of a design. The methods and models of HF (as opposed to methodological and substantive principles) are similarly without such an assurance. Clearly, any evolution of HF as an engineering discipline in the manner proposed here has yet to begin. There is an immediate need then, for a view of how it might begin, and how formulation of engineering principles might be precipitated.

van Gigch and Pipino (1986) have suggested the process by which scientific (as opposed to engineering) disciplines acquire formal knowledge. They characterise the activities of scientific disciplines at a number of levels, the most general being an epistemological enquiry concerning the nature and origin of discipline knowledge. From such an enquiry a paradigm may evolve. Although a paradigm may be considered to subsume all discipline activities (Long, 1987), it must, at the very least, subsume a coherent and complete definition of the concepts which in this case describe the General (Scientific) Problem of a scientific discipline. Those concepts, and their derivatives, are embodied in the explanatory and predictive theories of science and enable the formulation of research problems. For example, Newton’s Principia commences with an epistemological enquiry, and a paradigm in which the concept of inertia first occurs. The concept of inertia is embodied in scientific theories of mechanics, as for example, in Newton’s Second Law.

Engineering disciplines may be supposed to require an equivalent epistemological enquiry. However, rather than that enquiry producing a paradigm, we may construe its product as a conception. Such a conception is a unitary (and consensus) view of the general design problem of a discipline. Its power lies in the coherence and completeness of its definition of concepts which express that problem. Hence, it enables the formulation of engineering principles which embody and instantiate those concepts. A conception (like a paradigm) is always open to rejection and replacement.

HF currently does not possess a conception of its general design problem. Current views of the issue are ill-formed, fragmentary, or implicit (Shneiderman, 1980; Card, Moran and Newell, 1983; Norman and Draper, 1986). The lack of such a shared view is particularly apparent within the HF research literature, in which concepts are ambiguous and lacking in coherence; those associated with the ‘interface’ (e.g., ‘virtual objects’, ‘human performance’, ‘task semantics’, ‘user error’, etc.) are particular examples of this failure. It is inconceivable that a formulation of HF engineering principles might occur whilst there is no consensus understanding of the concepts which they would embody. Articulation of a conception must then be a pre-requisite for formulation of engineering principles for HF.

The origin of a conception for the HF discipline must be a conception for the HCI discipline itself, the superordinate discipline incorporating HF. A conception (at least in form) as might be assumed by an engineering HCI discipline has been previously proposed (Dowell and Long, 1988a). It supports the conception for HF as an engineering discipline presented in Part II.

In conclusion, Part I has presented the case for an engineering conception for HF. A proposal for such a conception follows in Part II. The status of the conception, however, should be emphasised. First, the conception at this point in time is speculative. Second, the conception continues to be developed in support of, and supported by, the research of the authors. Third, there is no validation in the conventional sense to be offered for the conception at this time. Validation of the conception for HF will come from its being able to describe the design problems of HF, and from the coherence of its concepts, that is, from the continuity of relations, and agreement, between concepts. Readers may assess these aspects of validity for themselves. Finally, the validity of the conception for HF will also rest in its being a consensus view held by the discipline as a whole and this is currently not the case.

Part II. Conception for an Engineering Discipline of Human Factors 

2.1 Conception of the human factors general design problem;

2.2 Conception of work and the user; 2.2.1 Objects and their attributes; 2.2.2 Attributes and levels of complexity; 2.2.3 Relations between attributes; 2.2.4 Attribute states and affordance; 2.2.5 Organisations, domains (of application), and the requirement for attribute state changes; 2.2.6 Goals; 2.2.7 Quality; 2.2.8 Work and the user;

2.3 Conception of the interactive worksystem and the user; 2.3.1 Interactive worksystems; 2.3.2 The user as a system of mental and physical human behaviours; 2.3.3 Human-computer interaction; 2.3.4 On-line and off-line behaviours; 2.3.5 Human structures and the user; 2.3.6 Resource costs and the user;

2.4 Conception of performance of the interactive worksystem and the user;

2.5 Conclusions and the prospect for Human Factors engineering principles

The potential for HF to become an engineering discipline, and so better to respond to the problem of interactive systems design, was examined in Part I. The possibility of realising this potential through HF engineering principles was suggested – principles which might prescriptively support HF design, expressed as ‘specify then implement’. It was concluded that a pre-requisite to the development of HF engineering principles is a conception of the general design problem of HF, which was informally expressed as: ‘to design human interactions with computers for effective working’.
Part II proposes a conception for HF. It attempts to establish the set of related concepts which can express the general design problem of HF more formally. Such concepts would be those embodied in HF engineering principles. As indicated in Section 1.1, the conception for HF is supported by a conception for an engineering discipline of HCI earlier proposed by Dowell and Long (1988a). Space precludes re-iteration of the conception for HCI here, other than as required for the derivation of the conception for HF. Part II first asserts a more formal expression of the HF general design problem which an engineering discipline would address. Part II then continues by elaborating and illustrating the concepts and their relations embodied in that expression.
2.1. Conception of the Human Factors General Design Problem.
The conception for the (super-ordinate) engineering discipline of HCI asserts a fundamental distinction between behavioural systems which perform work, and a world in which work originates, is performed and has its consequences. Specifically conceptualised are interactive worksystems consisting of human and computer behaviours together performing work. It is work evidenced in a world of physical and informational objects disclosed as domains of application. The distinction between worksystems and domains of application is represented schematically in Figure 3.

Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs it incurs. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.

The concern of an engineering HCI discipline would be the design of interactive worksystems for performance. More precisely, its concern would be the design of behaviours constituting a worksystem {S} whose actual performance (Pa) conformed with some desired performance (Pd). And to design {S} would require the design of human behaviours {U} interacting with computer behaviours {C}. Hence, conception of the general design problem of an engineering discipline of HCI is expressed as:

Specify then implement {U} and {C}, such that {U} interacting with {C} = {S}, where Pa = Pd and Pd = fn. {Qd, Kd}

Qd expresses the desired quality of the products of work within the given domain of application; Kd expresses acceptable (i.e., desired) costs incurred by the worksystem, i.e., by both human and computer.
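The formal expression above can be sketched computationally. The following is a minimal illustration only: the `Performance` class, the numeric values, and the conformance rule (treating desired quality and costs as bounds rather than strict equalities) are assumptions for the sketch, not part of the conception itself.

```python
from dataclasses import dataclass

@dataclass
class Performance:
    quality: float  # Q: variance of the actual transform from the product goal (0 = none)
    costs: float    # K: resource costs incurred by the worksystem (human and computer)

def conforms(actual: Performance, desired: Performance) -> bool:
    """Pa conforms with Pd when quality variance and costs are within the desired bounds."""
    return actual.quality <= desired.quality and actual.costs <= desired.costs

desired = Performance(quality=0.1, costs=100.0)  # Pd = fn.{Qd, Kd}
actual = Performance(quality=0.05, costs=80.0)   # Pa, observed for some {U} interacting with {C}
print(conforms(actual, desired))
```

Reading Pd as a bound is one interpretation; the text itself only requires that actual performance conform with desired performance.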

The problem, when expressed as one of ‘specify then implement’ designs of interactive worksystems, is equivalent to the general design problems characteristic of other engineering disciplines (see Section 1.4).

The interactive worksystem can be distinguished as two separate, but interacting, sub-systems: a system of human behaviours interacting with a system of computer behaviours. The human behaviours may be treated as a behavioural system in their own right, but one interacting with the system of computer behaviours to perform work. It follows that the general design problem of HCI may be decomposed with regard to its scope (with respect to the human and computer behavioural sub-systems), giving two related problems. Decomposition with regard to the human behaviours gives the general design problem of the HF discipline as:

Specify then implement {U}, such that {U} interacting with {C} = {S}, where Pa = Pd.

The general design problem of HF then, is one of producing implementable specifications of human behaviours {U} which, interacting with computer behaviours {C}, are constituted within a worksystem {S} whose performance conforms with a desired performance (Pd).

The following sections elaborate the conceptualisation of human behaviours (the user, or users) with regard to the work they perform, the interactive worksystem in which they are constituted, and performance.

 

2.2. Conception of Work and the User

The conception for HF identifies a world in which work originates, is performed and has its consequences. This section presents the concepts by which work and its relations with the user are expressed.

2.2.1 Objects and their attributes

Work occurs in a world consisting of objects and arises in the intersection of organisations and (computer) technology. Objects may be both abstract and physical, and are characterised by their attributes. Abstract attributes of objects are attributes of information and knowledge. Physical attributes are attributes of energy and matter. Letters (i.e., correspondence) are objects; their abstract attributes support the communication of messages etc.; their physical attributes support the visual/verbal representation of information via language.

2.2.2 Attributes and levels of complexity

The different attributes of an object may emerge at different levels within a hierarchy of levels of complexity (see Checkland, 1981). For example, characters and their configuration on a page are physical attributes of the object ‘a letter’ which emerge at one level of complexity; the message of the letter is an abstract attribute which emerges at a higher level of complexity.

Objects are described at different levels of description commensurate with their levels of complexity. However, at a high level of description, separate objects may no longer be differentiated. For example, the object ‘income tax return’ and the object ‘personal letter’ are both ‘correspondence’ objects at a higher level of description. Lower levels of description distinguish their respective attributes of content, intended correspondent etc. In this way, attributes of an object described at one level of description completely re-represent those described at a lower level.

2.2.3 Relations between attributes

Attributes of objects are related, and in two ways. First, attributes at different levels of complexity are related. As indicated earlier, those at one level are completely subsumed in those at a higher level. In particular, abstract attributes will occur at higher levels of complexity than physical attributes and will subsume those lower level physical attributes. For example, the abstract attributes of an object ‘message’ concerning the representation of its content by language subsume the lower level physical attributes, such as the font of the characters expressing the language. As an alternative example, an industrial process, such as a steel rolling process in a foundry, is an object whose abstract attributes will include the process’s efficiency. Efficiency subsumes physical attributes of the process – its power consumption, rate of output, dimensions of the output (the rolled steel), etc – emerging at a lower level of complexity.

Second, attributes of objects are related within levels of complexity. There is a dependency between the attributes of an object emerging within the same level of complexity. For example, the attributes of the industrial process of power consumption and rate of output emerge at the same level and are inter-dependent.

2.2.4 Attribute states and affordance

At any point or event in the history of an object, each of its attributes is conceptualised as having a state. Further, those states may change. For example, the content and characters (attributes) of a letter (object) may change state: the content with respect to meaning and grammar etc; its characters with respect to size and font etc. Objects exhibit an affordance for transformation, engendered by their attributes’ potential for state change (see Gibson, 1977). Affordance is generally pluralistic in the sense that there may be many, or even infinite, transformations of objects, according to the potential changes of state of their attributes.

Attributes’ relations are such that state changes of one attribute may also manifest state changes in related attributes, whether within the same level of complexity, or across different levels of complexity. For example, changing the rate of output of an industrial process (lower level attribute) will change both its power consumption (same level attribute) and its efficiency (higher level attribute).
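The propagation of state changes across related attributes can be sketched as follows. The `RollingProcess` class and its formulas are hypothetical illustrations, not a model drawn from the text; only the qualitative relation (one state change manifesting others, within and across levels) is taken from the conception.

```python
class RollingProcess:
    """Hypothetical steel rolling process: a state change of one attribute
    manifests state changes in related attributes, within and across levels."""

    def __init__(self, rate_of_output: float):
        self.rate_of_output = rate_of_output  # lower-level physical attribute

    @property
    def power_consumption(self) -> float:
        # same-level attribute, inter-dependent with rate of output (assumed formula)
        return 10.0 + 0.5 * self.rate_of_output ** 2

    @property
    def efficiency(self) -> float:
        # higher-level abstract attribute subsuming the physical attributes
        return self.rate_of_output / self.power_consumption

process = RollingProcess(rate_of_output=4.0)
process.rate_of_output = 6.0  # one state change...
# ...and power consumption and efficiency change with it
```

Expressing the dependent attributes as properties keeps the inter-attribute relations explicit: they cannot be set independently of the lower-level state they subsume.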

2.2.5 Organisations, domains (of application), and the requirement for attribute state changes

A domain of application may be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’). The object ‘book’ may be associated with the domain of typesetting (state changes of its layout attributes) and with the domain of authorship (state changes of its textual content). In principle, a domain may have any level of generality, for example, the writing of letters and the writing of a particular sort of letter.

Organisations are conceptualised as having domains as their operational province and of requiring the realisation of the affordance of objects. It is a requirement satisfied through work. Work is evidenced in the state changes of attributes by which an object is intentionally transformed: it produces transforms, that is, objects whose attributes have an intended state. For example, ‘completing a tax return’ and ‘writing to an acquaintance’, each have a ‘letter’ as their transform, where those letters are objects whose attributes (their content, format and status, for example) have an intended state. Further editing of those letters would produce additional state changes, and therein, new transforms.

2.2.6 Goals

Organisations express their requirement for the transformation of objects through specifying goals. A product goal specifies a required transform – a required realisation of the affordance of an object. In expressing the required transformation of an object, a product goal will generally suppose necessary state changes of many attributes. The requirement of each attribute state change can be expressed as a task goal, deriving from the product goal. So for example, the product goal demanding transformation of a letter making its message more courteous, would be expressed by task goals possibly requiring state changes of semantic attributes of the propositional structure of the text, and of syntactic attributes of the grammatical structure. Hence, a product goal can be re-expressed as a task goal structure, a hierarchical structure expressing the relations between task goals, for example, their sequences.
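The re-expression of a product goal as a task goal structure can be sketched with simple data types. The class names, attribute labels, and the flat-list representation of the structure (which the text describes as hierarchical, with sequence relations) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TaskGoal:
    attribute: str     # the attribute whose state change is required
    target_state: str  # the intended state of that attribute

@dataclass
class ProductGoal:
    transform: str                      # the required transform of the object
    task_goals: list = field(default_factory=list)  # the derived task goal structure

# The courteous-letter product goal re-expressed as task goals (hypothetical labels):
courteous = ProductGoal(
    transform="letter with a more courteous message",
    task_goals=[
        TaskGoal("propositional structure (semantic)", "softened propositions"),
        TaskGoal("grammatical structure (syntactic)", "polite constructions"),
    ],
)
```

In a fuller sketch, each `TaskGoal` could itself carry sub-goals and ordering constraints, reflecting the hierarchical structure and sequences the text describes.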

In the case of the computer-controlled steel rolling process, the process is an object whose transformation is required by a foundry organisation and expressed by a product goal. For example, the product goal may specify the elimination of deviations of the process from a desired efficiency. As indicated earlier, efficiency will at least subsume the process’s attributes of power consumption, rate of output, dimensions of the output (the rolled steel), etc. As also indicated earlier, those attributes will be inter-dependent such that state changes of one will produce state changes in the others – for example, changes in rate of output will also change the power consumption and the efficiency of the process. In this way, the product goal (of correcting deviations from the desired efficiency) supposes the related task goals (of setting power consumption, rate of output, dimensions of the output etc). Hence, the product goal can be expressed as a task goal structure and task goals within it will be assigned to the operator monitoring the process.

2.2.7 Quality

The transformation of an object demanded by a product goal will generally be of a multiplicity of attribute state changes – both within and across levels of complexity. Consequently, there may be alternative transforms which would satisfy a product goal – letters with different styles, for example – where those different transforms exhibit differing compromises between attribute state changes of the object. By the same measure, there may also be transforms which will be at variance with the product goal. The concept of quality (Q) describes the variance of an actual transform with that specified by a product goal. It enables all possible outcomes of work to be equated and evaluated.
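The concept of quality as variance between an actual transform and the transform specified by a product goal can be given a toy operationalisation. Measuring variance as the fraction of required attribute states not met is an assumption of this sketch; the conception itself does not prescribe a metric.

```python
def quality(actual: dict, required: dict) -> float:
    """Q: variance of an actual transform from the transform specified by the
    product goal, sketched as the fraction of required attribute states not met
    (0.0 = no variance; 1.0 = no required state achieved)."""
    misses = sum(1 for attribute, state in required.items()
                 if actual.get(attribute) != state)
    return misses / len(required)

required = {"content": "courteous message", "format": "house style"}
print(quality({"content": "courteous message", "format": "draft"}, required))
```

Because every outcome of work maps onto the same scale, alternative transforms (different compromises between attribute state changes) can be equated and evaluated, as the text requires.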

2.2.8 Work and the user

  Conception of the domain then, is of objects, characterised by their attributes, and exhibiting an affordance arising from the potential changes of state of those attributes. By specifying product goals, organisations express their requirement for transforms – objects with specific attribute states. Transforms are produced through work, which occurs only in the conjunction of objects affording transformation and systems capable of producing a transformation.

From product goals derive a structure of related task goals which can be assigned either to the human or to the computer (or both) within an associated worksystem. The task goals assigned to the human are those which motivate the human’s behaviours. The actual state changes (and therein transforms) which those behaviours produce may or may not be those specified by task and product goals, a difference expressed by the concept of quality.

Taken together, the concepts presented in this section support the HF conception’s expression of work as relating to the user. The following section presents the concepts expressing the interactive worksystem as relating to the user.

 

2.3. Conception of the Interactive Worksystem and the User.

The conception for HF identifies interactive worksystems consisting of human and computer behaviours together performing work. This section presents the concepts by which interactive worksystems and the user are expressed.

2.3.1 Interactive worksystems

Humans are able to conceptualise goals, and their corresponding behaviours are said to be intentional (or purposeful). Computers, and machines more generally, are designed to achieve goals, and their corresponding behaviours are said to be intended (or purposive). An interactive worksystem (‘worksystem’) is a behavioural system distinguished by a boundary enclosing all human and computer behaviours whose purpose is to achieve and satisfy a common goal. For example, the behaviours of a secretary and wordprocessor whose purpose is to produce letters constitute a worksystem. Critically, it is only by identifying that common goal that the boundary of the worksystem can be established: entities, and more so humans, may exhibit a range of contiguous behaviours, and only by specifying the goals of concern might the boundary of the worksystem enclosing all relevant behaviours be correctly identified.

Worksystems transform objects by producing state changes in the abstract and physical attributes of those objects (see Section 2.2). The secretary and wordprocessor may transform the object ‘correspondence’ by changing both the attributes of its meaning and the attributes of its layout. More generally, a worksystem may transform an object through state changes produced in related attributes. An operator monitoring a computer-controlled industrial process may change the efficiency of the process through changing its rate of output.

The behaviours of the human and computer are conceptualised as behavioural sub-systems of the worksystem – sub-systems which interact. The human behavioural sub-system is here more appropriately termed the user. Behaviour may be loosely understood as ‘what the human does’, in contrast with ‘what is done’ (i.e., attribute state changes in a domain). More precisely, the user is conceptualised as:

a system of distinct and related human behaviours, identifiable as the sequence of states of a person interacting with a computer to perform work, and corresponding with a purposeful (intentional) transformation of objects in a domain (see also Ashby, 1956).

Although possible at many levels, the user must at least be expressed at a level commensurate with the level of description of the transformation of objects in the domain. For example, a secretary interacting with an electronic mailing facility is a user whose behaviours include receiving and replying to messages. An operator interacting with a computer-controlled milling machine is a user whose behaviours include planning the tool path to produce a component of specified geometry and tolerance.

2.3.2 The user as a system of mental and physical behaviours

The behaviours constituting a worksystem are both physical and abstract. Abstract behaviours are generally the acquisition, storage, and transformation of information. They represent and process information at least concerning: domain objects and their attributes, attribute relations and attribute states, and the transformations required by goals. Physical behaviours are related to, and express, abstract behaviours.

Accordingly, the user is conceptualised as a system of both mental (abstract) and overt (physical) behaviours which extend a mutual influence – they are related. In particular, they are related within an assumed hierarchy of behaviour types (and their control) wherein mental behaviours generally determine, and are expressed by, overt behaviours. Mental behaviours may transform (abstract) domain objects represented in cognition, or express through overt behaviour plans for transforming domain objects.

So for example, the operator working in the control room of the foundry has the product goal of maintaining a desired condition of the computer-controlled steel rolling process. The operator attends to the computer (whose behaviours include the transmission of information about the process). Hence, the operator acquires a representation of the current condition of the process by collating the information displayed by the computer and assessing it by comparison with the condition specified by the product goal. The operator’s acquisition, collation and assessment are each distinct mental behaviours, conceptualised as representing and processing information. The operator reasons about the attribute state changes necessary to eliminate any discrepancy between current and desired conditions of the process, that is, the set of related changes which will produce the required transformation of the process. That decision is expressed in the set of instructions issued to the computer through overt behaviour – making keystrokes, for example.

The user is conceptualised as having cognitive, conative and affective aspects. The cognitive aspects of the user are those of their knowing, reasoning and remembering, etc; the conative aspects are those of their acting, trying and persevering, etc; and the affective aspects are those of their being patient, caring, and assured, etc. Both mental and overt human behaviours are conceptualised as having these three aspects.

2.3.3 Human-computer interaction

Although the human and computer behaviours may be treated as separable sub-systems of the worksystem, those sub-systems extend a “mutual influence”, or interaction whose configuration principally determines the worksystem (Ashby, 1956).

Interaction is conceptualised as: the mutual influence of the user (i.e., the human behaviours) and the computer behaviours associated within an interactive worksystem.

Hence, the user {U} and computer behaviours {C} constituting a worksystem {S}, were expressed in the general design problem of HF (Section 2.1) as: {U} interacting with {C} = {S}

Interaction of the human and computer behaviours is the fundamental determinant of the worksystem, rather than their individual behaviours per se. For example, the behaviours of an operator interact with the behaviours of a computer-controlled milling machine. The operator’s behaviours influence the behaviours of the machine, perhaps through the tool path program; the behaviours of the machine, perhaps the run-out of its tool path, influence the selection behaviour of the operator. The configuration of their interaction – the inspection that the machine allows the operator, the tool path control that the operator allows the machine – determines the worksystem that the operator and machine behaviours constitute in their planning and execution of the machining work.

The assignment of task goals then, to either the human or the computer delimits the user and therein configures the interaction. For example, replacement of a mis-spelled word required in a document is a product goal which can be expressed as a task goal structure of necessary and related attribute state changes. In particular, the text field for the correctly spelled word demands an attribute state change in the text spacing of the document. Specifying that state change may be a task goal assigned to the user, as in interaction with the behaviours of early text editor designs, or it may be a task goal assigned to the computer, as in interaction with the ‘wrap-round’ behaviours of contemporary wordprocessor designs. The assignment of the task goal of specification configures the interaction of the human and computer behaviours in each case; it delimits the user.

2.3.4 On-line and off-line behaviours

The user may include both on-line and off-line human behaviours: on-line behaviours are associated with the computer’s representation of the domain; off-line behaviours are associated with non-computer representations of the domain, or the domain itself.

As an illustration of the distinction, consider the example of an interactive worksystem consisting of behaviours of a secretary and a wordprocessor and required to produce a paper-based copy of a dictated letter stored on audio tape. The product goal of the worksystem here requires the transformation of the physical representation of the letter from one medium to another, that is, from tape to paper. From the product goal derive the task goals relating to required attribute state changes of the letter. Certain of those task goals will be assigned to the secretary. The secretary’s off-line behaviours include listening to, and assimilating, the dictated letter, so acquiring a representation of the domain directly. By contrast, the secretary’s on-line behaviours include specifying the representation by the computer of the transposed content of the letter in a desired visual/verbal format of stored physical symbols.

On-line and off-line human behaviours are a particular case of the ‘internal’ interactions between a human’s behaviours as, for example, when the secretary’s typing interacts with memorisations of successive segments of the dictated letter.

2.3.5 Human structures and the user

  Conceptualisation of the user as a system of human behaviours needs to be extended to the structures supporting behaviour.

Whereas human behaviours may be loosely understood as ‘what the human does’, the structures supporting them can be understood as ‘how they are able to do what they do’ (see Marr, 1982; Wilden, 1980). There is a one-to-many mapping between a human’s structures and the behaviours they might support: the same structures may support many different behaviours.

In co-extensively enabling behaviours at each level, structures must exist at commensurate levels. The human structural architecture is both physical and mental, providing the capability for a human’s overt and mental behaviours. It provides a representation of domain information as symbols (physical and abstract) and concepts, and the processes available for the transformation of those representations. It provides an abstract structure for expressing information as mental behaviour. It provides a physical structure for expressing information as physical behaviour.

Physical human structure is neural, bio-mechanical and physiological. Mental structure consists of representational schemes and processes. Corresponding with the behaviours it supports and enables, human structure has cognitive, conative and affective aspects. The cognitive aspects of human structures include information and knowledge – that is, symbolic and conceptual representations – of the domain, of the computer and of the person themselves, and the ability to reason. The conative aspects of human structures motivate the implementation of behaviour and its perseverance in pursuing task goals. The affective aspects of human structures include the personality and temperament which respond to and support behaviour.

To illustrate the conceptualisation of mental structure, consider the example of structure supporting an operator’s behaviours in the foundry control room. Physical structure supports perception of the steel rolling process and executing corrective control actions to the process through the computer input devices. Mental structures support the acquisition, memorisation and transformation of information about the steel rolling process. The knowledge which the operator has of the process and of the computer supports the collation, assessment and reasoning about corrective control actions to be executed.

The limits of human structure determine the limits of the behaviours it might support. Such structural limits include those of: intellectual ability; knowledge of the domain and the computer; memory and attentional capacities; patience; perseverance; dexterity; and visual acuity etc. The structural limits on behaviour may become particularly apparent when one part of the structure (a channel capacity, perhaps) is required to support concurrent behaviours, perhaps simultaneous visual attending and reasoning behaviours. The user then, is ‘resource’ limited by the co-extensive human structure.

The behavioural limits of the human determined by structure are not only difficult to define with any kind of completeness, they will also be variable because that structure can change, and in a number of respects. A person may have self-determined changes in response to the domain – as expressed in learning phenomena, acquiring new knowledge of the domain, of the computer, and indeed of themselves, to better support behaviour. Also, human structure degrades with the expenditure of resources in behaviour, as evidenced in the phenomena of mental and physical fatigue. It may also change in response to motivating or de-motivating influences of the organisation which maintains the worksystem.

It must be emphasised that the structure supporting the user is independent of the structure supporting the computer behaviours. Neither structure can make any incursion into the other, and neither can directly support the behaviours of the other. (Indeed this separability of structures is a pre-condition for expressing the worksystem as two interacting behavioural sub-systems.) Although the structures may change in response to each other, they are not, unlike the behaviours they support, interactive; they are not included within the worksystem. The combination of structures of both human and computer supporting their interacting behaviours is conceptualised as the user interface.

2.3.6 Resource costs of the user

Work performed by interactive worksystems always incurs resource costs. Given the separability of the human and the computer behaviours, certain resource costs are associated directly with the user and distinguished as structural human costs and behavioural human costs.

Structural human costs are the costs of the human structures co-extensive with the user. Such costs are incurred in developing and maintaining human skills and knowledge. More specifically, structural human costs are incurred in training and educating people, so developing in them the structures which will enable their behaviours necessary for effective working. Training and educating may augment or modify existing structures, provide the person with entirely novel structures, or perhaps even reduce existing structures. Structural human costs will be incurred in each case and will frequently be borne by the organisation. An example of structural human costs might be the costs of training a secretary in the particular style of layout required for an organisation’s correspondence with its clients, and in the operation of the computer by which that layout style can be created.

Structural human costs may be differentiated as cognitive, conative and affective structural costs of the user. Cognitive structural costs express the costs of developing the knowledge and reasoning abilities of people, and their ability for formulating and expressing novel plans in their overt behaviour, as necessary for effective working. Conative structural costs express the costs of developing the activity, stamina and persistence of people as necessary for effective working. Affective structural costs express the costs of developing in people their patience, care and assurance as necessary for effective working.

Behavioural human costs are the resource costs incurred by the user (i.e., by human behaviours) in recruiting human structures to perform work. They are both physical and mental resource costs. Physical behavioural costs are the costs of physical behaviours, for example, the costs of making keystrokes on a keyboard and of attending to a screen display; they may be expressed without differentiation as physical workload. Mental behavioural costs are the costs of mental behaviours, for example, the costs of knowing, reasoning, and deciding; they may be expressed without differentiation as mental workload. Mental behavioural costs are ultimately manifest as physical behavioural costs.

When differentiated, mental and physical behavioural costs are conceptualised as the cognitive, conative and affective behavioural costs of the user. Cognitive behavioural costs relate to both the mental representing and processing of information, and the demands made on the individual’s extant knowledge, as well as the physical expression thereof in the formulation and expression of a novel plan. Conative behavioural costs relate to the repeated mental and physical actions and effort required by the formulation and expression of the novel plan. Affective behavioural costs relate to the emotional aspects of the mental and physical behaviours required in the formulation and expression of the novel plan. Behavioural human costs are evidenced in human fatigue, stress and frustration; they are costs borne directly by the individual.

 

2.4. Conception of Performance of the Interactive Worksystem and the User.

In asserting the general design problem of HF (Section 2.1.), it was reasoned that:

“Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem, and the costs incurred by it. Quality and cost are the primary constituents of the concept of performance through which effectiveness is expressed.”

This statement followed from the distinction between interactive worksystems performing work, and the work they perform. Subsequent elaboration upon this distinction enables reconsideration of the concept of performance, and examination of its central importance within the conception for HF.

Because the factors which constitute this engineering concept of performance (i.e. the quality and costs of work) are determined by behaviour, a concordance is assumed between the behaviours of worksystems and their performance: behaviour determines performance (see Ashby, 1956; Rouse, 1980). The quality of work performed by interactive worksystems is conceptualised as the actual transformation of objects with regard to their transformation demanded by product goals. The costs of work are conceptualised as the resource costs incurred by the worksystem, and are separately attributed to the human and computer. Specifically, the resource costs incurred by the human are differentiated as: structural human costs – the costs of establishing and maintaining the structure supporting behaviour; and behavioural human costs – the costs of the behaviour recruiting structure to its own support. Structural and behavioural human costs were further differentiated as cognitive, conative and affective costs.

A desired performance of an interactive worksystem may be conceptualised. Such a desired performance might either be absolute, or relative as in a comparative performance to be matched or improved upon. Accordingly, criteria expressing desired performance may either specify categorical gross resource costs and quality, or they may specify critical instances of those factors to be matched or improved upon.

Discriminating the user’s performance within the performance of the interactive worksystem would require the separate assimilation of human resource costs and their achievement of desired attribute state changes demanded by their assigned task goals. Further assertions concerning the user arise from the conceptualisation of worksystem performance. First, the conception of performance is able to distinguish the quality of a transform from the effectiveness of the worksystems which produce it. This distinction is essential as two worksystems might be capable of producing the same transform, yet if one were to incur a greater resource cost than the other, its effectiveness would be the lesser of the two systems.

Second, given the concordance of behaviour with performance, optimal human (and equally, computer) behaviours may be conceived as those which incur a minimum of resource costs in producing a given transform. Optimal human behaviour would minimise the resource costs incurred in producing a transform of given quality (Q). However, that optimality may only be categorically determined with regard to worksystem performance, and the best performance of a worksystem may still be at variance with the performance desired of it (Pd). To be more specific, it is not sufficient for human behaviours simply to be error-free. Although the elimination of errorful human behaviours may contribute to the best performance possible of a given worksystem, that performance may still be less than desired performance. Conversely, although human behaviours may be errorful, a worksystem may still support a desired performance.
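The comparison of two worksystems producing the same transform at different resource costs can be loosely illustrated in code. This is a toy sketch only, under assumed names and numbers that are not part of the authors' formalism: performance is treated as a pair of transform quality and gross resource costs, and one worksystem counts as more effective than another when it achieves at least the same quality at a lower total cost.

```python
# Illustrative sketch only: the paper's conception of performance is
# qualitative. Here we assume a toy model in which a worksystem's
# performance is a (quality, resource-cost) tuple. All class names,
# fields and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Performance:
    quality: float         # actual vs demanded transformation of domain objects (0..1)
    human_costs: float     # structural + behavioural human resource costs
    computer_costs: float  # computer resource costs

    @property
    def total_costs(self) -> float:
        return self.human_costs + self.computer_costs

def more_effective(a: Performance, b: Performance) -> bool:
    """True if worksystem a is more effective than b: at least the
    same transform quality, achieved at lower total resource cost."""
    return a.quality >= b.quality and a.total_costs < b.total_costs

# Two worksystems producing the same transform (same quality)...
ws1 = Performance(quality=0.9, human_costs=3.0, computer_costs=2.0)
ws2 = Performance(quality=0.9, human_costs=6.0, computer_costs=2.0)

# ...differ in effectiveness because ws2 incurs greater resource costs.
assert more_effective(ws1, ws2)
```

Under the same toy model, "optimal" human behaviour would simply be whichever behaviour minimises `human_costs` for a given `quality`; nothing in the sketch guarantees that this minimum matches a desired performance (Pd), which is the paper's point.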

Third, the common measures of human ‘performance’ – errors and time – are related in this conceptualisation of performance. Errors are behaviours which increase the resource costs incurred in producing a given transform, or which reduce the quality of the transform, or both. The duration of human behaviours may (very generally) be associated with increases in behavioural user costs.

Fourth, structural and behavioural human costs may be traded-off in performance. More sophisticated human structures supporting the user, that is, the knowledge and skills of experienced and trained people, will incur high (structural) costs to develop, but enable more efficient behaviours – and therein, reduced behavioural costs.

Fifth, resource costs incurred by the human and the computer may be traded-off in performance. A user can sustain a level of performance of the worksystem by optimising behaviours to compensate for the poor behaviours of the computer (and vice versa), i.e., behavioural costs of the user and computer are traded-off. This is of particular concern for HF as the ability of humans to adapt their behaviours to compensate for poor computer-based systems often obscures the low effectiveness of worksystems.
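The user–computer cost trade-off can likewise be sketched with a deliberately simple toy model (an assumption for illustration only; the function and numbers are hypothetical): the worksystem sustains the same quality of work whether the computer supports the user well or poorly, but the user's behavioural costs differ greatly, which is precisely how low worksystem effectiveness gets obscured.

```python
# Toy illustration (assumed model, not from the paper): worksystem
# quality is sustained when the user compensates for a poorer computer
# by incurring extra behavioural costs -- so measured quality alone
# can hide low worksystem effectiveness.

def worksystem_quality(user_effort: float, computer_support: float) -> float:
    # Assume quality saturates at 1.0 once combined effort meets demand.
    return min(1.0, (user_effort + computer_support) / 10.0)

good_computer = worksystem_quality(user_effort=4.0, computer_support=6.0)
poor_computer = worksystem_quality(user_effort=8.0, computer_support=2.0)

# Same quality of work either way...
assert good_computer == poor_computer == 1.0
# ...but with the poor computer the user bears twice the behavioural cost.
```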

This completes the conception for HF. From the initial assertion of the general design problem of HF, the concepts that were invoked in its formal expression have subsequently been defined and elaborated, and their coherence established.

 

2.5. Conclusions and the Prospect for Human Factors Engineering Principles

Part I of this paper examined the possibility of HF becoming an engineering discipline and specifically, of formulating HF engineering principles. Engineering principles, by definition prescriptive, were seen to offer the opportunity for a significantly more effective discipline, ameliorating the problems which currently beset HF – problems of poor integration, low efficiency, efficacy without guarantee, and slow development.

A conception for HF is a pre-requisite for the formulation of HF engineering principles. It is the concepts and their relations which express the HF general design problem and which would be embodied in HF engineering principles. The form of a conception for HF was proposed in Part II. Originating in a conception for an engineering discipline of HCI (Dowell and Long, 1988a), the conception for HF is postulated as appropriate for supporting the formulation of HF engineering principles.

The conception for HF is a broad view of the HF general design problem. Instances of the general design problem may include the development of a worksystem, or the utilisation of a worksystem within an organisation. Developing worksystems which are effective, and maintaining the effectiveness of worksystems within a changing organisational environment, are both expressed within the problem. In addition, the conception takes the broad view on the research and development activities necessary to solve the general design problem and its instantiations, respectively. HF engineering research practices would seek solutions, in the form of (methodological and substantive) engineering principles, to the general design problem. HF engineering practices in systems development programmes would seek to apply those principles to solve instances of the general design problem, that is, to the design of specific users within specific interactive worksystems. Collaboration of HF and SE specialists and the integration of their practices is assumed.

Notwithstanding the comprehensive view of determinacy developed in Part I, the intention of specification associated with people might be unwelcome to some. Yet, although the requirement for design and specification of the user is being unequivocally proposed, techniques for implementing those specifications are likely to be more familiar than perhaps expected – and possibly more welcome. Such techniques might include selection tests, aptitude tests, training programmes, manuals and help facilities, or the design of the computer.

A selection test would assess the conformity of a candidate’s behaviours with a specification for the user. An aptitude test would assess the potential for a candidate’s behaviours to conform with a specification for the user. Selection and aptitude tests might assess candidates either directly or indirectly. A direct test would observe candidates’ behaviours in ‘hands on’ trial periods with the ‘real’ computer and domain, or with simulations of the computer and domain. An indirect test would examine the knowledge and skills (i.e., the structures) of candidates, and might be in the form of written examinations. A training programme would develop the knowledge and skills of a candidate as necessary for enabling their behaviours to conform with a specification for the user. Such programmes might take the form of either classroom tuition or ‘hands on’ learning. A manual or on-line help facility would augment the knowledge possessed by a human, enabling their behaviours to conform with a specification for the user. Finally, the design of the computer itself, through the interactions of its behaviours with the user, would enable the implementation of a specification for the user.

To conclude, discussion of the status of the conception for HF must be briefly extended. The contemporary HF discipline was characterised as a craft discipline. Although it may alternatively be claimed as an applied science discipline, such claims must still admit the predominantly craft nature of systems development practices (Long and Dowell, 1989). No instantiations of the HF engineering discipline implied in this paper are visible, and examples of supposed engineering practices may be readily associated with craft or applied science disciplines. There are those, however, who would claim the craft nature of the HF discipline to be dictated by the nature of the problem it addresses. They may maintain that the indeterminism and complexity of the problem of designing human systems (the softness of the problem) precludes the application of formal and prescriptive knowledge. This claim was rejected in Part I on the grounds that it mistakes the current absence of formal discipline knowledge as an essential reflection of the softness of its general design problem. The claim fails to appreciate that this absence may rather be symptomatic of the early stage of the discipline’s development. The alternative position taken by this paper is that the softness of the problem needs to be independently established. The general design problem of HF is, to some extent, hard – human behaviour is clearly to some useful degree deterministic – and certainly sufficiently deterministic for the design of certain interactive worksystems. It may accordingly be presumed that HF engineering principles can be formulated to support product quality within a systems development ethos of ‘design for performance’.

The extent to which HF engineering principles might be realisable in practice remains to be seen. It is not supposed that the development of effective systems will never require craft skills in some form, and engineering principles are not seen to be incompatible with craft knowledge, particularly with respect to their instantiation (Long and Dowell, 1989). At a minimum, engineering principles might be expected to augment the craft knowledge of HF professionals. Yet the great potential of HF engineering principles for the effectiveness of the discipline demands serious consideration. However, their development would only be by intention, and would be certain to demand a significant research effort. This paper is intended to contribute towards establishing the conception required for the formulation of HF engineering principles.

References

Ashby W. Ross, (1956), An Introduction to Cybernetics. London: Methuen.

Bornat R. and Thimbleby H., (1989), The Life and Times of ded, Text Display Editor. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Card, S. K., Moran, T., and Newell, A., (1983), The Psychology of Human Computer Interaction, New Jersey: Lawrence Erlbaum Associates.

Carey, T., (1989), Position Paper: The Basic HCI Course For Software Engineers. SIGCHI Bulletin, Vol. 20, no. 3.

Carroll J.M., and Campbell R. L., (1986), Softening up Hard Science: Reply to Newell and Card. Human Computer Interaction, Vol. 2, pp. 227-249.

Checkland P., (1981), Systems Thinking, Systems Practice. Chichester: John Wiley and Sons.

Cooley M.J.E., (1980), Architect or Bee? The Human/Technology Relationship. Slough: Langley Technical Services.

Didner R.S., (1988), A Value Added Approach to Systems Design. Human Factors Society Bulletin, May 1988.

Dowell J., and Long J. B., (1988a), Human-Computer Interaction Engineering. In N. Heaton and M. Sinclair (ed.s), Designing End-User Interfaces. A State of the Art Report. 15:8. Oxford: Pergamon Infotech.

Dowell, J., and Long, J. B., 1988b, A Framework for the Specification of Collaborative Research in Human Computer Interaction, in UK IT 88 Conference Publication 1988, pub. IEE and BCS.

Gibson J.J., (1977), The Theory of Affordances. In R.E. Shaw and J. Branford (ed.s), Perceiving, Acting and Knowing. New Jersey: Erlbaum.

Gries D., (1981), The Science of Programming, New York: Springer Verlag.

Hubka V., Andreason M.M. and Eder W.E., (1988), Practical Studies in Systematic Design, London: Butterworths.

Long J.B., Hammond N., Barnard P. and Morton J., (1983), Introducing the Interactive Computer at Work: the Users’ Views. Behaviour And Information Technology, 2, pp. 39-106.

Long, J., (1987), Cognitive Ergonomics and Human Computer Interaction. In P. Warr (ed.), Psychology at Work. England: Penguin.

Long J.B., (1989), Cognitive Ergonomics and Human Computer Interaction: an Introduction. In J.B. Long and A.D. Whitefield (ed.s), Cognitive Ergonomics and Human Computer Interaction. Cambridge: Cambridge University Press.

Long J.B. and Dowell J., (1989), Conceptions of the Discipline of HCI: Craft, Applied Science, and Engineering. In Sutcliffe A. and Macaulay L., Proceedings of the Fifth Conference of the BCS HCI SG. Cambridge: Cambridge University Press.

Marr D., (1982), Vision. New York: W.H. Freeman and Co.

Morgan D.G., Shorter D.N. and Tainsh M., (1988), Systems Engineering. Improved Design and Construction of Complex IT Systems. Available from IED, Kingsgate House, 66-74 Victoria Street, London, SW1.

Norman D.A. and Draper S.W. (eds) (1986): User Centred System Design. Hillsdale, New Jersey: Lawrence Erlbaum.

Pirsig R., 1974, Zen and the Art of Motorcycle Maintenance. London: Bodley Head.

Rouse W. B., (1980), Systems Engineering Models of Human Machine Interaction. New York: Elsevier North Holland.

Shneiderman B. (1980): Software Psychology: Human Factors in Computer and Information Systems. Cambridge, Mass.: Winthrop.

Thimbleby H., (1984), Generative User Engineering Principles for User Interface Design. In B. Shackel (ed.), Proceedings of the First IFIP conference on Human-Computer Interaction. Human-Computer Interaction – INTERACT’84. Amsterdam: Elsevier Science. Vol.2, pp. 102-107.

van Gigch J. P. and Pipino L.L., (1986), In Search of a Paradigm for the Discipline of Information Systems, Future Computing Systems, 1 (1), pp. 71-89.

Walsh P., Lim K.Y., Long J.B., and Carver M.K., (1988), Integrating Human Factors with System Development. In: N. Heaton and M. Sinclair (eds): Designing End-User Interfaces. Oxford: Pergamon Infotech.

Wilden A., 1980, System and Structure; Second Edition. London: Tavistock Publications.

This paper has greatly benefited from discussion with others and from their criticisms. We would like to thank our colleagues at the Ergonomics Unit, University College London and in particular, Andy Whitefield, Andrew Life and Martin Colbert. We would also like to thank the editors of the special issue for their support and two anonymous referees for their helpful comments. Any remaining infelicities – of specification and implementation – are our own.


1985/87 Steve Howard

Sadly, Steve died in 2013, well before his time. However, his wife Anna was so positive about Steve’s time at UCL and so keen for him to figure in the Reflections, that she wrote a contribution on his behalf. I was, of course, delighted and it appears here. I guessed at Steve’s likely response to the initial questions, with help from his colleagues and friends and especially Frank Vetere (see below).

Date of MSc: 1986/1987

 

MSc Project Title: Interface Design for a Medical Demonstration System: a case study in designing a software user interface.

Pre MSc Background: Practical engineering experience and a degree in psychology.

Pre MSc View of HCI/Cognitive Ergonomics: Aware of applied psychology; but no detailed knowledge of, or exposure to, HCI/Cognitive Ergonomics


Post MSc View of HCI/Cognitive Ergonomics: Aware in some detail of HCI/Cognitive Ergonomics as evidenced by his MSc project.

Subsequent to MSc View of HCI/ Cognitive Ergonomics: 
Steve’s view can be inferred from his subsequent career, as described by Frank Vetere, ex-student and colleague: Steve went on to study (PhD), teach and research HCI/Cognitive Ergonomics, as evidenced by his publications. Soon after joining Swinburne University of Technology in 1990, Steve chaired OZCHI’94 and laid the foundations for what is now an important regional conference. He established SCHIL (the Swinburne Computer-Human Interaction Laboratory) and continued to strengthen Australian HCI by being technical co-chair at OZCHI in 1997 and INTERACT in 2000. In 2000, Steve moved to the University of Melbourne, where he immediately started the Interaction Design Group. Under his leadership, the group soon grew to become almost half the department, and a powerhouse of HCI research in Australia. From 2007-2010, Steve was the head of department of Information Systems. In 2009, he was appointed full professor and in 2011 became the inaugural director of the Melbourne School of Information. Steve’s contribution to HCI was recognised in 2008, when he was awarded the CHISIG medal for service to the Australian HCI community.
In just over 12 years at Melbourne University, his achievements were indeed extraordinary, but they were never at the cost of his integrity, his humanity or his generosity. His achievements were infused with a deep respect for the value of people as human beings. 

Additional Reflections:

 

Anna Howard writes: Steve Howard, at the age of 17 years, realised mid-way through a four year fitter and turner apprenticeship with British Nuclear Fuels, based at Capenhurst, Chester, UK, that he really did not want to spend his working life with his arms immersed in cold oil. BUT where to??

Having been told by his high school teachers, aged 16, that he would never amount to much and so might as well leave school, he attempted to make his own choices; but, rejected by MANWEB, his suggestion of a future in carpentry was dismissed out of hand by his Father. His guitar playing was not good enough to join Bob Dylan and the Band, so the only acceptable option was either to join the Army or to get an apprenticeship, which was the tradition in Steve’s family. Especially lucky were lads whose Dads were already employed in places where apprenticeships were available. Steve’s Father arranged a visit to his place of employment, Capenhurst, where he was a fireman.

It was obvious Steve did have an interest in, and most importantly an understanding of, engineering, so not surprisingly he did well and was considered a good apprentice. An example of his skill is now a very treasured piece of ‘artwork’ in our home.

 

Jump forward to 1986 and the day Steve started at UCL; his fortuitous meeting with the ‘main man’ Prof. JL, a very grown up experience; something like going from our mini to a VW Passat, once we became parents!!

From the moment Steve set foot in the place, we knew he was in the right space for his emerging interests. He found in his fellow students a wide span of experiences and social levels, which opened up his mind to the possibilities of an academic career making new age technology  (which he himself had struggled with at undergraduate level) easier to comprehend. Rachael’s expertise was a great comfort as Steve found himself overwhelmed at times, struggling with so many unknowns.

Family and friends would ask “What is Steve doing?” I would answer Ergonomics. “Oh yes, we have heard of that, it is a Scandinavian design for cars and furniture”!!

Steve had a hard route to climb; but climb it he did. Our love for each other enabled Steve to rise to a position unimaginable, when we met in 1982.

Being at the start of a ‘new’ discipline was exciting; but the challenges enormous, not least the daily journey across London by tube and train to NPL Teddington (his sponsor), with sudden bouts of serious anxiety attacks. Pivotal in his progress was the steady guidance of his sponsor Dianne Murray.

 

January 1990, there was an early morning job interview conference call from our home in Walsall to Melbourne, Australia. Coincidentally, it was Steve’s rather unique additional apprenticeship qualification alongside his UCL MSc that got him the position at Swinburne University, then known as Swinburne Institute of Technology.

From 1999 to his untimely death in April 2013, he was at a place he felt to be the very essence of UCL, Melbourne University, where he had unlimited opportunity in research and revelled in the opportunities to promote his passion HCI.

UCL was never far from Steve’s thoughts and he spoke of its influence on his thinking and of Prof JL, often remembering lectures, field visits and the pathology lab at The Royal Free Hospital with clarity!

Every academic institution has a significant part to play in our exciting, fast-shrinking world, and for future students to have access to excellence it is important there be an archive where information on the development of a subject is available; so I am pleased UCL is taking this initiative in addition to this website.

Steve answered some of the questions; but remained excited by the on-going development of a subject at the very core of what it is to be human. Questioning.

 

 


1999/2000: Simon Li

 Date of MSc: 1999/2000

 

MSc Project Title:

 Navigation in Small Screen Devices

 

Pre-MSc Background:  

BSc Psychology

 

Pre-MSc View of HCI/Cognitive Ergonomics:

I had no idea of how knowledge from psychology could be applied to the design of technology, but was very eager to find out. At the time, it all sounded a bit like magic – by understanding the human (psychology) we can better design computer systems that are user-friendly. Also, I did not think (at all) about the distinction between science and engineering, but I have always viewed psychology (especially cognitive psychology) and computer science as sciences; and I guess my pre-MSc view of HCI was that it was an applied science.

 

Post-MSc View of HCI/Cognitive Ergonomics:

So, I discovered then that HCI is not magic. I had trouble in making the link between what we know about psychology and the design of technology. It was the HCI methods (e.g. task analysis, evaluation techniques, etc.) that were more useful when it came to designing or redesigning technology – experience from the design week and the MSc project.

I was slowly seeing the distinction between science and engineering but couldn’t get my head round the conception of an engineering discipline for HCI. I still saw HCI as an applied science because of the HCI psychology (e.g. GOMS) that we had learnt about.

 

Subsequent-to-MSc View of HCI/Cognitive Ergonomics:

My view of HCI as a discipline changed from applied science to engineering – this happened when I did my PhD at UCLIC (the successor of the EU). My research interest in HCI has always been in the scientific knowledge of HCI but the distinction between the science, design and engineering components of HCI has helped me to build my own HCI identity.

 

Additional Reflections:

I am not sure if I (will ever) understand the conception of HCI but it has helped me to build my HCI identity and see HCI as a discipline in its own right. I still have fond memories of the lectures on the D+L conception – they are very good at stirring debates (sometimes quite heated too) among some students during pub hours after lectures. We often get two main groups of students in the after-lecture pub sessions: one group would be the passionate (or argumentative) students who want to try to understand the conception by having a debate, or who think they understand and just want to debate about it. The other group would be students who just do not see the relevance of it or simply do not want to try to understand; their reactions to having such a debate in a pub usually comprise expressions like “oh no”, “can we talk about something else?”, etc. Therefore, to the students, the lectures on the conception of HCI are a bit like Marmite – you either love it or hate it.

 

 

 

 


Old Papers Never Die – They Only Fade Away……

 

 

Interacting with the Computer: a Framework

 

John Long Comment 1

The title remains an appropriate one. However, given its subsequent references to: ‘domains’; ‘applications’; ‘application domains’; ‘tasks’ etc, it must be assumed that the interaction is: ‘to do something’; ‘to perform tasks’; ‘to achieve aims or goals’; or some such. Further modeling of such domains/applications, beyond that of text processing, would be required for any re-publication of the paper and in the light of advances in computing technology – see earlier. The issue is pervasive – see also Comments 6, 35, 37, 40 and 41.

 

Comment 2

‘A Framework’ is also considered to be appropriate. Better than ‘a conception’, which promises greater completeness, coherence and fitness-for-purpose (unless, of course, these criteria are explicitly taken on-board). However, the Framework must explicitly declare and own its purpose, as later set out in the paper and referenced in Figure 1. See also Comments 15, 19, 27, and 42.

J. Morton, P. Barnard, N. Hammond* and J.B. Long

M.R.C. Applied Psychology Unit, Cambridge, England *also IBM Scientific Centre, Peterlee, England

Recent technological advances in the development of information processing systems will inevitably lead to a change in the nature of human-computer interaction.

Comment 3

‘Recent technological advances’ in 1979 centred around the personal, as opposed to the main-frame, computer. To-day there are a plethora of advances in computing technology – see Commentary Introduction earlier for a list of examples. A re-publication of the paper would require a major up-date to address these new applications, as well as their associated users. Any up-date would need to include additional models and tools for such address, as well as an assessment of the continued suitability of the models and tools, proposed in the ’79 paper.

Direct interactions with systems will no longer be the sole province of the sophisticated data processing professional or the skilled terminal user. In consequence, assumptions underlying human-system communication will have to be re-evaluated for a broad range of applications and users. The central issue of the present paper concerns the way in which this re-evaluation should occur.

First of all, then, we will present a characterisation of the effective model which the computer industry has of the interactive process.

Comment 4

We contrasted our ’79 models/theories with a single computer industry’s model. To-day, there are many types of HCI model/theory. A recent book on the subject listed 9 types of ‘Modern Theories’ and 6 types of ‘Contemporary Theories’ (Rogers, 2012). The ‘industry model’ has, of course, itself evolved and now takes many forms (Harper et al., 2008). Any re-publication of the ’79 paper would have to specify both with which HCI models/theories it wished to be contrasted and with what current industry models.

The shortcoming of the model is that it fails to take proper account of the nature of the user and as such cannot integrate, interpret, anticipate or palliate the kinds of errors which the new user will resent making. For remember that the new user will avoid error by adopting other means of gaining his ends, which can lead either to non-use or to monstrously inefficient use. We will document some user problems in support of this contention and indicate the kinds of alternative models which we are developing in an attempt to meet this need.

The Industry’s Model (IM)

The problem we see with the industry’s model of the human-computer interaction is that it is computer-centric. In some cases, as we shall see, it will have designer-centric aspects as well.

Comment 5

In 1979, all design was carried out by software engineers. Since then, many other professionals have become involved – initially psychologists, then HCI-trained practitioners, graphic designers, ethnomethodologists, technocratic artists etc. However, most design (as opposed to user requirements gathering or evaluation) is still performed by software engineers. Any re-publication of this paper would have to identify the different sorts of design activity, to assess their relative contribution to computer- and designer-centricity respectively and the form of support appropriate to each, which it might offer  – see also Comments 14, 15, 21 (iv) and (v), 27 and 41 (iv).

 

To start off with, consider a system designed to operate in a particular domain of activity.

Comment 6

Any re-published paper would have to develop further the concept of ‘domain’ (see Comment 1). The development would need to address: 1. The computer’s version of the domain and its display thereof. There is no necessary one-to-one relationship (consider the pilot alarm systems in the domain of air traffic management). Software engineer designers might specify the former and HCI designers the latter; and 2. To what extent the domain is an ‘image of the world and its resources’. See Comments 1, 35, 37, 40 and 41.

In the archetypal I.M. the database is neutralised in much the same kind of way that a statistician will ritually neutralise the data on which he operates, stripping his manipulation of any meaning other than the purely numerical one his equations impose upon the underlying reality. This arises because the only version of the domain which exists at the interface is that one which is expressed in the computer. This version, perhaps created by an expert systems analyst on the best logical grounds and the most efficient, perhaps, for the computations which have to be performed, becomes the one to which the user must conform. This singular and logical version of the domain will, at best, be neutral from the point of view of the user. More often it will be an alien creature, isolating the user and mocking him with its image of the world and its resources to which he must haplessly conform.

Florid language? But listen to the user talking.

Comment 7

The ’79 user data are now quite out of date, both in terms of their content, means of acquisition and associated technology, compared with more recent data. However, current user-experience continues to have much in common with that of the past. Up-dated data are required to confirm this continuity.

“We come into contact with computer people, a great many of whom talk a very alien language, and you have constant difficulty in trying to sort out this kind of mid-Atlantic jargon.”

“We were slung towards what in my opinion is a pretty inadequate manual and told to get on with it”

“We found we were getting messages back through the terminal saying there’s not sufficient space on the machine. Now how in Hell’s name are we supposed to know whether there’s sufficient space on the machine?”

In addition the industry’s model does not really include the learning process; nor does it always take adequate note of individual’s abilities and experience:

“Documentation alone is not sufficient; there needs to be the personal touch as well.”

“Social work being much more of an art than a science then we are talking about people who are basically not very numerate beginning to use a machine which seems to be essentially numerate.”

Even if training is included in the final package it is never in the design model. Is there anyone here, who, faced with a design choice asked the questions “Which option will be the easiest to describe to the naive user? Which option will be easiest to understand? Which option will be easiest to learn and remember?”

Comment 8

Naive users, of course, continue to exist today. However, there are many more types of user than naive and professional of interest to current HCI researchers. Differences exist between users of associated technologies (robotic versus ambient); from different demographics (old versus young); at different stages of development (nursery versus teenage children); from different cultures (developed versus less developed); etc. These different types of user would need some consideration in any re-publication.

Let us note again the discrepancy between the I.M. view of error and ours. For us, errors are an indication of something wrong with the system or an indication of the way in which training should proceed. In the I.M., errors are an integral part of the interaction. For the onlooker, the most impressive part of a D.P. interaction is not that it is error free but that the error recovery procedures are so well practised that it is difficult to recognise them for what they are.

Comment 9

As well as this important distinction concerning errors, they need to be related to ‘domains’, ‘applications’ and ‘effectiveness’ or ‘performance’, and not just to user (or indeed computer) behaviour. See Comment 6 earlier and Comments 35, 36, 37 and 38 later.

Errorless performance may not be acceptable (consider air traffic expedition). Errorful behaviour may be acceptable (consider some e-mail errors). A re-published ’79 paper would have to take an analytic/technical (that is, Framework-grounded) view of error and not just a simple adoption of users’ (lay-language) expression. This problem is ubiquitous in HCI, both past and present.

We would not want it thought that we felt the industry was totally arbitrary. There are a number of natural guiding principles which most designers would adhere to. See also Comment 16.

Comment 10

We contrast here two types of principle which designers might adhere to: 1. IM principles, as ‘intuitive, non-systematic, not totally arbitrary’; and 2. our proposed principles, as ‘systematic’. In the light of this contrast, we need to set out clearly: 1. What and how are our principles ‘systematic’? and 2. How does this systematicity guarantee/support better design?

Note that in Figure 1 later, there is an ‘output to system designers’. Is this output expressed in (systematic) principles? If not, what would be its form of expression? Any form of expression would raise the same issues raised earlier for ‘systematic principles’.

We do not anticipate meeting a system in which the command DESTROY has the effect of preserving the information currently displayed while PRESERVE has the effect of erasing the operating system. However, the principles employed are intuitive and non-systematic. Above all they make the error of embodying the belief that just as there can only be one appropriate representation of the domain, so there is only one kind of human mind.

A nice example of a partial use of language constraints is provided by a statistical package called GENSTAT. This package permits users to have permanent userfiles and also temporary storage in a workfile. The set of commands associated with these facilities is:

PUT – copies from core to workfile

GET – copies from workfile to core

FILE – defines a userfile

SAVE – copies from workfile to userfile

FETCH – copies from userfile to workfile

The commands certainly have the merit that they have the expected directionality with respect to the user. However, to what extent do, for example, FETCH and GET relate naturally to the functions they have been assigned? No doubt the designers have strong intuitions about these assignments. So do users, and the two do not concur. We asked 40 people here at the A.P.U. which way round they thought the assignments should go: 19 of these agreed with the system designers; 21 went the other way. The confidence levels of rationalisations were very convincing on both sides!
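
The underlying structure can be made explicit by modelling the three storage tiers and the four transfer commands as moves between tiers (a minimal Python sketch, not GENSTAT itself; FILE is omitted since it defines a userfile rather than transferring data). It shows why intuitions split: GET and FETCH have identical directionality, and nothing in the verbs themselves fixes which pair of tiers each applies to.

```python
# Sketch of the GENSTAT storage model described above (illustrative only).
# Tiers are ordered from most permanent to the user's working space.
TIERS = ["userfile", "workfile", "core"]

# Each transfer command is a (source, destination) pair of tiers.
COMMANDS = {
    "PUT":   ("core", "workfile"),
    "GET":   ("workfile", "core"),
    "SAVE":  ("workfile", "userfile"),
    "FETCH": ("userfile", "workfile"),
}

def direction(cmd):
    """+1 if the command moves data toward the user (toward core), -1 away."""
    src, dst = COMMANDS[cmd]
    return TIERS.index(dst) - TIERS.index(src)

# GET and FETCH share the same direction; only the tier pair differs,
# so the verb alone cannot tell a user which tier pair is meant.
assert direction("GET") == direction("FETCH") == 1
assert direction("PUT") == direction("SAVE") == -1
```

The directionality the designers got right is thus captured by `direction`; the arbitrary part, which split our respondents, is the assignment of verbs to tier pairs.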

The problem, then, is not just that systems tend to be designer-centric but that the designers have the wrong model either of the learning process or of the non-D.P. user’s attitude toward error. A part-time user is going to be susceptible to memory failure and, in particular, to interference from outside the computer system. du Boulay and O’Shea [1] note that naive users can use THEN in the sense of ‘next’ rather than as ‘implies’. This is inconceivable to the IM, for THEN is almost certainly a full homonym for most D.P. professionals, and the appropriate meaning thoroughly context-determined.

Comment 11

The GENSTAT example was so good for our purposes that it has taken considerable reflection to wonder whether there really is a natural language solution which would avoid memory failure and/or interference. It is certainly not obvious.

The alternative would be to add information to a menu or somesuch (rather like in our example). But this is just the sort of solution IM software engineers might propose. Where would that leave any ‘systematic principles’? – see Comment 10 earlier.

An Alternative to the Industry Model

The central assumption for the system of the future will be ‘systems match people’ rather than ‘people match systems’. Not entirely so, as we shall elaborate, for in principle, the capacity and perspectives of the user with respect to a task domain could well change through interaction with a computer system.

Comment 12

In general, the aims of the alternative to the IM promise well. The mismatch, however, seems to be expressed at a more abstract level than that of the ‘task domain’ – the ‘alien creature, isolating the user and mocking him with its image of the world and its resources to which he must haplessly conform’ – see earlier in the paper. Suppose the mismatch is at this specific level: where does this leave, for example, the natural language mismatch? Of course, we could characterise domain-specific mismatches, for example the contrasting references to ambient environment in air- and sea-traffic management, although for professional, not for naive, users. Such mismatches would require a form of domain model absent from the original paper. However, the same issue arises in the domains of letter writing and planning by means of ‘to do’ lists. Either way, the application domain mismatch needs to be addressed, along with that of natural language.

 

But the capacity to change is more limited than the variety available in the system.

Comment 13

The contrast ‘personal versus mainframe computer’ and the parallel contrast ‘occasional/naive versus professional user’ served us very well in ’79. But the explosion of new computing technology (see Comment 3 earlier) and associated users requires a more refined set of contrasts. There are, of course, still occasional naive users; but these are mainly in the older population and constitute a modest percentage of current users. However, with demographic changes and a longer-living older population, it would not be an uninteresting sub-set of all present users. A re-publication, which wanted to restrict its range in the manner of the ’79 paper, might address ‘older users’ and domestic/personal computing technology. An interesting associated domain might be ‘computer-supported co-operative social and health care’. We could be our own ‘subjects, targets, researchers, and designers’, as per Figure 1 later.

Our task, then, is to characterise the mismatch between man and computer in such a way that permits us to direct the designer’s effort.

Comment 14

‘Directing the designer’s effort’ are strong words and need to be linked to the notion and guarantee of principles – see Comment 10 and Figure 1’s ‘output to designers’. Such direction of design needs to be aligned with scientific/applied-scientific or engineering aims (see Comments 15 and 18).

In doing this we are developing two kinds of tool, conceptual and empirical. These interrelate within an overall scheme for researching human-computer interaction as shown in Figure 1.

 

Comment 15

Figure 1 raises many issues:

1. Empirical studies require their own form of conceptualisation, for example: ‘problems’; ‘variables’; ‘tasks’ etc. These concepts would need specification before they could be conceptualised in the form of multiple models and operationalised for system designers.

2. What is the relationship between ‘hypothesis’ and the theories/knowledge of Psychology? Would the latter inform the former? If so, how exactly? This remains an endemic problem for applied science (see Kuhn, 1970).

3. Are ‘models’, as represented here, in some (or any) sense Psychological theories or knowledge? The point needs to be clarified – see also Comment 15 (1) earlier.

4. What might be the ‘output to system designers’ – guidelines; principles; systematic heuristics; hints and tips; novel design solutions; methods; education/training etc? See also Comment 14.

5. How is the ‘output to system designers’ to be validated? There is no arrow back to either ‘models’ or ‘working hypotheses’. At the very least, validation requires: conceptualisation; operationalisation; test; and generalisation. But with respect to what – hypotheses for understanding phenomena or with respect to designing artefacts?

 

Relating Conceptual and Empirical Tools

Comment 16

The relationship between conceptual and empirical tools and their illustration reads like engineering. In ’79, I thought that we were doing ‘applied science’ (following in the footsteps of Donald Broadbent, the MRC/APU’s director in 1979). The distinction between engineering and applied science needs clarification in any republished version of the original paper.

Interestingly enough, Card, Moran and Newell (1983) claimed to be doing ‘engineering’. Their primary models were the Human Information Processing (HIP) Model and the Goals, Operators, Methods and Selection rules (GOMS) Model. There is some interesting overlap with some of our multiple models; but also important differences. One option for a republished paper would be to keep to the ’79 multiple models. An alternative option would be to augment HIP and GOMS with the ’79 multiple models (or vice versa), to offer a (more) complete expression of either approach taken separately.

 

The conceptual tools involve the development of a set of analytic frameworks appropriate to human computer interaction. The empirical tools involve the development of valid test procedures both for the introduction of new systems and the proving of the analytic tools. The two kinds of tool are viewed as fulfilling functions comparable to the role of analytic and empirical tools in the development of technology. They may be compared with the analytic role of physics, metallurgy and aerodynamics in the development of aircraft on the one hand and the empirical role of a wind tunnel in simulating flight on the other hand.

Empirical Tools

The first class of empirical tool we have employed is the observational field study, with which we aim to identify some of the variables underlying both the occasional user’s perceptions of the problems he encounters in the use of a computer system, and the behaviour of the user at the terminal itself.

Comment 17

Observational field studies have undergone considerable development since ’79. Many have become ethnomethodological studies, intended to understand the context of use; others have become front-ends to user-centred design methodologies, intended to be conducted in parallel with those of software engineering. Neither sort of development is addressed by our original paper. Both raise numerous issues, including: the mutation of lay-language into technical language; the relationship between user opinions/attitudes and behaviour; the relationship between the simulation of domains of application and experimental studies; and the integration of multiple variables into design.

The opinions cited above were obtained in a study of occasional users discussing the introduction and use of a system in a local government centre [2]. The discussions were collected using a technique which is particularly free from observer influence [3].

In a second field study we obtained performance protocols by monitoring users while they solved a predefined set of problems using a data base manipulation language [4]. We recorded both terminal performance and a running commentary which we asked the user to make, and wedded these to the state of the machine to give a total picture of the interaction. The protocols have proved to be a rich source of classes of user problem from which hypotheses concerning the causes of particular types of mismatch can be generated.

Comment 18

HCI has never given the concept of ‘classes of user problem’ the attention that it deserves. Clearly, HCI has a need for generality (see Comment 10, concerning (systematic) principles with their implications of generalisation). Of course, generalising over user problems is critical; but so, more comprehensively, is generalising over ‘design problems’. The latter might express the ineffectiveness of users interacting with computers to perform tasks (or somesuch). The original paper does not really say much about generalisation – its conceptualisation; operationalisation; test; and – taken together – validation. Any republication would have to rise to this challenge.

Comment 19

The concept of ’cause’ here is redolent of science, for example, as in Psychology. See also Comment 18, as concerns phenomena and Comment 15 for a contrast with engineering. Science and engineering are very different disciplines. Any re-publication would have to address this difference and to locate the multiple models and their application with respect to it.

 

There is thus a close interplay between these field studies, the generation of working hypotheses and the development of the conceptual frameworks. We give some extracts from this study in a later section.

Comment 20

This claim would hold for both a scientific (or applied scientific) and an engineering endeavour. See also Comments 15 and 18 earlier. However, both would be required to align themselves with Figures 1 and 2 of the original paper. 

A third type of empirical tool is used to test specific predictions of the working hypothesis.

Comment 21

The testing of predictions (which, in conjunction with the explanation of phenomena, constitutes understanding) suggests the notion of science (see Comments 18 and 19). This can be contrasted with the prescription of design solutions (which, in conjunction with the diagnosis of design problems, constitutes the design of artefacts), that is, engineering (see Comment 15). The difference concerning the purpose of the multiple models needs clarification.

The tool is a multi-level interactive system which enables the experimenter to simulate a variety of user interfaces, and is capable of modeling and testing a wide range of variables [5]. It is based on a code-breaking task in which users perform a variety of string-manipulation and editing functions on coded messages.

It allows the systematic evaluation of notational, semantic and syntactic variables. Among the results to be extensively reported elsewhere is that if there is a common argument in a set of commands, each of which takes two arguments, then the common argument must come first for greatest ease of use. Consistency of argument order is not enough: when the common argument consistently comes second no advantage is obtained relative to inconsistent ordering of arguments [6].
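
The common-argument finding can be illustrated with two hypothetical command sets (a sketch only: the command and argument names are invented for illustration; just the ordering principle comes from the experiment reported above). Each command takes two arguments, one of which, the message being worked on, is common to all commands.

```python
# Design A: the common argument comes first -- every command opens with
# the same uniform frame, which the result above found easiest to use.
design_a = [
    "REPLACE message old_string",
    "INSERT  message new_string",
    "DELETE  message target",
]

# Design B: the common argument consistently comes second. Ordering is
# still consistent, yet the experiment found no advantage over
# inconsistent ordering: consistency alone is not enough.
design_b = [
    "REPLACE old_string message",
    "INSERT  new_string message",
    "DELETE  target     message",
]

# In Design A the slot after the verb is always the common argument:
assert all(line.split()[1] == "message" for line in design_a)
# In Design B the common argument is uniformly last, but the slot the
# user must fill first (the variable argument) now follows the verb:
assert all(line.split()[2] == "message" for line in design_b)
```

The sketch makes the contrast concrete: what matters is not consistency as such, but that the shared, already-known element opens the command frame.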

Comment 22

The 2-argument example is persuasive on the face of it; but is it a ‘principle’ (see Comment 10) and might it appear in the ‘output to designers’ (Figure 1 and Comment 15(4))? If so, how is its domain independence established? This point raises again the issue of generalisation – see also Comment 17.

Conceptual Tools

Since we conceive the problem as a cognitive one, the tools are from the cognitive sciences.

Comment 23

The claim is in no way controversial. However, it raises the question of whether the interplay between these cognitive tools and the working hypotheses (see Figure 1) also contributes to Cognitive Science (that is, Psychology). See also Comment 15(3). Such a contribution would be in addition to the ‘output to designers’ of Figure 1.

 

Also we define the problem as one with those users who would be considered intellectually and motivationally qualified by any normal standards. Thus we do not admit as a potential solution that of finding “better” personnel, or simply paying them more, even if such a solution were practicable.

Comment 24

If ‘design problem’ replaced ‘user problem’ (see also Comment 18), then better personnel and/or better pay might indeed contribute to the design (solution) of the design problem. The two types of problem, that is, design problem and user problem, need to be related and grounded in the Framework. The latter, for example, might be conceptualised as a sub-set of the former. Either way, additional conceptualisation of the Framework is required. See also Comment 18.

The cognitive incompatibility we describe is qualitative not quantitative and the mismatch we are looking for is one between the user’s concept of the system structure and the real structure: between the way the data base is organised in the machine and the way it is organised in the head of the user: the way in which system details are usually encountered by the user and his preferred mode of learning.

The interaction of human and computer in a problem-solving environment is a complex matter and we cannot find sufficient theory in the psychological literature to support our intuitive needs. We have found it necessary to produce our own theories, drawing mainly on the spirit rather than the substance of established work.

Comment 25

It sounds like our ‘own’ theories are indeed psychological theories (or would be, if constructed). See also Comments 21 and 23.

Further than this, it is apparent that the problem is too complex for us to be able to use a single theoretical representation.

Comment 26

Decomposition (as in multiple models) is a well-tried and trusted solution to complexity. However, re-integration will be necessary at some stage and for some purpose. Understanding (Psychology) and design of artefacts (HCI) would be two such (different) purposes. They need to be distinguished. See also Comment 15(5).

The model should not only be appropriate for design, it should also give a means of characterising errors – so as to understand their origins and enable corrective measures to be taken.

Comment 27

What characterises a ‘model appropriate for design’? (See also Comment 15(4) and (5).) Design would have to be conceptualised for this purpose. Features might be derived from field studies of designer practice (see Figure 1); but a conceptualisation would not be ‘given’; it would have to be constructed (in the manner of the models). This construction would be a non-trivial undertaking. But how else could models be assured to be fit for (design) purpose? (See also Comment 14.)

Take the following protocol.

The user is asked to find the average age of entries in the block called PEOPLE.

“I’ll have a go and see what happens” types: *T <-AVG(AGE,PEOPLE)

machine response: AGE – UNSET BLOCK

“Yes, wrong, we have an unset block. So it’s reading AGE as a block, so if we try AGE and PEOPLE the other way round maybe that’ll work.”

This is very easy to diagnose and correct. The natural language way of talking about the target of the operation is mapped straight into the argument order. The cure would be to reverse the argument order for the function AVG to make it compatible.

Comment 28

Natural language here is used both to diagnose ‘user problems’ and to propose solutions to those problems. Natural language, however, does not appear in the paper as a model, as such. Its extensive nature in psychology/linguistics would prohibit such inclusion. Further, there are many theories of natural language and no agreement as to their state of validation (or rejection). However, the model appears as a block in the BIM (see Figure 2). The model/representation, of course, might be intuitive, in the form and practice of lay-language, which we all possess. However, such intuitions would also be available to software engineers and would not distinguish systematic from non-systematic principles (see Comment 10). The issue would need to be addressed in any re-publication of the ’79 paper.

 

The next protocol is more obscure. The task is the same as in the preceding one.

“We can ask it (the computer) to bring to the terminal the average value of this attribute.”

types: *T <-AVG(AGE)

machine response: AVG(AGE) – ILLEGAL NAME

“And it’s still illegal… (…) I’ve got to specify the block as well as the attribute name.”

Well of course you have to specify the block. How else is the machine going to know what you’re talking about? A very natural I.M. response. How can we be responsible for feeble memories like this?

However, a more careful diagnosis reveals that the block PEOPLE is the topic of the ‘conversation’ in any case.

Comment 29

Is ‘topic of conversation’, as used here, an intuition derived from lay-language, or a sub-set of some natural language theory derived from Psychology/Linguistics? This is a good example of the issue raised by Comment 28. The same question could be asked of the use of ‘natural language conventions’, which follows next.

 

The block has just been used and the natural language conventions are quite clear on the point.

We have similar evidence for the importance of human-machine discourse structures from the experiment using the code-breaking task described above. Command strings seem to be more ‘cognitively compatible’ when the subject of discourse (the common argument) is placed before the variable argument. This is perhaps analogous to the predisposition in sentence expression for stating information which is known or assumed before information which is new [7]. We are currently investigating this influence of natural language on command string compatibility in more detail.

Comment 30

These natural language interpretations and the associated argumentation remain both attractive and plausible. However, command languages in general (except among programmers) have fallen out of favour. Given the concept of the domain of application/tasks and the requirements of the Goal Structure Model, some addition to the natural language model would likely be required for any re-publication of the ’79 paper. Some relevance-related, plan-based speech act theory might commend itself in this case.

 

The Block Interaction Model

Comment 31

The BIM remains a very interesting and challenging model and was (and remains) ahead of its time. For example, the very inclusion of the concept of domain (as a hospital; jobs in an employment agency etc); but, in addition, the associated representations of the user, the computer and the workbase. Thirty-four years later, HCI researchers are still ‘trying to pick the bits/blocks out of that’ in complex domains such as air traffic and emergency services management. Further development of the BIM in the form of more completely modeled exemplars would be required by any republished paper.

Systematic evidence from empirical studies, together with experience of our own, has led us to develop a conceptual analysis of the information in the head of the user (see Figure 2). Our aim with one form of analysis is to identify as many separable kinds of knowledge as possible and chart their actual or potential interactions with one another. Our convention here is to use a block diagram with arrows indicating potential forms of interference. This diagram enables us to classify and thus group examples of interference so that they can be counteracted in a coordinated fashion rather than piecemeal. It also enables us to establish a framework within which to recognise the origin of problems which we haven’t seen before. Figure 2 is a simplified form of this model. The blocks with double boundaries, connected by double lines, indicate the blocks of information used by the ideal user. The other lines indicate prime classes of interference. [Figure 2 block labels include: representation of domain; representation of work-base version of domain; representation of problem.] The terminology we have used is fairly straightforward:

Domain – the range of the specific application of a system. This could be a hospital, a city’s buildings, or a set of knowledge such as jobs in an employment agency.

Objects – the elements in the particular data base. They could be a relational table or patients’ records.

Operations – the computer routines which manipulate the objects.

Labels – the letter sequences which activate operators and which, together with arguments and syntax, constitute the commands.

Work base – in general, people using computer systems for problem solving have had experience of working in a non-computerised work environment, either preceding the computerisation or at least in parallel with the computer system. The representation of this experience we call the work-base version.
There will be overlap between this and the user’s representation of the computer’s version of the domain; but there will be differences as well, and these differences we would count as potential sources of interference. There may be differences in the underlying structure of the data in the two cases, for example, and there will certainly be differences in the objects used. Thus a user found to be indulging in complex checking procedures after using the command FILE turned out to be perplexed that the material filed was still present on the screen. With pieces of paper, things which are filed actually go there rather than being copied. Here are some examples of interference from one of our empirical studies [4]:

Interference on the syntax from other languages. Subject inserts necessary blanks to keep the strings a fixed length.

“Now that’s Matthewson, that’s 4,7, 10 letters, so I want 4 blanks”

types: A+<:S:NAME = ‘MATTHEWSON ‘:>PEOPLE

Generalised interference

“Having learned how reasonably well to manipulate one system, I was presented with a totally different thing which takes months to learn again.”

Interference of other machine characteristics on machine view

“I’m thinking that the bottom line is the line I’m actually going to input. So I couldn’t understand why it wasn’t lit up at the bottom there, because when you’re doing it on (another system) it’s always the bottom line.”

Comment 32

These examples do not do justice to the BIM – see Comment 31. More complete and complex illustrations are required.

The B.I.M. can be used in two ways. We have illustrated its utility in pinpointing the kinds of interference which can occur from inappropriate kinds of information. We could look at the interactions in just the opposite way and seek ways of maximising the benefits of overlap. This is, of course, the essence of ‘cognitive compatibility’ which we have already mentioned. Trivially, the closer the computer version of the domain maps onto the user’s own version of the domain, the better. What is less obvious is that any deviations should be systematic where possible.

Comment 33

In complex domains (see Comment 31), the user’s own model is almost always implicit. Modeling that representation is itself non-trivial. A re-published paper would have to make at least a good stab at it.

In the same way, it is pointless to design half the commands so that they are compatible with the natural language equivalents and use this as a training point if the other half, for no clear reason, deviate from the principle. If there are deviations then they should form a natural sub-class or the compatibility of the other commands will be wasted.

Information Structures

In the block interaction model we leave the blocks ill-defined as far as their content is concerned. Note that we have used individual examples from user protocols, as well as general principles, in justifying and expanding upon the distinctions we find necessary. What we fail to do in the B.I.M. is to characterise the sum of knowledge which an individual user carries around with him or brings to bear upon the interaction. We have a clear idea of cognitive compatibility at the level of an individual. If this idea is to pay off, then these structures must be specified in more detail.

There is no single way of talking about information structures. At one extreme there is the picture of the user’s knowledge as it apparently reveals itself in the interaction; the view, as it were, that the terminal has of its interlocutor. From this point of view the motivation for any key press is irrelevant. This is clearly a gross oversimplification.

The next stage can be achieved by means of a protocol. In it we would wish to separate out those actions which spring from the user’s concept of the machine and those actions which were a result of his being forced to do something to keep the interaction going. This we call ‘heuristic behaviour’. It can take the form of guessing that the piece of information which is missing will be consistent with some other system or machine: “If in doubt, assume that it is Fortran” would be a good example of this. The user can also attempt to generalise from aspects of the current system he knows about. One example from our study was where the machine apparently failed to provide what the user expected. In fact it had, but the information was not what he had expected. The system was ready for another command, but the user thought it was in some kind of pending state, waiting with the information he wanted. In certain other states – in particular where a command had produced a result which filled up the screen – he had had to press the ENTER key, in that case to clear the screen. The user then over-generalised from this to the new situation and pressed the ENTER key again, remarking

“Try pressing ENTER again and see what happens.”

We would not want to count the user’s behaviour in this sequence as representing his knowledge of the system – either correct knowledge or incorrect knowledge. He had to do something and couldn’t think of anything else. When the heuristic behaviour is eliminated, we are left with a set of information relevant to the interaction. With respect to the full, ideal set of such information, this will be deficient at the points at which the user had to trust to heuristic behaviour.

Comment 34

The concept of ‘heuristic behaviour’ has never received the attention that it deserves in HCI research, although it must be recognised that much user interactive behaviour is of this kind. The proliferation of new interactive technologies (see Comment 3) is likely to increase this type of behaviour, as users attempt to generalise across technologies. A re-published paper would have to relate the dimension of heuristic behaviour better to that of correctness, both with respect to user knowledge and user behaviour.

Note that it will also contain incorrect information as well as correct; all of it would be categorised by the user as what he knew – if not all with complete confidence, certainly with more confidence than his heuristic behaviour. The thing which is missing from the B.I.M. and I.S. is any notion of the dynamics of the interaction. We find we need three additional notations at the moment to do this. One of these describes the planning activity of the user, one charts the changes in state of user and machine, and one looks at the general cognitive processes which are mobilised.

Comment 35

The list of models required, in addition to the B.I.M. and the I.S. is comprehensive – planning, user-machine state changes, and cognitive processes. However, it might be argued that yet another model is required – one which maps the changes of the domain as a function of the user-computer interactive behaviours. The domain can be modeled as object-attribute-state (or value) changes, resulting from user-computer behaviours, supported respectively by user-computer structures. Such models currently exist and could be exploited by any re-published paper.

Goal Structure Model

The user does some preparatory work before he presses a key. He must formulate some kind of plan, however rudimentary. This plan can be represented, at least partially, as a hierarchical organisation. At the top might be goals such as “Solve problem p” and at the bottom “Get the computer to display Table T”. The Goal Structure model will show the relationships among the goals.
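The hierarchical organisation described above can be sketched as a simple goal tree. Only the top goal ("Solve problem P") and a bottom goal ("Get the computer to display Table T") are given in the text; the intermediate goals below are illustrative assumptions added to show the structure.

```python
# A minimal sketch of a Goal Structure Model as a tree of goals.
# Intermediate goal names are illustrative, not from the original paper.

class Goal:
    def __init__(self, name, subgoals=None):
        self.name = name
        self.subgoals = subgoals or []

    def leaves(self):
        """Bottom-level goals – the ones eventually realised as key presses."""
        if not self.subgoals:
            return [self.name]
        return [leaf for g in self.subgoals for leaf in g.leaves()]

plan = Goal("Solve problem P", [
    Goal("Inspect the data", [
        Goal("Get the computer to display Table T"),
    ]),
    Goal("Edit the document", [
        Goal("Open the file"),
        Goal("Make the changes"),
        Goal("Accept the edited text"),
    ]),
])

print(plan.leaves())
```

The point of such a representation is that the relationships among goals become explicit, so that the user's structuring of the task can be compared with the structuring the computer imposes.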

Comment 36

The G.S.M. is a requirement for designing human-computer interactions. However, it needs in turn to be related to the domain model (see Comments 31, 32 and 33). In the example, the document in the G.S.M. is transformed by the interactive user-computer behaviours from 'unedited' to 'edited'. Any hierarchy in the G.S.M. must take account of any other type of hierarchy, for example a 'natural' one, represented in the domain model (see also Comment 35). The whole issue of so-called situated plans à la Suchman would have to be addressed and seriously re-assessed (see also Comment 37).

This can be compared with the way of structuring the task imposed by the computer. For example, a user's concept of editing might lead to the goal structure:

 

Comment 37

HCI research has never recovered from losing the baby with the bath-water, following Suchman's proposals concerning so-called 'situated actions'. Using the G.S.M., a re-published paper could bring some much-needed order to the concepts of planning. Even the simple examples provided here make clear that such ordering is possible.

Two problems would arise here. Firstly, the new file has to be opened at an 'unnatural' place. Secondly, the acceptance of the edited text changes from being part of the editing process to being part of the filing process.

The goal structure model, then, gives us a way of describing such structural aspects of the user's performance and the machine's requirements. Note that such goals might be created in advance or at the time a node is evaluated; the relationship of the G.S.M. to real time is therefore not simple.

The technique for determining the goal structure may be as simple as asking the user "What are you trying to do right now, and why?" This may be sufficient to reveal procedures which are inappropriate for the program being used.

Comment 38

Complex domain models, for example of air traffic management and control, would require more sophisticated elicitation procedures than simple user questioning. User knowledge, supporting highly skilled and complex tasks, is notoriously difficult to pin down, given its implicit nature. So-called 'domain experts' would be a possible substitute, but that approach raises problems of its own (for example, when experts disagree). A re-published paper would at least have to recognise this problem.

State Transition Model

In the course of an interaction with a system, a number of changes take place in the state of the machine. At the same time, the user's perception of the machine state is changing. It will happen that the user misjudges the effect of one command and thereafter enters others which, from an outside point of view, seem almost random. Our point is, as before, that the interaction can only be understood from the point of view of the user.

 

Comment 39

The S.T.M. needs in turn to be related to the domain model (see Comments 31 and 35). These required linkages raise the whole issue of multiple-model re-integration (see also Comment 26).

This brings us to the third of the dynamic aspects of the interaction: the progress of the user as he learns about the system.

Comment 40

As with the case of 'heuristic behaviour', HCI research has never treated seriously enough the issue of 'user learning'. Most experiments record only initial engagement with an application, or at best limited exposure. Observational studies sometimes do better. We are right to claim that users learn (and attempt to generalise). Designers, of course, are doing the same thing, which results in (at least) two moving targets. Given our emphasis on 'cognitive mismatch' and the associated concept of 'interference', we need to be able to address the issue of user learning in a convincing manner, at least for the purposes in hand.

 

Let us explore some ways of representing such changes. Take first of all the state of the computer. This change is a result of user actions and can thus be represented as a sequence of Machine States (M.S.) consequent on user action.

If the interaction is error-free, changes in the representations would follow changes in the machine states in a homologous manner. Errors will occur when the actual machine state does not match its representation.
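The mismatch condition just described can be sketched by tracing two state sequences side by side: the machine's actual transitions and the user's represented transitions. The state labels follow the paper's MS1, MS2, … notation; the transition tables themselves are illustrative assumptions based on the LIST/ENTER example that follows.

```python
# A minimal sketch of a State Transition Model check: the machine's
# actual state sequence is compared, step by step, with the user's
# representation of it. Transition tables are illustrative assumptions.

machine_transitions = {("MS1", "LIST"): "MS2",    # command echoed, OUTPUT flag set
                       ("MS2", "ENTER"): "MS3"}   # block list displayed

user_transitions = {("MS1", "LIST"): "MS3"}       # user expects the listing at once

def run(transitions, state, actions):
    """Trace a state sequence; None marks a transition unknown to this model."""
    trace = [state]
    for a in actions:
        state = transitions.get((state, a))
        trace.append(state)
    return trace

actions = ["LIST"]
actual = run(machine_transitions, "MS1", actions)
believed = run(user_transitions, "MS1", actions)

# A divergence between the two traces predicts an error point.
mismatches = [i for i, (m, u) in enumerate(zip(actual, believed)) if m != u]
print(mismatches)
```

Here the traces diverge at the first step after LIST: the machine waits in MS2 with the OUTPUT flag showing, while the user's representation has already reached MS3 – exactly the situation in the transcript below.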

Comment 41

At some stage, and for some purpose, the S.T.M. surely needs to be related to the G.S.M. (and/or the domain model). Such a relationship would raise a number of issues, for example 'errors' (see Comment 9) and the need to integrate multiple models (see also Comments 26 and 39).

We will now look at errors made by a user of an interactive data enquiry system. We will see errors which reveal either inadequate knowledge of the particular machine state or inadequate knowledge of the actions governing transitions between states. The relevant components of the machine are the information on the terminal display and the state of a flag shown at the bottom right-hand corner of the display, which informs the user of some aspects of the machine state (ENTER … or OUTPUT …). In addition, there is a prompt, "?", which indicates that the keyboard is free to be used, and there is a key labelled ENTER. In the particular example the user wishes to list the blocks of data he has in his workspace. The required sequence of machine states and actions is:

 

The machine echoes the command and waits with OUTPUT flag showing.

User: "Nothing happening. We've got an OUTPUT there in the corner. I don't know what that means."

The user had no knowledge of MS2: we can hypothesise his representation of the transition to be:

 

This is the result of an overgeneralisation. Commands are obeyed immediately if the result is short, but not if the result is block data, whatever its size. The point of this is that the data may otherwise wipe everything from the screen. With block data the controlling program has no lookahead to check the size and must itself simply demand the block, putting itself in the hands of some other controlling program. We see here, then, a case where the user needs to have some fairly detailed and otherwise irrelevant information about the workings of the system in order to make sense of (as opposed to learn by rote) a particular restriction.

The user was told how to proceed: he types ENTER, and the list of blocks is displayed together with the next prompt. However, further difficulties arise because the list of blocks includes only one name and the user is expecting a longer listing. Consequently he misconstrues the state of the machine (continuing from the previous example):

User types ENTER

Machine replies with block list and prompt.

Flag set to ENTER …

"Ah, good, so we must have got it right then. A question mark (the prompt). It doesn't give me a listing. Try pressing ENTER again and see what happens."

User types ENTER

“No? Ah, I see. Is that one absolute block, is that the only blocks there are in the workspace?”

This interaction indicates that the user has derived a general rule for the interaction:

“If in doubt press ENTER”

After this the user realises that there was only one name in the list. Unfortunately, his second press of the ENTER key has put the machine into Edit mode, while the user thinks he is in command mode. As would be expected, the results are strange.

At this stage we can show the machine state transitions and the user’s representation together in a single diagram, figure 3.

This might not be elegant but it captures a lot of features of the interaction which might otherwise be missed.

Comment 42

The S.T.M. includes 'machine states' and the user's representation thereof. Differences between the two are likely to identify both errors and cognitive mismatches. However, the consequences – effective or ineffective interactions and domain transformations – are not represented, and need to be related to the G.S.M. (and the domain model). This raises, yet again, the issue of the relations between the multiple models required in the design process (see Figure 1 and Comments 26 and 39).

The final model we use calls upon models currently available in cognitive psychology which deal with the dynamics of word recognition and production, language analysis and information storage and retrieval. The use of this model is too complex for us to attempt a summary here.

Comment 43

The I.P.M. is noticeable only by its intended absence. This may have been an appropriate move at the time. However, any re-published paper would have to take the matter further. In so doing, at least the following issues would need to be addressed:

1. The selection of appropriate Psychology/Language I.P.M.s, of which there are very many, all in different states of development and validation (or rejection). (Note Card et al.'s synthesis and simplification of such a model in the form of the HIP – see Comment 15.)

2. The relation of the I.P.M. to all other models (see Comments 26, 35, 41 and 42).

3. The need to tailor any I.P.M. to the particular domain of concern to any application, for example, air traffic management (see Comments 6 and 39).

4. The level of description of the I.P.M. See also 1. above.

5. The use of any I.P.M. by designers (see Figure 1).

6. The ‘guarantee’ that Psychology brings to such models in the case of their use in design and the nature of its validation.

Conclusion

We have stressed the shortcomings of what we have called the Industrial Model and have indicated that the new user will deviate considerably from this model. In its place we have suggested an alternative approach involving both empirical evaluations of system use and the systematic development of conceptual analyses appropriate to the domain of person-system interaction. There are, of course, aspects of the I.M. which we have no reason to disagree with, for example the idea that the computer can beneficially transform the user's view of the problems with which he is occupied. However, we would appreciate it if someone would take the trouble to support this point with clear documentation. So far as we can see, it is simply asserted.

Comment 44

In the 34 years following publication of our original paper, numerous industry practitioners, trained in HCI models and methods, would claim to have produced 'clear documentation', showing that the 'computer can beneficially transform the user's view of the problems with which he is occupied'. This raises the whole (and vexed) question of how HCI has moved on since 1979, both in terms of the number and the effectiveness of trained/educated HCI practitioners. HCI community progress, clear to everyone, needs to be contrasted with HCI discipline progress, unclear to some.

Finally we would like to stress that nothing we have said is meant to be a solution – other than the methods. We do not take sides for example, on the debate as to whether or not interactions should be in natural language – for we think the question itself is a gross oversimplification. What we do know is that natural language interferes with the interaction and that we need to understand the nature of this interference and to discover principled ways of avoiding it.

Comment 45

Natural language understanding and interference smacks of science. Principled ways of avoiding interference smack of engineering. What is the relationship between the two? What is the rationale for the relationship? What is the added value to design (see also Comment 15)?

And what we know above all is that the new user is most emphatically not made in the image of the designer.

Comment 46

The original paper essentially conceptualises and illustrates the need for the proposed 'Framework for HCI'. That was sufficient unto the day thereof. However, what it lacks, thirty-four years later, is any exemplars, for example following Kuhn's requirement for knowledge development and validation. The exemplars would be needed for any re-publication of the paper and would require the complete, coherent and fit-for-purpose operationalisation, test and generalisation of the Framework, as set out in Figure 1. A busy time for someone…

 

References

[1] du Boulay, B. and O'Shea, T. Seeing the works: a strategy of teaching interactive programming. Paper presented at the Workshop on 'Computing Skills and Adaptive Systems', Liverpool, March 1978.

[2] Hammond, N.V., Long, J.B. and Clark, I.A. Introducing the interactive computer at work: the users' views. Paper presented at the Workshop on 'Computing Skills and Adaptive Systems', Liverpool, March 1978.

[3] Wilson, T. Choosing social factors which should determine telecommunications hardware design and implementation. Paper presented at the Eighth International Symposium on Human Factors in Telecommunications, Cambridge, September 1977.

[4] Documenting human-computer mismatch with the occasional interactive user. APU/IBM project report no. 3, MRC Applied Psychology Unit, Cambridge, September 1978.

[5] Hammond, N.V. and Barnard, P.J. An interactive test vehicle for the investigation of man-computer interaction. Paper presented at the BPS Mathematical and Statistical Section Meeting on 'Laboratory Work Achievable only by Using a Computer', London, September 1978.

[6] An interactive test vehicle for the study of man-computer interaction. APU/IBM project report no. 1, MRC Applied Psychology Unit, Cambridge, September 1978.

[7] Halliday, M.A.K. Notes on transitivity and theme in English, Part 1. Journal of Linguistics, 1967, 3, 199-244.

 

 

 

FIGURE 3: STATE TRANSITION EXAMPLE