Festschrift Published Papers

These papers were published in the Festschrift, 2010.

John Long Comments on Festschrift Published Papers

In my own contribution to the Festschrift (Some Celebratory HCI Reflections on a Celebratory HCI Festschrift), I celebrated ‘the Festschrift papers themselves (both accepted and rejected), their authors and their reviewers’. I also went on to write: ‘My natural instinct is to peer review the papers. Space and my honoured status forbid such a review. However, I hope to do this elsewhere (I owe it to the authors and myself)’. That ‘elsewhere’ has now arrived.

 

However, on reflection, to peer review the Festschrift papers now seems inappropriate. They were peer reviewed before publication and are unlikely to be re-published, at least in their present form. Nevertheless, I would still like to respond to the papers (‘what I owe myself’) and to contribute to the ideas expressed in them (‘what I owe the authors’). My ‘response’ and ‘contribution’, then, take the form of a commentary – a set of comments on the ideas put forward by the authors. The comments are wide-ranging, from simple clarifications to complex suggestions as to how the ideas might be developed further. The comments are intended to be constructive, even when critical. They are my way of expressing my thanks to the authors for their contributions to the Festschrift. I hope they find my comments both interesting and useful.

 

 

Festschrift Introduction

 

Alistair Sutcliffe

Manchester Business School, Booth Street West, Manchester M15 6PB, UK E-mail address: Alistair.Sutcliffe@mbs.ac.uk

Ann Blandford

UCL Interaction Centre, University College London, Malet Place Engineering Building (8th floor), Gower Street, London WC1E 6BT, UK Tel.: +44 (0)20 7679 0688. E-mail address: A.Blandford@ucl.ac.uk
John Long's Comment 1 on this Paper
The article is the introduction to the Festschrift by Alistair Sutcliffe and Ann Blandford, the two editors, both of whom I have known for more years than I care to remember.

Blandford I first met during her MRC/APU Cambridge days, when, with Phil Barnard and Michael Harrison (among others), she worked on the Amodeus project. We continue to bump into each other, of course, at UCL.

It was Sutcliffe (along with Linda Macaulay), who invited me to present a keynote paper at People and Computers V (1989), which became the Long and Dowell ‘Conceptions of the Discipline of HCI: Craft, Applied Science and Engineering’, of which more (yes much more, I am afraid) anon. According to the editors of the proceedings, earlier conferences ‘have reflected on the possible directions and immaturity of HCI’; this year (1989) ‘we intend to focus on the emerging maturity of the discipline’. As an aside, contrast my view, as expressed in the Festschrift: ‘HCI is still in its early stages. Trends and visions come and go. The field is too immature for any consensus agreement…’ No necessary contradiction. Perhaps the emergence of maturity in HCI simply takes a long time.

Initially, I was not very keen on a personal Festschrift, preferring instead a celebration of the work of the Ergonomics and HCI Unit at UCL. However, the Editors’ view quite rightly prevailed. Their introduction, as might be expected, is taken up with a short summary of each of the papers, published in the Festschrift. Hence, it prompts few comments. It is included here for completeness.

It is a pleasure to introduce this special Festschrift edition in honour of John Long’s contribution to Human Computer Interaction and the science of design more broadly.

Comment 2
The claim that Long contributed to “the science of design more broadly” is accepted, because some HCI theorists consider HCI to be a design science (for example, Carroll in his Festschrift paper writes: “although HCI was always conceived as a design science ……”). However, many do not. Like them, Long eschews the concept of HCI as a design science. In contrast, he claims to have contributed: to HCI, as a Design Discipline and to HCI, as an Engineering Design Discipline (Long and Dowell, 1989); to a conception of the HCI Engineering Design Problem (Dowell and Long, 1989 and 1998); to HCI Engineering Design Models and Methods; and to HCI Engineering Design Principles (Long, 2010). There is, then, a contrast between HCI, as a Science, and HCI, as Engineering. However, Long and Dowell do refer to their work as ‘an epistemological enquiry’. As such, it could obviously constitute a phenomenon for the ‘science of design’.

John Long is one of the founders of our discipline in the UK and contributed significantly to the emergence of HCI in the international arena.

Comment 3
 While the claim is (modestly) accepted, that Long is one of the founders of HCI in the UK (or at least that he was there at the start), the uniqueness of the claim depends on who else might also be considered a founder. Our listing might differ in interesting ways. 

As readers will see from the collection of papers which review and develop Long’s work, he questioned the nature of HCI at a deep level in proposing, with John Dowell, his well known and much cited ‘conception’ of the discipline (Long and Dowell, 1989; Dowell and Long, 1998). However, John Long’s HCI research started with work on menu design (Barnard et al., 1977), and frameworks of HCI attracted his attention soon afterwards (Morton et al., 1979). In addition to his theory research, Long’s contributions have included method development exploring the convergence of software engineering and HCI (Lim and Long, 1994), evaluation methodology (Denley and Long, 1997), analysis and design of socio-technical systems (Smith et al., 1997), and applying his expertise and knowledge to CSCW (Lambie and Long, 2002) and requirements engineering (Denley and Long, 2001).

 

In addition to his research, John Long developed the UCL Ergonomics unit from its origins in human factors into a research centre in Human Computer Interaction, now the UCL Interaction Centre (UCLIC).

Comment 4
 Although ‘Human Factors’ and ‘Ergonomics’ are often used interchangeably, the founders of the Ergonomics Unit would certainly have considered its origins to be in Ergonomics. Long’s main contribution was to introduce research to the Unit and in particular to develop research into HCI.

 

He introduced the first specialist Masters course in HCI that has produced usability, HCI and human factors experts who have taken their knowledge and John Long’s influence throughout the UK and worldwide. Long’s influence through the diaspora of his PhD students, post-doctoral researchers and the large number of graduates from UCL has been immense, spreading HCI in academia and industry throughout the world.

Comment 5
‘Throughout the world’ is a strong claim; but, on (modest) balance, fair enough. Particular areas influenced include: Europe (France; Holland; Belgium); USA; South America (Brazil; Colombia); and the Far East (Hong Kong; Singapore; Malaysia; China).

 

 

Fourteen papers were submitted to this special edition from a variety of leading HCI researchers, including several who were Long’s PhD students. After the review process five papers were accepted for publication.

The first two papers in this special edition focus on Long’s conception of the HCI discipline and develop the authors’ viewpoints on how HCI has been developing as a discipline since the conception was published and revised in the last millennium (Long and Dowell, 1989; Dowell and Long, 1998).

Comment 6
 To be clear, the HCI Discipline Conception (Long and Dowell, 1989) has never been revised. The HCI Design Problem Conception (Dowell and Long, 1989) has been re-expressed as the Cognitive Engineering Design Problem (Dowell and Long, 1998).

The special edition thus forms part of the debate on the future of HCI that has appeared in different forms in recent years, reflected in a wide range of papers: for example Carroll (2001), or Rogers’ surveys (1999, 2004) of the diversity of theoretical and pragmatic approaches to HCI as it evolves into new areas of technical and interactive endeavour. Other authors have explored the development of design and more situated, contextual interpretations of HCI (Dourish, 2004; McCarthy and Wright, 2005) demonstrating the diversity of debate which was started by John Long’s work.

Comment 7
 A debate indeed started by the work of Long and, of course, by that of others.

 

Carroll takes the opportunity to re-join the debate he shared with John Long for many years. He reviews the history of HCI in the science and engineering tradition which was the predominant focus in the 1980s and 1990s, but disagrees with the contention that HCI should be viewed as an engineering discipline. Instead, Carroll proposes that designs and artefacts can be seen as theories and reusable knowledge in their own right, a view which he developed into a rival framework for HCI in the task–artefact cycle. He argues that Long’s framework makes an over-rigid distinction between applied science, engineering and craft, since craft can deliver generalisable knowledge, while science can be directly applied to design via specialised models. Carroll then widens the debate to the future of HCI, noting that as applications have diversified from office work to entertainment, collaboration and social computing, HCI needs to develop as a meta-discipline of design, to evolve design quality beyond usability and master the techniques for delivering innovative and satisfying designs.

Dix also critiques the Long and Dowell framework of HCI, noting that the discipline has changed radically in recent years to embrace aesthetics, fun, entertainment and many new design goals. Dix also points out that the engineering view in the early years may have been more relevant when little HCI knowledge existed, especially in industry; however, he argues that successful design also requires considerable tacit knowledge to interpret design problems. He notes that the models and rules from science may often be mis-applied since the original assumptions and limitations are lost when knowledge is reused. He contends that HCI has succeeded through educating designers, but argues that a new conception is needed which focuses on methodology, in the true sense of the word: a study of process and methods. He argues that we need to develop, critique and integrate various techniques and processes by which we evaluate designs, in order to establish quality and generalisable knowledge. He believes that HCI should become a meta-discipline of design methodology.

The following three papers all demonstrate how John Long’s intellectual legacy has been developed in three very different directions, which form three samples of a much broader literature on UCL-authored methods, tools and applications influenced by Long’s mentoring and encouragement.

Wild reviews the UCL concept of HCI and work system design, then describes how he has extended it to address current concerns in service-oriented systems. Services marketing and service-oriented design are grounded in activity modelling, expanding approaches to domain analysis and the contextual influences of Long’s work. Wild argues that affective values and aesthetic aspects of the new HCI agenda can be integrated within Long’s engineering conception. Rather than retreating to discursive, craft-based approaches to emotion, motivation and values in design, he argues for systematic application of psychological knowledge. Wild illustrates his argument with a case study of design trade-off analysis for services using his ABFS method. The paper concludes with a discussion of the future contribution of HCI to service systems research in business and technology design.

Hill also develops the work systems heritage of Long’s research with a method for analysing socio-technical systems. Her PCMT framework analyses tasks in collaborative systems with a cognitive action model for communication, coordination and use of resources. The framework is applied to an emergency management case study to demonstrate how problems of poor coordination and access to resources can be discovered through modelling, leading to organisational, training and technology solutions to remedy potential pathologies in complex systems and operational procedures.

In the last paper, Salter demonstrates the intellectual reach of Long’s legacy in a method development study that takes the science and engineering elements of the HCI conception and applies them to the discipline of economics.

Comment 8
 Strictly speaking, neither the HCI Discipline Conception  nor the HCI Engineering Conception has elements of Science, as such. The user model of the Engineering Conception, however, obviously draws on the information processing tradition of Psychology.

 

Based on Kuhn’s framework for science paradigms, Salter argues that the engineering conception can be extended with formal processes that assure a set of requirements is matched by an artefact or design within the scope of generalised classes of problems. An approach to the design of markets in microeconomics is reviewed and systematised following Long’s recommendations for the production of engineering knowledge as principles, rules and laws.

Comment 9
 According to Long (2010), HCI Engineering Knowledge consists of: Models; Methods; and Principles. ‘Rules’ might properly be included in these types of knowledge; but ‘laws’ are eschewed, as typifying Science.

 

 

The ‘market engineering’ method is applied to review the history of doctors’ work and training allocation systems. Salter argues that the problem abstraction concerns matching doctors’ preferences and skills to the available jobs; and that this problem can be solved with preference–order matching algorithms derived from game theoretic approaches and suitable work system design to capture preferences. He then applies his engineering concept to the current financial crises where the matching problem abstraction can be used to realign and redesign banks so their services match the needs of different clients more transparently and effectively.

The special edition ends with a postscript in which we invited John Long to re-join the debate presented by Carroll and Dix, as well as review the contributions of the other three papers.

Comment 10
 In the event, Long reviewed all the papers similarly, that is, with the aim of ‘addressing by way of clarifications, issues problematic for EU research’.

A fitting testimonial to his work is to let him have the last word. It has been a pleasure working with John Long, and with all the authors, the reviewers and Dianne Murray of IWC to create this Festschrift special edition. The reviewers are listed below.

1. Reviewers for the Festschrift

We thank all the reviewers who made this Festschrift possible:

Anne Anderson, Chris Baber, Jonathan Back, David Benyon, Alan Blackwell, Paul Cairns, Gilbert Cockton, Andrew Dearden, Alan Dix, Ellen Do, Gavin Doherty, John Dowell, Janet Finlay, Dominic Furniss, Phil Gray, Michael Harrison, Jean-Michel Hoc, Steve Howard, Chris Johnson, Hilary Johnson, Linda Macaulay, Neil Maiden, John McCarthy, Shailey Minocha, Andrew Monk, Fabio Paterno, Stephen Payne, Mark Perry, Yvonne Rogers, Dominique Scapin, Helen Sharp, Wally Smith, Harold Thimbleby, Gerrit van der Veer, Frank Vetere, Peter Wild, Stephanie Wilson, Trevor Wood Harper, William Wong, Peter Wright

References

Barnard, P.J., Morton, J., Long, J.B., Ottley, P., 1977. Planning menus for display: Some effects of their structure and content on user performance. In: IEE Conference Publications No. 150: Displays for Man–Machine Systems. IEE, London.

Carroll, J.M. (Ed.), 2001. Human–Computer Interaction in the New Millennium. ACM Press, New York.

Denley, I., Long, J.B., 1997. A planning aid for human factors evaluation practice. Behaviour and Information Technology 16 (4/5), 203–219.

Denley, I., Long, J.B., 2001. Multidisciplinary practice in requirements engineering: problems and criteria for support. In: People and Computers XV – Interaction without Frontiers. Joint Proceedings of HCI 2001 and IHM 2001. Springer, London.

Dourish, P., 2004. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA.

Dowell, J., Long, J.B., 1998. A conception of the cognitive engineering design problem. Ergonomics 41 (2), 126–139.

Lambie, T., Long, J.B., 2002. Co-operative systems design: a challenge of the mobility age. In: Engineering CSCW. IOS Press, Amsterdam.

Lim, K.Y., Long, J.B., 1994. The MUSE Method for Usability Engineering. Cambridge University Press, Cambridge.

Long, J.B., Dowell, J., 1989. Conceptions of the discipline of HCI: Craft, applied science, and engineering. In: People and Computers V: Proceedings of Fifth Conference of the BCS HCI SIG. Cambridge University Press, Cambridge.

McCarthy, J., Wright, P., 2005. Technology as Experience. MIT Press, Cambridge MA.

Morton, J., Barnard, P., Hammond, N., Long, J.B., 1979. Interacting with the computer: a framework. In: Boutmy, E.J., Danthine, A. (Eds.), Teleinformatics ’79. Springer, Berlin.

Rogers, Y., 1999. Instilling interdisciplinarity: HCI from the perspective of cognitive science. SIGCHI Bulletin 31 (3), 4–8.

Rogers, Y., 2004. New theoretical approaches for human–computer interaction. Annual Review of Information Science and Technology 38, 87–143.

Smith, M.W., Hill, B., Long, J.B., Whitefield, A.D., 1997. A design-oriented framework for modelling the planning and control of multiple task work in Secretarial Office Administration. Behaviour and Information Technology 16 (3), 161–


Some celebratory HCI reflections on a celebratory HCI festschrift

John Long

University College London, UCL Interaction Centre, MPEB 8th Floor, Gower Street, London WC1E 6BT, United Kingdom
‘And so I face the final curtain,
Regrets, I’ve had a few’ (Sinatra).
but on celebratory reflection
‘Non, rien de rien,
Non, je ne regrette rien’ (Piaf).

 
Festschrifts are meant to be celebratory (Wikipedia, 2009). So are my reflections. First, I celebrate the very idea of an HCI festschrift. A bit premature perhaps; but better early, than too late, and it will serve to encourage others. Congratulations, then, to Alistair and Ann, along with Dianne, the IWC publishers and the festschrift authors for making it happen. I am very touched. However, this festschrift cannot be of the traditional kind. HCI is still in its early stages. Trends and visions continue to come and go. The field is too immature for any consensus agreement to celebrate individual contributors’ legacies to HCI discipline progress. In this respect, it does not help that my own work has been intimately bound up for 30 years with that of colleagues: first, from the MRC Applied Psychology Unit, Cambridge (MRC/APU); then from the Ergonomics and HCI Unit at University College London (EU/UCL). Even reflecting on one’s own festschrift might be considered unusual by some (again, all credit to the editors).

Second, I would like to celebrate myself. After all, a festschrift needs someone to honour. However, having reflected long and hard about how I got here, I have not come up with much by way of an explanation, other than having the luck to work with bright and engaging colleagues. As an aside, I put down what success I have enjoyed to never working with PhD students not cleverer than myself and never working with MSc students cleverer than myself. When I came across the latter, I converted them into the former (they know who they are).

Third, I would like to celebrate the world of HCI. Obviously, students, practitioners and researchers, who identify themselves with HCI and who together make up the HCI community. But also IT professionals, outside the community, who do not identify with HCI; but who actually design so many of the interfaces in use today. Most IT interfaces continue to be designed and implemented by such professionals. We forget them at our (professional) peril (see my ‘Hopes’ for HCI later).

Fourth, I would like to celebrate the attempts of the HCI community to make of itself an HCI design discipline (or disciplines). Twenty years ago, craft, applied science and engineering were identified as ‘possible alternative and equally legitimate’ such attempts (Long and Dowell, 1989, in their Discipline Conception for HCI). These attempts (and others) continue and should be celebrated, because, as they point out, one discipline ‘might be usefully but indirectly informed by the discipline knowledge of another’. Further, such mutual support ‘maximises the exploitation of what is known and practised in HCI. . . it encourages the notion of a community of HCI, superordinate to that of any single discipline…’ (also from Long and Dowell, whose greater truth is conceded by Carroll (2010)).

Fifth, I would like to celebrate HCI engineering, as one of these attempts to make of HCI a design discipline. Dowell and Long (1989, 1998) argue that such a discipline would acquire design knowledge in the form of design principles. The latter would support the diagnosis of design problems (see Hill, 2010) and the prescription of design solutions (see Salter, 2010). The design practice would be ‘specify then implement’ (the principle ensuring that the design solution required no testing). The scope of such principles would be determined by the ‘hardness’ of the design problems (that is, the extent to which they can be specified) and the (relative) determinism of the human behaviours, which the principles implicate (within the limits required for a particular design solution, for example, as in road traffic system protocols). Dowell and Long propose a conception for the general HCI design problem (referred to here as the HCI Design Problem Conception) of ‘users interacting with computers to perform work effectively’. The Conception expresses the problem more formally, such that it ‘might be embodied in (HCI) engineering principles’.

Sixth, I would like to celebrate the members and the work (referred to here as EU research) of the EU/UCL. Members included: MSc and PhD students; academic and administrative staff; visitors; and researchers. It was the greatest place to work and play (both hard). Early work consisted of psychology, writing up research from my MRC/APU, Cambridge PhD days (Long, 1980); applied psychology, originating with Donald Broadbent, my PhD supervisor (Long, 1995) and now under the new guise of Cognitive Ergonomics (Long and Whitefield, 1989). However, all of the EU research would consider itself to be engineering, of one sort or another. It attempted to advance the state of HCI in the short to medium term. All used the Discipline Conception. Most used the Design Problem Conception. The work covered most areas of HCI: user requirements (a method for multi-disciplinary practice – Denley and Long, 2001); design (MUSE a Method for Usability Engineering – Lim and Long, 1994); and evaluation (a planning aid to support evaluation practice – Denley and Long, 1997). Design-oriented substantive knowledge, in the form of user, interactive worksystem and domain models, as well as the methodological knowledge, required for their application to design, were both acquired (Smith, et al., 1997, in the domain of secretarial office administration; Hill and Long, 1996, in the domain of emergency services management; and Timmer and Long, 2002, in the domain of air traffic management – see also Hill, 2010). Later EU research, using both Discipline and Design Problem Conceptions, attempted to acquire formal HCI design principles, as envisaged by Long and Dowell earlier – principles, which offer a better guarantee in solving design problems, than other forms of knowledge, such as heuristics, guidelines, models and methods. Better because principles support the derivation of design solutions, given design problems. 
In the interests of clarity, this research is referred to here as HCI (Principles) Engineering (see Carroll, Wild and Hill later). Early and initial HCI design principles have been proposed for ‘hard’ problems in the domains of domestic energy management (Stork, 1999) and of business-to-customer electronic commerce (Cummaford, 2007).

Taken together, the two lines of EU research provided support for (iterative) ‘specify and implement’ design practice (models and methods) and ‘specify then implement’ practice (principles). Judgement on their success or failure, I leave to others.

Seventh, I would like to celebrate the festschrift papers themselves (both accepted and rejected), their authors, and their reviewers. My natural instinct is to peer review the papers. Space and my honoured status forbid such a review. However, I hope to do this elsewhere (I owe it to the authors and to myself). Instead, in the spirit of the festschrift, I will attempt to address, by way of clarifications, issues problematic for EU research. For copies of papers, referencing the author’s work – see Long (2010), this issue.

Carroll’s paper (2010) raises some serious issues for the research, as well as offering some (unwitting) compliments. Although an unreconstructed ‘existentialist’ in my private life, I am delighted to be considered a ‘positivist’ (although perhaps not a ‘hoary’ one), when it comes to HCI engineering. Far from being discourteous, I take it as a compliment. Like one of Carroll’s reviewers, any time I fly, I am thankful for the odd positivist engineer in the design team of the airplane. Also a compliment to be linked with the name of Allen Newell (in the same sentence, no less). Pity Carroll castigates us both for ‘going too far’. Otherwise, Carroll finds little to celebrate in the Long and Dowell Discipline Conception (1989). His celebration is positively (and prematurely) funereal. It is also a pity he focusses only on the Discipline Conception paper, because the issues he raises are all addressed elsewhere. They relate to his doubts concerning: the specifiability of designs; the determinism of human behaviours, implicated by them; and the extended scope of HCI since 1989.

The issues are not really a problem for EU ‘models and methods’ research (see earlier). For example, Hill (2010) uses both Discipline and Design Problem Conceptions to diagnose design problems and to reason about design solutions to those problems in the domain of emergency services management. Both are specified well enough to establish the relations between them for the purposes of the research. The human behaviours implicated are those of trained emergency personnel, as specified by co-ordination protocols and so, deterministic enough for the purposes of specifying a possible design solution.

The issues are, however, serious for HCI (Principles) Engineering. They are addressed in Dowell and Long (1989, 1998). The specifiability of designs and the determinism of human behaviours, implicated by them, all depend on the ‘hardness’ of the design problem (see in particular Fig. 2, a classification of design disciplines, which plots discipline practices against discipline knowledge with respect to the ‘hardness’ or ‘softness’ of general design problems). Early and initial design principles have already been referenced, concerning EU research (Stork, 1999; Cummaford, 2007). The completeness of design problem specification is only with respect to design problem solution specification and the design principle specification, which supports the formal derivation of the latter from the former. Stork’s principles are in the domain of domestic energy management and Cummaford’s in electronic commerce (the leisure, pleasure and education of Carroll’s extensions to HCI, since 1989, which he claims the Discipline Conception cannot express). The ‘complete-for-purpose’ specifications, along with the additional value-based difference between actual and desired performance, that is, the design problem of ‘humans interacting with computers to perform work effectively’ (see Wild’s paper later), demonstrate that: (1) EU research is far from ‘snarled in the intellectual trap’ of the Discipline Conception; rather, the latter is a necessary pre-requisite for the Design Problem Conception; (2) the latter’s analysis is far from ‘nihilistic’, since it is a necessary pre-requisite for HCI design principles, which show promise of offering more reliable HCI design knowledge to date than any other types of such knowledge; and (3) the two Conceptions offer clear criteria by which their effectiveness can be judged, so meeting Carroll’s requirement for social construction (without being abjectly subjectivist).
Thus, both Carroll’s technical issues and philosophical ‘swipes’ are considered to be countered.

Dix’s paper (2010) focusses on the challenge of methodological thinking in HCI. He raises the issue of ‘work’ as the (too limited) scope of HCI engineering (see also Carroll, 2010; Wild, 2010); but notes the later expression as ‘any activity seeking effective performance’, so including some leisure, domestic, and entertainment activities. Dix also raises the issue of developing (more) reliable HCI design knowledge (the original motivation of the two Conceptions). In this respect, he argues the need for validation of such knowledge by justification and evaluation. I concur completely. However, I have argued elsewhere that validation needs to include: conceptualisation; operationalisation; test (Dix’s evaluation); and generalisation (Long, 1996, 1997, 1999). Further, without some kind of consensus conception (as part of Dix’s ‘common ground’), researchers cannot validate, or even compare, each other’s work, so making design knowledge more reliable. If we cannot agree on what is a design problem (or whatever), how can we possibly agree which model/method/principle prescribes a design solution and so validate the knowledge (Long, 1997)? Hence, the need for conceptualisation and so, conceptions.

Wild’s paper (2010) applies the HCI Design Problem Conception to services and services research. He raises some critical issues for EU research. First, whether HCI engineering can address social, hedonic and experiential concerns, as required by the design of some services. Also, whether it can accommodate aesthetics, experience, emotion and value(s) (see also Carroll and Dix earlier). If these concerns can be even partially specified, they can be represented by the Design Problem Conception (see Lambie et al. (1998), as concerns co-operative work, and Lambie and Long (2002), as concerns the engineering of CSCW). For example, a computer games ‘fun’ interactive (work) system would seek to transform the state of the games-playing user in terms of a domain object ‘experience’ (from ‘undesired to desired’), made up of two sub-objects ‘fun’ (from ‘none to much’) and ‘emotion’ (from ‘none to good’). The user, as part of the worksystem, might accrue motivational (‘conative’) and emotional (‘affective’), as well as ‘cognitive’ costs. However, HCI Engineering Principles could only be developed more widely (another of Wild’s issues), if the associated design problems were ‘hard’ (see Carroll earlier). In the absence of such problems, principles could not be developed more widely. Wild rightly concludes that ‘either we need to consider in more depth what it would mean to “engineer” such services, including experience or we need to work out the relationship between the different perspectives on development’. My view is that both approaches should be pursued and related (see my ‘Hopes’ for HCI later). Last, Wild raises the issue of ‘value(s)’ (value; quality; choice; worth etc.), as critical for service design and how they might relate to service effectiveness.
The Conceptions express a design problem as the difference between actual and desired performance for some worksystem with respect to its domain of application, specified as how well the work is performed (‘task quality’) and the workload (‘user costs’) in performing the work that well. Value would be expressed as (part of) the rationale for (re)designing the system, such that actual equals desired performance (see also Hill, 2010). Alternatively, if value is to be part of a value (work) system, then it should be treated as ‘fun’ was earlier, for example, by postulating a ‘value’ domain object (transformed from ‘nil’ to ‘positive’). Either way, the Conceptions can accommodate value(s) and relate them to effectiveness.

Hill’s paper (2010) reports research, using both Conceptions to develop models of the UK Emergency Management Response System. The latter co-ordinates the emergency services, including police, medical and fire, when they respond to disasters. A method is proposed, which uses the models to diagnose design problems and to support reasoning about design solutions. Together, the models and method constitute design knowledge. This research raises two critical issues for EU research.

First, the relationship between design problems and user requirements (Denley and Long, 2001). Hill never mentions the latter, generally considered to be the starting point for the HCI practice of system development. Support for such development surely requires some view of how user requirements and design problems might relate. Following the two Conceptions, design problems occur when actual performance does not equal (usually falls short of) desired performance (see Wild earlier). In contrast, user requirements have no such constraints. I suggest, then, a non-co-extensive relationship: all design problems can be expressed as (potential) user requirements, but not vice versa. This difference needs to be acknowledged by both HCI research and practice (see Salter, later in this issue, who addresses the same problem in terms of ‘client requirements’).

The second critical issue, raised by Hill, is the relationship between ‘models and methods’ research and Principles research (see Stork, 1999; Cummaford, 2007 earlier). Hill recognises ‘validated design principles, supporting general solutions to general classes of design problem, as the most effective support for practice in the longer term’; but she does not identify a relationship between the two types of research: whether, for example, one can build on the other and, in particular, whether her work can in some way form the basis of Principles research. It may be that there is no relation of this kind. However, since both types of research share the same Conceptions, this seems unlikely. The relationship may have been implicit or poorly understood (or both), when Hill began the research (who were the supervisors, I wonder?). The most plausible set of relations, I would suggest, is as follows. ‘Models and methods’ research shows promise to be carried forward into Principles research, if it succeeds in specifying the models and methods themselves in terms of the Design Problem Conception (as in Hill’s research). It is more promising, if the models and methods also support the diagnosis of design problems (again, as in Hill’s research). It is even more promising, if the models and methods prescribe design solutions to the diagnosed design problems (only informally and by way of illustration in Hill’s research). However, it is most promising, if the problems and solutions are completely specified, allowing Principles research to attempt to identify the commonalities (and the non-commonalities) between them, to support the formulation of a principle by which the solution is formally derivable (or not) from the problem. These are all ways in which ‘models and methods’ research can support Principles research.

Salter’s paper (2010) applies the HCI Discipline Conception to economic systems. He does so by means of a generic conception of an engineering discipline. This raises two important issues for EU research. The first is the scope of HCI and the expression of its general design problem. Long and Dowell (1989) assumed these to be the same for all of the HCI disciplines of craft, applied science and engineering (which, in contrast, differed in their knowledge and practices). Salter’s generic conception sets out criteria by which a discipline can be considered an engineering one. For example, ‘Criterion 1: The description of the general problem should describe the requirements component and the artefact component of the problem and the relationship between them’. Long and Dowell did this for HCI engineering, expressing its problem as: ‘to design users interacting with computers to perform work effectively’, such that actual equals desired performance. However, because Salter’s criterion is specific to engineering, it leaves open the possibility that other HCI disciplines may have a different expression. For example, craft and applied science disciplines rarely, if ever, refer to the domain. In contrast, the domain is critical to both engineering Conceptions, as it grounds the worksystem and is the basis for ‘task quality’ and so effectiveness, along with ‘user costs’. Possible differences between the HCI disciplines’ expressions of the general problem would have implications for the relations between disciplines and so for consensus (and discipline progress).

The second issue, raised by Salter, concerns the distinction he makes between empirical and formal techniques, for example, as they appear in his Design Practice Exemplars (Fig. 8) and his Research Exemplars (Fig. 9). Neither Conception addresses this point generally, except in the case of the formality of design principles. However, there appears to be a need to relate user requirements to design problems, in the case that the former are insufficiently specified to qualify as the latter (see Hill’s paper earlier). Hence, the relation cannot be formal, unlike a principle’s derivation of a solution from a problem, but only ‘informal’ (preferred to Salter’s ‘empirical’). The distinction, then, between formal and informal knowledge (Salter’s techniques) needs to be at least more generally referenced by the Conceptions.

I would like to bring these reflections to a close by celebrating the future of HCI in the form of some ‘Fond Hopes’. First, I hope there will be more festschrifts, perhaps even of the more traditional kind, celebrating individual legacies, with more confidence, born of an increased consensus of what HCI is, what it does and how well it does it.

Second, I hope that HCI research improves the effectiveness of the design knowledge which it acquires to support HCI design practices (a hope shared by Festschrift authors – knowledge which is ‘more assured’ (Carroll), ‘more reliable’ (Dix) and ‘offering a better guarantee’ (Hill)). Anyone who doubts this need should seriously consider: (1) how much interface design is performed by IT professionals outside the HCI community; (2) how little actual design, as opposed to related studies or evaluation, is carried out by individual HCI practitioners (as consultants) or even by those working as teams in large organisations; and (3) how much design is performed with little or no reference to HCI design knowledge (of any or no conception), other than perhaps evaluation. But how is this much-needed improvement in HCI design knowledge to be achieved? In my view, it can only come about if HCI research and practice diagnose more design problems and prescribe more design solutions, and in so doing evaluate the effectiveness of HCI design knowledge (of whatever kind).

Third, I hope that HCI engineering (both as validated design principles and as models and methods, their precursor – see Hill earlier) continues to be pursued, as one among a number of alternative and equally legitimate approaches to HCI. HCI needs all the help it can get. Inventors will be needed to invent in ways likely to be neither understood nor codifiable. Craft will be needed to address ‘soft’ problems and, along with other approaches, to provide an initial response to revolutionary technological and socio-cultural changes. Applied science will be needed by those either wishing to understand HCI better or wishing to borrow psychology, sociology, anthropology, ethnomethodology, etc. models and methods and transform them into HCI ones. But I hope HCI engineering will also be needed, to codify the design knowledge required to diagnose design problems and to prescribe design solutions for ‘hard’ problems. I wish it well.

References

Carroll, J.M., 2010. Conceptualizing a possible discipline of human-computer interaction. Interacting with Computers 22 (1), 3–12.

Cummaford, S.J.O., 2007. HCI Engineering Design Principles: Acquisition of Class-Level Knowledge. Unpublished Doctoral Thesis, University College London.

Denley, I., Long, J.B., 1997. A planning aid for human factors evaluation practice. Behaviour and Information Technology 16 (4/5), 203–219.

Denley, I., Long, J.B., 2001. Multidisciplinary practice in requirements engineering: problems and criteria for support. In: Blandford, A., Vanderdonckt, J., Gray, P. (Eds.), People and Computers XV – Interaction without Frontiers. Joint Proceedings of HCI 2001 and IHM 2001. Springer Verlag, London.

Dix, A., 2010. Human-computer interaction: a stable discipline, a nascent science, and the growth of the long tail. Interacting with Computers 22 (1), 13–27.

Dowell, J., Long, J.B., 1989. Towards a conception for an engineering discipline of human factors. Ergonomics 32, 1513–1535.

Dowell, J., Long, J., 1998. Target paper: conception of the cognitive engineering design problem. Ergonomics 41 (2), 126–139.

Hill, B., 2010. Diagnosing co-ordination problems in emergency management response to disasters. Interacting with Computers 22 (1), 43–55.

Hill, B., and Long, J., 1996. A preliminary model of the planning and control of the combined response to disaster. In: Proceedings of the 8th European Conference On Cognitive Ergonomics (ECCE8), Granada, Spain, pp. 57–62.

Lambie, T., Long, J., 2002. Engineering CSCW. In: Blay-Fornarino, M., Pinna-Dery, A.M., Schmidt, K., Zarate, P. (Eds.), Co-operative Systems Design: A Challenge of the Mobility Age. IOS Press, Amsterdam.

Lambie, T., Stork, A., and Long, J., 1998. The co-ordination mechanism and cooperative work. In: Proceedings of the 9th European Conference On Cognitive Ergonomics (ECCE9), Limerick, Ireland, pp. 163–166.

Lim, K.Y., Long, J.B., 1994. The MUSE Method for Usability Engineering. Cambridge University Press, UK.

Long, J., 1980. Effects of prior context on two-choice absolute judgements without feedback. In: Nickerson, R.S. (Ed.), Attention and Performance VIII. Erlbaum, Hillsdale, NJ.

Long, J., 1995. Commemorating Donald Broadbent’s contribution to the field of applied cognitive psychology: a discussion of the special issue papers. Applied Cognitive Psychology 9 (S1), 197–215.

Long, J., 1996. Specifying relations between research and the design of human– computer interactions. International Journal of Human–Computer Studies 44 (6), 875–920.

Long, J., 1997. Research and the design of human–computer interactions or ‘whatever happened to validation’? In: Proceedings of HCI’97, Bristol, pp. 223–243.

Long, J., 1999. Specifying relations between research and the practice of solving applied problems: an illustration from the planning and control of multiple task work in medical reception. In: Gopher, D., Koriat, A. (Eds.), Attention and Performance XVII. MIT Press, Cambridge, MA, pp. 259–284.

Long, J.B., Dowell, J., 1989. Conceptions for the discipline of HCI: craft, applied science and engineering. In: Sutcliffe, A., Macaulay, L. (Eds.), Proceedings of the Fifth Conference of BCS HCI SIG. Cambridge University Press, UK.

Long, J.B., Whitefield, A.D. (Eds.), 1989. Cognitive Ergonomics and Human– Computer Interaction. Cambridge University Press, UK.

Salter, I.K., 2010. Applying the conception of HCI engineering to the design of economic systems. Interacting with Computers 22 (1), 56–57.

Smith, M.W., Hill, B., Long, J.B., Whitefield, A.D., 1997. Modelling the relationship between planning, control, perception and execution behaviours in interactive worksystems. In: Monk, A., Diaper, D., Harrison, M. (Eds.), Proceedings of the Seventh BCS HCI SIG Conference. Cambridge University Press, UK.

Stork, A., 1999. Towards Engineering Principles for Human–Computer Interaction (Domestic Energy Planning and Control). Unpublished Doctoral Thesis, University College London.

Timmer, P., Long, J., 2002. Expressing the effectiveness of planning horizons. Le Travail Humain 65 (2), 103–126.

Wikipedia, 2009. The free encyclopaedia – see under ‘Festschrift’.

Wild, P.J., 2010. Longing for service: bringing the UCL conception towards services research. Interacting with Computers 22 (1), 28–42.

Further reading

Long, J., 2010. Some celebratory HCI reflections on a celebratory HCI festschrift. Interacting with Computers 22 (1), 68–71.


Longing for service: Bringing the UCL Conception towards services research

Peter J. Wild

Institute for Manufacturing, University of Cambridge, 17 Charles Babbage Road, UK

John Long's Comment 1 on this Paper

Although I am sure that I have met Wild a couple of times at conferences, I do not know him personally; I am aware of his work, but I have not had any extended discussions with him. It is with great interest, then, that I read his contribution to the Festschrift, in which he applies the Long and Dowell (1989) and Dowell and Long (1989) Conceptions to Services Research, without having worked at the Ergonomics and HCI Unit at UCL. It is a very welcome example of HCI researchers building on each other’s work, an issue which I have raised a number of times in my comments on other Festschrift contributions. I am very pleased, in the comments which follow, to try to contribute to Wild’s attempt to apply the Conceptions.

Abstract

There has been an increase in the relevance of and interest in services and services research. There is an acknowledgement that the emerging field of services science will need to draw on multiple disciplines and practices. There is a growing body of work from Human–Computer Interaction (HCI) researchers and practitioners that considers services, but there has been limited interaction between service researchers and HCI. We argue that HCI can provide two major elements of interest to service science: (1) the user centred mindset and techniques; and (2) concepts and frameworks applicable to understanding the nature of services. This second element is the major concern of this paper, where we consider Long’s work (undertaken with John Dowell) on a Conception for HCI. The conception stands as an important antecedent to our own work on a framework that: (a) relates the various strands of service research; and (b) can be used to provide high-level integrative models of service systems. Core concepts of the UCL Conception, such as domain, task, and structures and behaviours, partially help to relate systematically different streams of services research, and provide richer descriptions of them. However, if the UCL Conception is moved towards services, additional issues and challenges arise. For example, the kinds of domain changes that are made in services differ; services exist in a wider environment; and effectiveness judgements are dependent on values. We explore these issues and provide reflections on the status of HCI and Service Science.

1. Introduction

As well as becoming an ever more important part of local and global economies, services and service design are emerging, crossing, and in some cases redefining disciplinary boundaries. Papers have emerged in HCI venues that have explicitly examined services (e.g. Chen et al., 2009; Cyr et al., 2007; Magoulas and Chen, 2006; van Dijk et al., 2007). Service has emerged as a frequent metaphor for a range of computing applications, web based, pervasive and ubiquitous: here researchers and practitioners often talk of services instead of applications. This is in addition to a service metaphor in Service-Oriented Architectures (Luthria and Rabhi, 2009; Papazoglou and van den Heuvel, 2007), and the Software as a Service concept. The user, value, and worth centred ethos of HCI (e.g. Cockton, 2006; McCarthy and Wright, 2004) is making its way into service design approaches (e.g. Cottam and Leadbeater, 2004; Jones and Samalionis, 2008; Parker and Heapy, 2006; Reason et al., 2009).

Definitions of services stress the intangible, activity, and participatory nature of services (e.g. Hill, 1977; Lovelock, 1983; Lovelock and Gummesson, 2004; Rathmell, 1974; Shostack, 1977; Vargo and Lusch, 2004a). Hill defined services as ”some change is brought about in the condition of some person or good, with the agreement of the person concerned or economic unit owning the good (1977, p. 318).” This definition suggests that services are activities upon objects and artefacts, both natural (people, pets, gardens) and designed (cars, houses, computers), as well as concrete (e.g. bodies, equipment) and abstract (e.g. education, publishing, therapy).

Comment 2

The general discipline problem of HCI is: ‘humans and computers interacting to perform work effectively’ (Long and Dowell, 1989). Work is conceived as: ’any activity seeking effective performance’ (Long, 1996). Services, as ‘activities upon objects and artefacts’, thus have much in common with HCI, although the latter puts more emphasis on design, technology and effectiveness.

Hill is also keen to stress the role of exchange, and to distinguish between activities that can and cannot be solely performed by oneself, noting that ”if an individual grows his own vegetables or repairs his own car, he is engaged in the production of goods and services. On the other hand, if he runs a mile to keep fit, he is not engaged because he can neither buy nor sell the fitness he acquires, nor pay someone else to keep fit for him (1977, p. 317).” Hence services are potentially transferable activities performed by self or other to achieve a range of benefits (e.g. save money, sense of accomplishment). Some can be legally enforced onto an economic unit (e.g. tax, insurance, MOT), therefore implying a forced transfer.

Recently, the monikers service science and service systems have emerged from initiatives to support an interdisciplinary dialogue on services (IfM and IBM, 2008). Service systems have been defined as ”dynamic configurations of people, technologies, organisations and shared information that create and deliver value to customers, providers and other stakeholders (IfM and IBM, 2008, p. 1),” with service science being ”the study of service systems and of the co-creation of value within complex constellations of integrated resources (Vargo et al., 2008, p. 145).” There is much in common, both conceptually and empirically between HCI and service science.

Comment 3

Note that: ‘HCI’ and ‘Service Science’ do not necessarily mean the same as: ‘HCI Science’ and ‘Service Science’. The latter have ‘Science’ in common, whose discipline problem is understanding, expressed as the explanation and prediction of natural phenomena. The former may differ, for example, HCI as engineering, whose discipline problem is design for effectiveness, as diagnosis and prescription (Dowell and Long, 1989). The use of HCI to inform Service Science needs to be sensitive to such differences.

These commonalities include the goal to create robust and repeatable activities/experiences that are objectively and/or experientially successful; continuing issues with the speed of change in the phenomena being studied; and the theory–practice gap. However, despite potential opportunities and overlaps, this is not a rebranding of HCI by another name.

Hence, as an emerging area, service science could benefit from HCI’s experience, specifically: (1) the user centred mindset and techniques; and (2) concepts and frameworks for understanding the nature of services.

Comment 4

Some researchers conceive of HCI as a science, or as an applied science (Carroll, 2010; Dix, 2010). Associated concepts and frameworks would indeed support understanding of HCI and might support understanding of Service Science too. However, concepts and frameworks for HCI as engineering (‘design for effective performance’) are unlikely to support Service Science (but might well support Service Engineering).

It is the second area that is the major concern of this paper, although we return to the issue of the user centred mindset in the paper’s conclusions. HCI has both produced and adopted rich theoretical tooling in its efforts to understand interaction with and through IT artefacts. Whilst seemingly diverse, with ontological and epistemological differences, these efforts share a common concern to represent the structure of individual and collective activities in a manner that informs the design of new IT artefacts and activities. This key role of activity representations in HCI is often backgrounded in favour of a view centred on the technology being developed. However, representing the activity, and latterly the experience of that activity, being supported/enabled by technology is one of HCI’s key methodological outputs.

One of this paper’s concerns is the UCL Conception (Dowell and Long, 1989), one of the conceptual frameworks put forward for HCI. The conception offers a set of abstractions for HCI, and has guided several streams of work within UCL and elsewhere. Diaper noted that it is ”perhaps the most sophisticated framework for the discipline of HCI (2004, p. 15)”, with its emphasis on effectiveness/performance being a key part of Diaper’s reasoning behind this assertion. Several of the concepts from Dowell and Long’s work have informed our own framework, developed to relate different strands of service research together (Wild et al., 2009a,b), and thus Dowell and Long’s work acts as an important antecedent to our work in services.

Comment 5

Wild’s treatment of the Dowell and Long (1989) Conception for an engineering discipline of HCI, as an ‘important antecedent’, implies at least some consensus. This is precisely the pre-requisite for HCI researchers to build on each other’s work to develop more, and more reliable, HCI knowledge and so a mature discipline. See also Dix Comments 3, 4, 10 and 13.

1.1. Paper overview

With this context in mind, Section 2 provides an outline of relevant aspects of services research by covering service definitions. In addition, the section covers our understanding of the UCL Conception (Section 2.2) and relates core concepts from the UCL Conception to different strands of services research (Section 2.2.1). Section 3 first covers the Activity Based Framework for Services (ABFS), our own framework, of which the UCL Conception is an important antecedent. We then consider a number of issues that prevented us from applying Dowell and Long’s concepts as-is to represent and relate strands of services research and model service systems. Finally, we provide a number of illustrative examples of the use of the ABFS for modelling service systems (Section 3.2). Section 4 summarises and concludes the paper, discussing whether services are within the remit of HCI and, if so, what HCI may face when interacting with Service Marketing and Service Operations, two areas that have had a focus on services and have varying claims to user representation and/or involvement.

2. Relevant literature

2.1. Services: a necessarily short and biased overview

It is difficult to trace the growth in importance of services because of differences in the ways that they are defined and reported over time and between countries (Hill, 1977, 1999). Economic downturns aside, a figure that is often cited is that services account for 70–80% of Western economic activity (IfM and IBM, 2008; Parker and Heapy, 2006). During recent years, a number of monikers have been put forward for a shift to service as the focus of economic and intellectual activity.1 Listing these terms alone would distract the reader, so this overview concerns service definitions. The aim is to provide an overview of the different disciplinary perspectives on services.

To even the casual observer, the term services embraces a number of different forms and contexts: from intangible services undertaken on abstract objects (such as information and knowledge); via services on people (such as medicine and education); through to maintenance procedures on hardware. In addition, many contexts include all these types of services. Large scale availability contracts for complex products can involve information gathering, forecasting, education and training, and the supply of additional tools (e.g. IT artefacts and Support Equipment), in addition to the maintenance of the actual product (Goedkoop et al., 1999; Terry et al., 2007).

Work within Economics and Services Marketing has attempted to provide generic characterisations that show the commonality between these different types of services. The earliest work on service definition was in Economics. Hill (1999) provides a good review of thought on services in Economics; he covers the travails that Economists such as Smith, Say, Senior, Mill, Marshall and Hicks went through in trying to define services. This early work characterised services as different from material goods, and as involving different forms of production and delivery. In addition, ownership rights could be established over goods and, because of their material nature, they can be stored and inventoried, as well as having their life extended through maintenance or remanufacture. In contrast, services were deemed intangible, variable in quality, and unable to be owned or stored. During the 1970s and 1980s, Services Marketing emerged as a discipline in its own right to study flows of services between producers and consumers, working from the view that products and services were different enough to warrant an approach different to mainstream marketing. Two literature reviews (Fisk et al., 1993; Zeithaml et al., 1985) helped to solidify four characteristics as the core distinctions between products and services, namely Intangibility, Heterogeneity, Inseparability, and Perishability (IHIP). Vargo and Lusch summarised these four features as ”Intangibility—lacking the palpable or tactile quality of goods. Heterogeneity—the relative inability to standardize the output of services in comparison to goods. Inseparability of production and consumption—the simultaneous nature of service production and consumption compared with the sequential nature of production, purchase, and consumption that characterizes physical products. Perishability—the relative inability to inventory services as compared to goods (Vargo and Lusch, 2004b, p. 326)”.
Lovelock and Gummesson (2004) characterised the IHIP qualities as forming a ‘textbook consensus’ on how Services Marketing represented itself to its own students, and to other disciplines. The IHIP characteristics partially enabled services marketing to ‘break away’ from mainstream product-oriented marketing and fuelled a number of research streams, including representation and evaluation of services. Several papers (Lovelock and Gummesson, 2004; Vargo and Lusch, 2004b; Wyckham et al., 1975) question the IHIP characteristics as a foundation for Services Marketing. Some researchers suggest refinements (Hill, 1999; Wild et al., 2007). Others suggest alternative paradigms/logics such as Nonownership (Lovelock and Gummesson, 2004) or the Service-Dominant logic (Vargo and Lusch, 2004a).

We return to the Service-Dominant logic later, but some of the most useful refinements of the core IHIP ideas come from Lovelock (1983) and Hill (1999). Hill (1999) argued for retaining a distinction between services and goods. However, the dyad needed refining into a triad, with intangible goods (e.g. books, music compositions, films, processes, plans, blueprints and computer programs) being added. Hill recognised that intangible goods need a manifestation mechanism. Traditionally this has been via physical media, but increasingly the medium is virtual (albeit one which is still reliant on computer memory structures). Lovelock (1983) provided a number of useful characterisations of services, one of which was that they can be tangible or intangible. Combining these, we gain a fourfold division: tangible and intangible goods, and tangible and intangible service activities (Wild et al., 2009a).

Comment 6

This division between tangible and intangible goods and services would seem to correspond to Dowell and Long’s (1989) use of the physical and abstract. The latter division would apply both to goods and to services.

A marketable offering will in general provide a range of such tangible and intangible elements (Shostack, 1977). It is also possible to refine each of the four entities and the relationships between them, as well as to relate them to other product and service attributes (Wild et al., 2007).

Outside of economics and marketing, there has been strong and growing interest in services, with research undertaken in disciplines such as engineering, manufacturing, computing, and design. A number of approaches have emerged that explicitly tackle the design of services, or the co-design of product(s) and services. Prominent approaches include Product-Service Systems (PSS, Goedkoop et al., 1999; Tukker, 2004) and Functional Products (FP, Alonso-Rasgado et al., 2004). PSS has often been associated with the sustainability agenda, with a key idea being the substitution of a general service function (e.g. transportation) for a specific product (e.g. personally owned cars) to reduce the ecological impact of high levels of under-utilised or inefficient products. Goedkoop et al. (1999) defined a PSS as ”a marketable set of products and services capable of jointly fulfilling a user’s need. The PS system is provided by either a single company or by an alliance of companies. It can enclose products (or just one) plus additional services. It can enclose a service plus an additional product. And product and service can be equally important for the function fulfilment (1999, p. 18).” PSS can be seen as designing the PSS Service-In (Wild et al., 2009b); in contrast, the Functional Product (FP, Alonso-Rasgado et al., 2004) and Industrial Product-Service Systems (IPS2, Aurich et al., 2007) approaches work from the Product-Out (Wild et al., 2009b), offering services that can feasibly be offered around a product or product family. The latter approaches tend to be explicitly motivated to seek additional profit and revenue opportunities from service (Wild et al., 2009b), along with a general shift from ‘product plus parts’ to providing overall product availability (Terry et al., 2007). The design foci for PSS/FP/IPS2 can cover the product, its support artefacts, related activities and necessary social and organisational structures (Goedkoop et al., 1999; Wild et al., 2009b).
There can be an over-emphasis on technical issues in the PSS/FP/IPS2 design approaches put forward (see Roy and Shehab, 2009); however, because of the influence of marketing, there can be elements of customer representation and interaction in the proposed design processes (e.g. Alonso-Rasgado et al., 2004). Ironically, despite the professed interest in environmental issues, work in PSS has provided little in-depth theorisation about ecological and environmental factors (c.f. Costanza et al., 1997; Hawken et al., 1999).

In computing, a service metaphor – rather than mathematical or physical ones (i.e. functions or modules) – has driven developments in Service-Oriented Architectures (Luthria and Rabhi, 2009; Papazoglou and van den Heuvel, 2007). The risk is that SOA researchers see such architectures as being solely concerned with application-to-application interaction (Kounkou et al., 2008), ignoring the fact that these applications are carrying out activities for people. Some work links business process models and SOAs; but service as an analogy for software modularity has been argued by Kounkou et al. (2008) to miss deeper user centred abstractions based on the needs and values of various stakeholders. Despite a variety of perspectives within the process modelling communities (Melão and Pidd, 2000), there is sparse evidence that a user-centred perspective is being taken in such process modelling efforts. Work in progress is moving towards remedying this by integrating HCI knowledge into SOA development lifecycle practices (Kounkou et al., 2008), or by supporting the composition of services by non-technical users (Namoune et al., 2009). In addition to work in SOA, there is the concept of Software as a Service (SaaS), whereby an IT artefact is used but transfer of ownership does not take place (Bennett et al., 2000). These concepts have interacted with the SOA and Cloud Computing communities, who have largely focussed on exploring architectures. Again we see a lack of input from HCI knowledge and practice, and a corresponding lukewarm reception (Pring and Lo, 2009).

There are a number of service design approaches from the design community (e.g. Cottam and Leadbeater, 2004; Jégou and Manzini, 2008; Nelson, 2002; Parker and Heapy, 2006). In many ways design practitioners are the vanguard of the interaction between HCI concepts and approaches and the design of services (see Jones and Samalionis, 2008; Parker and Heapy, 2006; Reason et al., 2009). Parker and Heapy assert that the prevailing mindset for services is that they are “seen as a commodity, rather than something deeper, a form of human interaction” (p. 8). They utilise a number of techniques from HCI, but there is little linkage of their work to explanatory accounts of design processes. Without such explanations it is unclear how much service design success is due to the craft skill of the designers involved and how much is down to the methods and ethos employed. Nelson (2002), a designer and systems theorist, discussed various service metaphors (Lip, Room, Social/Public, Military/Protective) along with their strengths and weaknesses. Nelson’s concern is to use design processes to enable an approach that combines the best of these service metaphors whilst avoiding their downsides. The goal of such “full service is adequate essential and significant to the well-being of the clients and stakeholders” (Nelson, 2002, p. 46).

Comment 7

Wild’s reference to HCI ‘explanatory accounts of design processes’ is consistent with his earlier references to HCI and Service Science (according to one meaning at least) – see Comment 3. Like Dix and Carroll, Wild needs to make explicit, and so show how to validate, the relation between Science, as understanding, and Applied Science/Engineering, as design (see also Dix Comment 1).

 

Such full service involves: a relationship of maturity and complexity; a conspiracy of empathy and creative struggle; a contract between equals where all parties have a voice; provision for the common good; and evocation of the uncommon good (Nelson, 2002, p. 416). Nelson has offered little in the way of methodological support, but related work by Cockton in HCI could assist the exploration of service values (e.g. Cockton, 2006).

Nelson’s consideration of the values driving design practice, and those that should be enabled by services, leads us to consider a position that has emerged in Services Marketing: the Service-Dominant Logic (SDL; Vargo and Lusch, 2004a). Here all marketable offerings, whether classed as goods or services, are considered to provide an element of service. Building on Hill’s (1977) definition presented previously, service is defined as “the application of specialized competences (knowledge and skills), through deeds, processes, and performances for the benefit of another entity or the entity itself” (Vargo and Lusch, 2004a).

Comment 8

This definition of Service is not inconsistent with the Conception of the HCI discipline, as knowledge supporting practice to solve the design problem of humans interacting with computers to perform effective work – here, desired changes to goals and services.

Products – whether physical or software – along with services, exist to provide service. When we buy a product such as a car or bike, we gain both the product and the benefits of the skills of those who produce and supply it to us. When we use a service such as a bus, we gain the benefit of the journey, and of the skills and capabilities of the bus company, but also the temporary use/benefit of their products. The distinctions between products and services – it is argued – become irrelevant (Vargo and Lusch, 2004b): both are approaches to providing something the recipient cannot or will not do themselves. Stauss (2005) is one of the most strident critics of this position, noting that the fact that physical goods and services both bring about value does not necessarily imply that both are produced in the same way, or that they bring about the same kind of value in the same way.

A key part of the SDL is its concern with value-in-use rather than value-in-exchange.

Comment 9

HCI currently seems to have little to say about the difference between ‘value-in-exchange’ and ‘value-in-use’. However, there is no reason to doubt that the difference, provided that it is specifiable, can be reflected in the concept of effectiveness over time, rather than at a single point in time.

Essentially, value-in-exchange views the value or benefit of a product or service as being embodied in the offering and realised at the point of exchange. In contrast, value-in-use is the value actually received from use of a product or receipt of a service. Whilst the two concepts have been around since Aristotle, it is claimed that the predominant mindset in society has been value-in-exchange (Ramirez, 1999; Vargo and Lusch, 2004a). However, beyond the distinction between value-in-exchange and value-in-use, there remains considerable ambiguity within Vargo and Lusch’s literature on the very meaning of value. Their original paper (Vargo and Lusch, 2004a) uses the term value over 100 times without definition.2 Since Vargo and Lusch’s earlier presentations, a 10th Foundation Principle for the SDL has been added, which states that “value is always uniquely and phenomenologically determined by the beneficiary” (Vargo and Lusch, 2008). This appeal to the subjective and intersubjective nature of value is predated by work in Economics, such as that of the ‘Austrian’ school.3 Finally, the SDL builds on two types of resources. Operand resources are physical resources upon which an operation or act is performed to produce an effect. Operant resources are employed to act upon operand resources, and concern issues such as knowledge and skills; they are “likely to be dynamic and infinite and not static … they enable humans to multiply the value of natural resources and to create additional operant resources” (Vargo and Lusch, 2004a, p. 3).

HCI may be a discipline well placed to explore the implications of this conceptualisation of product and service use. Its focus on user participation in design processes could be extended to provide analysis and guidance throughout the life of artefacts.

Comment 10

Wild’s claim here is consistent with the proposal made in Comment 8.

 

There is relevant work on issues such as aesthetics (Hassenzahl et al., 2000; Lindgaard and Whitfield, 2004), experience (McCarthy and Wright, 2004; Sengers, 2003), emotion (Harper et al., 2008), and value(s) (Cockton, 2007; Harper et al., 2008) to draw upon in turning the SDL into a methodologically tractable approach.

Comment 11

There is, indeed, HCI research into aesthetics, experience, emotion, and values, as claimed by Wild. However, this work has yet to be turned into a ‘methodologically tractable approach’ for HCI, never mind for products and services over time. Nevertheless, the research remains available for application by SDL (Service-Dominant Logic), as stated by Wild.

 

However, HCI has to date had limited interest in post-delivery usage. Data can be collected as a basis for the redesign of the next version, but the founding methodological principle of an early and continuous focus on users and their tasks fades once an artefact is delivered (Wild and Macredie, 2000). To assume that an IT artefact assessed as effective at launch will remain so throughout its lifetime seems naive. Therefore, whilst in principle HCI is pursuing value-in-use, its actual performance in evaluating it is below its potential and stated ethos.

Finally, our concern turns to service representation. Shostack (1984) developed and introduced Service Blueprinting, which has become the primary representation technique for services. It originally included: the temporal order of customer and service provider actions; the timings of these actions; the tangibles in support of the activities; and the line of visibility (i.e. which actions the service recipients can and cannot see in their exchanges with a service provider). Later publications have refined the work (e.g. Bitner et al., 2007; Fließ and Kleinaltenkamp, 2004), including a spiral lifecycle model. Service Blueprinting is described in numerous services marketing textbooks, and has been elaborated in papers by Fließ and Kleinaltenkamp (2004) and Bitner et al. (2007). One refinement has been an increase in the number of ‘lines’ in the blueprint. Fließ and Kleinaltenkamp (2004) list five: (1) interaction: separates customer and supplier interaction; (2) visibility: what customers see; (3) internal interaction: front and back office capabilities; (4) order penetration: activities that are independent of and dependent on customers; and (5) implementation: separates planning, management, control, and support activities. Of all the work in Services Marketing, Service Blueprinting has been one of the approaches used most ‘outside’ of the discipline, for example, in research and practice in Functional Products (Alonso-Rasgado et al., 2004), Design (Parker and Heapy, 2006), and Product-Service Systems (Morelli, 2006). There is, however, no known conceptual or empirical comparison of Service Blueprinting with other methods for mapping processes or analysing tasks (e.g. CTT, IDEF, BPMN, UML). There are no known studies of the efficacy of Service Blueprinting, or of its perceived or actual usability by end users of a service. Neither is it clear what depth of user representation and involvement is needed in the approach. Is it simply an attempt by the service designers to represent the service from what they think is the user’s perspective, or does Service Blueprinting demonstrate a deeper philosophical commitment (see Bekker and Long, 2000) to user involvement and representation?
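The blueprint elements described above – ordered actions, timings, tangibles, and Fließ and Kleinaltenkamp's five 'lines' – can be captured in a simple data structure. The following Python sketch is purely illustrative: the class names and the toy café example are our own, not part of any published blueprinting notation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Line(Enum):
    """The five 'lines' listed by Fliess and Kleinaltenkamp (2004)."""
    INTERACTION = 1           # separates customer and supplier interaction
    VISIBILITY = 2            # what customers see
    INTERNAL_INTERACTION = 3  # front office vs. back office
    ORDER_PENETRATION = 4     # customer-dependent vs. independent activities
    IMPLEMENTATION = 5        # planning, management, control, support

@dataclass
class Action:
    """One step in the service, with Shostack-style timing and tangibles."""
    name: str
    duration_min: float
    tangibles: list = field(default_factory=list)
    line: Line = Line.VISIBILITY

@dataclass
class ServiceBlueprint:
    actions: list  # in temporal order

    def visible_to_customer(self):
        """Actions above the line of visibility."""
        return [a for a in self.actions
                if a.line in (Line.INTERACTION, Line.VISIBILITY)]

    def total_duration(self):
        return sum(a.duration_min for a in self.actions)

# Toy example: a cafe service (invented for illustration).
blueprint = ServiceBlueprint([
    Action("take order", 2, ["menu"], Line.INTERACTION),
    Action("brew coffee", 4, ["espresso machine"], Line.VISIBILITY),
    Action("restock beans", 10, [], Line.ORDER_PENETRATION),
])
```

Such a representation would, for instance, allow the temporal order, timings, and visibility partition to be queried programmatically; it says nothing, of course, about the efficacy or usability questions raised above.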

We caution against viewing services research as coherent, despite the large amounts of rhetoric, discussion, support from industrial sources, and recent research funding in the area. Our own modest efforts in this area have shown relationships between activity concepts embodied in approaches such as task analysis, process modelling, and the UCL Conception, and a range of research strands in services (Wild et al., 2009a,b), but this remains a long way from a unified and accepted paradigm. What this work recognised was that across the varieties of services research in existence – some of which we have covered here – there is a variety of recurring concerns, but they had not been brought together within one framework. These concerns cover: value and values; the relationship between domains and activities; the relationships between products and domains, and between service activities and domains; the relationship between service provider and service recipient; and the kinds of overt and covert resources that are used in service performance. In turn, concepts within service approaches relate to activity-oriented approaches such as Task Analysis, and Process and Domain modelling. Dowell and Long’s work was an important antecedent of our work, and this paper provides an opportunity to reflect both on its influence and on why it could not be used as-is to represent and relate strands of services research and model service systems.

2.2. The UCL Conception

One of Long’s many contributions to HCI is his work with John Dowell on what we label the UCL Conception (see Dowell and Long, 1989, 1998). The UCL (University College London) Conception’s utility has been demonstrated in a number of contexts: it has been used to compare task analysis approaches; to scope HCI education syllabi; to help understand change occurrences; and to model task planning, control, perception and execution; as well as in emergency management, air traffic control, and cooperative work.

The UCL Conception defines Interactive Work Systems (IWS) as cognitive systems whose scope encompasses two types of participant, people and IT artefacts, interacting to perform tasks in a domain. ‘Domains’ are composed of abstract or physical objects whose attributes may be mutable. Tasks are activities that are concerned with changing these domain object attributes. Organisations express their requirements for changes to a domain through the specification of goals. A ‘product’ goal is scoped towards a domain and is the intention to change several attributes and objects in the domain. A product goal breaks down into a number of ‘task’ goals that alter individual attributes. Different forms of task are recognised, notably interactive, offline, automated (Dowell and Long, 1989; Lim and Long, 1994) and enabling (Whitefield et al., 1993), with the latter being tasks that put the IWS into a state where it can be used (e.g. booting, opening applications). Each participant of an IWS (i.e. person or IT artefact) has ‘structures’ and ‘behaviours’ that support task performance. ‘Structures’ provide capabilities in reference to a participant’s environment and can be physical or abstract. ‘Behaviour’ is the activation of structures to execute changes in the IT artefact and domain. Structures are physical (e.g. electronic, neural, biomechanical and physiological) or abstract (e.g. software, or cognitive representational schemes and processes). Similarly, behaviours may be physical, such as printing to paper or selecting a menu, or abstract, such as deciding which document to open, or problem solving. The many-to-one mapping between people’s structures and their behaviours allows the production of different behaviours from the same physical and psychological structures.
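To make the relationships between these concepts concrete, the core entities of the UCL Conception – domain objects with mutable attributes, participants with structures and behaviours, and task goals that alter individual attributes – can be sketched in code. The fragment below is our own illustrative rendering, not a formalisation proposed by Dowell and Long; all names and the letter-writing example are invented.

```python
from dataclasses import dataclass

@dataclass
class DomainObject:
    """Domain objects carry attributes that tasks may transform."""
    name: str
    attributes: dict

@dataclass
class Participant:
    """A person or IT artefact; structures are capabilities, and a
    behaviour is the activation of a structure to effect a change."""
    name: str
    structures: set

    def behave(self, obj, attribute, value):
        obj.attributes[attribute] = value  # behaviour changes the domain

@dataclass
class InteractiveWorkSystem:
    """People and IT artefacts interacting to perform tasks in a domain."""
    participants: list

    def perform_task_goal(self, obj, attribute, value):
        """A task goal alters an individual attribute of a domain object
        (here, naively, via the first participant)."""
        self.participants[0].behave(obj, attribute, value)

letter = DomainObject("letter", {"state": "draft"})
iws = InteractiveWorkSystem([
    Participant("author", {"typing", "planning"}),   # person
    Participant("word processor", {"rendering"}),    # IT artefact
])
iws.perform_task_goal(letter, "state", "final")  # the 'work' performed
```

A product goal would, in this sketch, simply be a collection of such task goals over several attributes and objects.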

Comment 12

The Dowell and Long (1989) HCI conception is well described here. However, it is worth noting that: (1) interactions perform tasks with some degree of effectiveness, not just perform them; (2) domain objects are physical, or physical and abstract, rather than physical or abstract; (3) at least some domain object attributes are necessarily mutable (otherwise no ‘work’ can be performed, that is, no object attribute transformations can be made by the interactive worksystem); (4) structures are physical, or physical and abstract, rather than physical or abstract (as indeed are domain objects – see (2) earlier).

 

Dowell and Long (1989) view effectiveness (later termed performance) as a function of task quality (the quality of a ‘product’ created by an IWS) and resource costs (the costs to participants of establishing structures and producing behaviours). Desired effectiveness is set by specifying desired task quality and desired resource costs. Similarly, actual effectiveness can be measured as a function of actual task quality and actual resource costs (Dowell and Long, 1998, p. 139). Task quality pertains to goals, that is, what people and organisations want from a domain (e.g. speed vs. quality). Structural resource costs relate to the costs of setting up the structures and behaviours of both IWS elements to carry out tasks, whilst behavioural resource costs are those incurred in actual task performance. There has been no attempt to produce – or link to – a taxonomy of development costs, beyond the distinction between structural and behavioural costs.
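Dowell and Long do not prescribe a particular function combining task quality and resource costs. Purely to show how desired and actual effectiveness could be specified and compared, the sketch below treats effectiveness as quality minus costs; the subtraction and all numbers are our own placeholders, not part of the conception.

```python
def effectiveness(task_quality, structural_costs, behavioural_costs):
    """Placeholder: the conception says only that performance is *some*
    function of task quality and resource costs (structural + behavioural)."""
    return task_quality - (structural_costs + behavioural_costs)

# Desired effectiveness: specified task quality and resource costs.
desired = effectiveness(task_quality=0.9,
                        structural_costs=0.2, behavioural_costs=0.1)

# Actual effectiveness: measured after task performance.
actual = effectiveness(task_quality=0.8,
                       structural_costs=0.2, behavioural_costs=0.2)

shortfall = desired - actual  # a basis for redesign decisions
```

The point of the sketch is only that both terms are needed: a system may achieve high task quality yet be judged ineffective because of the resource costs incurred.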

Comment 13

Again, this is a good description of the Dowell and Long (1989) Conception for HCI. However, it is worth noting that: (1) effectiveness and performance are often interchangeable, but in all cases a function of Task Quality and Resource Costs, as stated by Wild; (2) people and organisations desire changes to domain objects (for example, slow versus fast (that is, speed), or high versus low (that is, quality)); and (3) structural and behavioural resource (that is, set-up) costs can be quantified (as well as distinguished).

Dowell and Long view the domain as a distinct part of an IWS’s environment – it is the world in which tasks originate, are performed, and have their consequences. The domain characterisation is ‘oriented to objects’. Domains are composed of abstract and physical objects with attributes. These attributes have states that may be able to change. Attributes can be physical or abstract, and may need to be inferred when the domain is studied. Objects can have both abstract and physical attributes. Printed texts, for example, have abstract properties that support the communication of messages and physical properties that support the visual representation of information (Dowell and Long, 1989); there is therefore a coupling between the domain objects and entities with structures and behaviours capable of perceiving their affordances. Dowell and Long (1989) maintain that levels of complexity emerge amongst attributes at different levels of analysis. Attributes that emerge may subsume those that arise out of lower levels; thus, a printed text could be a letter, a tax return or an instructional text. Dowell and Long (1989) stress that objects should be described at an appropriate level, but give no indication of how to do this.

Comment 14

An instance of domain modeling, including the appropriateness of levels of description, can be found in Hill’s paper on the Emergency Management Combined Response System (2010).

 

 

Their work has emphasised physical changes and cognitive processes (i.e. changes to information and knowledge); little is said about social or emotional changes.

Comment 15

Conceptions can be judged in terms of their completeness, coherence, and fitness-for-purpose. The reasons given by Wild for not being able to apply the Dowell and Long (1989) conception directly to services suggest its incompleteness (but do not exclude its non-coherence). Whether its fitness-for-purpose is appropriate or not depends on whether Wild wants to use it for understanding (Services Science) or design (Service Engineering). See also Comment 3.

 

2.2.1. Relating the UCL Conception to services research

We now consider how the entities within UCL Conception relate to varying strands of services research. Table 1 provides a comparison of core concepts of the UCL Conception with certain strands of services research across different disciplines.

Comment 16

Mapping two conceptions or frameworks to each other is a non-trivial matter. However, it is essential, if HCI researchers are to build on each other’s work, as Wild does here. It is, thus, worth considering some of the issues raised.

First, given two conceptions A and B, there are three possible mappings: (1) A to B; (2) B to A; and (3) C to A and B. (1) and (2) appear to be the same; but differ as to the conception assimilated (B in (1) and A in (2)). In (3), A and B are both assimilated to C and the latter is carried forward.

Second, mapping may be carried out by equivalence (in (1) and (2), some or all concepts in A may be the same as, or equivalent to, some or all concepts in B). Mapping may also be carried out by generification (in (1) and (2), some or all concepts in A may have some features of some or all concepts in B). Lastly, mapping may be carried out by abstraction (in (1) and (2), some or all concepts in A may be abstractions of some or all concepts in B).

Third, different conceptions may have been developed, consistent with different criteria, for example, consistency; coherence; and fitness-for-purpose. Normally, the criteria of the assimilating conception are the ones carried forward.

Wild claims that there are ‘overlaps between concepts within services research and the (Dowell and Long) conception’. This claim suggests the potential for a generification-type relation between the two sets of concepts. However, closer examination of Table 1 indicates an informal equivalence relationship, for example, (work as) tasks and services as tasks; but with exceptions, for example, people as co-creators, as well as worksystem components. The relationship can only be informal at best, because services research embodies several conceptions, for example, Service Blueprinting and Service-Dominant Logic. The mapping is, thus, many-to-one, rather than one-to-one.

Given these overlaps, at first sight, between concepts within services research and the conception, the concepts could provide a framework to situate and relate the various disciplinary strands of service research. However, we argue that Dowell and Long’s work cannot be used as-is to represent and relate strands of services research and model service systems. The reasons are varied and include: services exist in a wider environment (Section 3.1.1); effectiveness judgements are dependent on values (Section 3.1.2); service demands a richer notion of people (Section 3.1.3); different kinds of abstract objects need to be represented and reasoned about (Section 3.1.4); the relationship between a core and a service system needs to be represented (Section 3.1.5); and going beyond engineering (Section 3.1.6). We ask the reader to bear with us whilst we provide an overview of the ABFS, which then allows us to address and expand on these points.

3. The activity based framework for services (ABFS)

Working from the view that services are consistently defined as activities – rather than objects or artefacts – the concepts (e.g. Roles, Domains, Actants, Artefacts, Goals, Tasks) of the ABFS are drawn from activity modelling approaches, such as task analysis (Diaper, 2004), domain and process modelling (Dowell and Long, 1989, 1998; Melão and Pidd, 2000), and soft systems methodology (Checkland and Poulter, 2006). This synthesis produced a framework that can relate together the disparate streams of service research (Wild et al., 2009a) and help classify the design Foci (i.e. what is being designed) of service design approaches (Wild et al., 2009b).

Comment 17

Wild’s claim that the ABFS framework was synthesized from task analysis, domain and process modeling, and soft systems methodology is not inconsistent with Comment 16, which suggests an informal equivalence relationship between the framework and the Dowell and Long Conception. Note also the design orientation of the framework (rather than (scientific) understanding – see also Comments 2 and 3).

 

  • Tasks: the majority of service definitions class services as tasks, but they are perhaps most prominent in the Service Blueprinting approach (Shostack, 1984). There are classifications of service tasks, focussing around organisational division (Shostack, 1977); economic relationships (Hill, 1977, 1999); the relationship to a product (Tukker, 2004) or a more ‘general’ consideration of their nature (e.g. Lovelock, 1983)
  • Goals: are not a first class entity in most approaches to services, but can be tacit in Service Blueprinting and in discussions of value (Flint, 2006; Vargo and Lusch, 2004a)
  • People: are considered as co-creators of value in the Service-Dominant Logic (Vargo and Lusch, 2004a); are implied by lines of visibility in Service Blueprinting; and within the IHIP qualities are implied by Inseparability, and their role in service paradigms such as Nonownership (Lovelock and Gummesson, 2004)
  • IT artefacts: the Service-Dominant Logic argues that software, along with tangible products and services, is a mechanism for delivering benefit to another party; this mirrors work in computing/information systems that promotes a view based on Software-as-a-Service (Bennett et al., 2000). Beyond such high-level and general statements there is little work that examines the overall role of IT artefacts in service within the services community
  • Domain: alongside tasks, domains have a close conceptual correspondence to the IHIP debates and refinements. In Section 2.1, we noted that there is a distinction between tangible and intangible products and tangible and intangible activities. We argue that there is a correspondence between intangible and tangible activities and products, and concrete and abstract IWS tasks and domain objects
  • Structures and behaviours: are covered in the SDL as the resources (operand and operant) that different parties bring to service exchanges. Often the terms are used loosely in other services research and, more importantly, in a non-systemic manner
  • Effectiveness: is time- and cost-oriented in Service Blueprinting; work on service quality evaluation has often focussed on perceived or experienced Reliability, Assurance, Tangibles, Empathy, and Responsiveness (Seth et al., 2005), with some later work focussing on personal values (Lages and Fernandes, 2005). In Services Marketing generally, revenue and profitability have been high-level effectiveness measures

Table 1: Comparing the concepts of the UCL Conception to services research strands.

The ABFS is represented schematically in Fig. 1: service activities are carried out within a service system. The system embraces the objects (both abstract and physical) and the goals and values held by various individual and collective Actants. Activities are carried out by actants and artefacts (both IT and non-IT) to affect the objects in a domain. Not illustrated in the representation is the potential for an overlap between the domain and the actants and artefacts. This overlap captures potential recursive relationships between people and domains (e.g., self-directed education), and the distinction between coherence and correspondence domains (Vicente, 1990), that is, domains that ‘exist’ virtually within an IT artefact (e.g. 3D graphics, the internet).

 

 

A service system can be considered to have a variety of success measures, depending on the value (i.e. benefit) sought, and this is evaluated when the quality goals are balanced against the resource costs. Resource costs include affective and socio-cultural costs, as well as the consideration of the physical environment discussed by Stahel (1986). A service system has an environment, which has socio-cultural and physical dimensions. Borrowing from Dowell and Long (1998), we suggest that actants and artefacts have structures and behaviours. Long and Dowell’s concepts generalise, and we assume a wider set of structures and behaviours, namely the physical and the socio-cultural (Stahel, 1986). Thus, we assume physical (eco-support system, toxicology system, flows-of-matter system) and socio-cultural (Elster, 2007; Hall, 1959) structures and behaviours alongside those of artefacts and individuals (i.e. the IWS). The costs of setting up and maintaining these structures and behaviours are evaluated in service success assessments. This in turn depends upon the set of values that can be identified as applicable to the service system.
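The ABFS offers no calculus for these assessments. Purely to illustrate the idea that success judgements depend on the applicable value set, the sketch below weights quality goals and resource costs (including socio-cultural and physical ones, such as pollution) by stakeholder values; the function, dimension names, and numbers are all hypothetical.

```python
def service_success(quality_goals, resource_costs, values):
    """Value-weighted benefit minus value-weighted costs.
    quality_goals / resource_costs: dicts keyed by dimension;
    values: weights expressing what the stakeholders count as important.
    (Illustrative only: the ABFS does not prescribe this arithmetic.)"""
    benefit = sum(values.get(k, 0.0) * v for k, v in quality_goals.items())
    cost = sum(values.get(k, 0.0) * v for k, v in resource_costs.items())
    return benefit - cost

success = service_success(
    quality_goals={"timeliness": 0.8, "social_coherence": 0.6},
    resource_costs={"set_up": 0.2, "pollution": 0.5},
    values={"timeliness": 1.0, "social_coherence": 0.7,
            "set_up": 1.0, "pollution": 0.9},
)
```

Note that changing the `values` weights alone changes the verdict: a service system judged successful under one value set may not be under another, which is the sense in which effectiveness judgements are value-dependent.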

Comment 18

Wild suggests that service systems have ‘physical socio-cultural environments’. Further, that there are physical and socio-cultural structures and behaviours, as for artefacts and individuals. The question then arises as to how, and by what means, ‘affective, socio-cultural costs’ are incurred. Figure 1 is not clear on this point.

A comparison of the ABFS concepts against specific streams of services research can be made: Table 2 (adapted from Wild et al., 2009a) presents our assessment of the depth of consideration of varying approaches against the core concepts of the ABFS.

Comment 19

As well as the ‘depth of consideration of varying approaches against the core concepts of the ABFS,’ it would be interesting to identify the actual concepts, employed by these same varying approaches. Such a listing, in conjunction with the Dowell and Long Conception would support the address of some of the issues, concerning mapping between conceptions, raised by Comment 16.

 

Several additions and considerations mark a departure from Dowell and Long’s concepts, due to the need to engage with the services literature and with modelling service systems. Notable divergences and additions are: the inclusion of artefacts other than computers; the inclusion of a physical and socio-cultural environment; the inclusion of values; the reintroduction of affective and conative resource costs; an environment other than the domain; and the potential overlap between the domain and actants or tools.
The original purpose of the ABFS was to relate different strands of services research (see Wild et al., 2009a). However, recent work has started to examine the framework as a modelling approach. A specific interest is in modelling service systems for complex engineered products, and the transitions to different, possibly more ‘effective’, configurations. The aim is to produce high-level and integrative models that illustrate: the systemic nature of service systems; the distinction between different kinds of domain; and how different values affect the overall structure of a service system. An ABFS ‘model’ can be considered an elaborated representation of a Human Activity System (see Checkland and Poulter, 2006), and can act as a general system model.

3.1. ‘Barriers’ to moving the UCL Conception towards services

Having provided an outline of the ABFS, including details of the work we draw upon, we examine the barriers within the UCL Conception to its use, as-is, as a framework for relating wider strands of services research and providing models of service systems.

3.1.1. Service activities exist in a wider environment

The environment tends to become a catch-all in activity-based approaches, covering all the things that are not first-class entities in their modelling worldview. Those with a systems orientation may try to draw a boundary around the core system of interest.

Comment 20

Dowell and Long (1989) indeed propose such a boundary: ‘The worksystem has a boundary enclosing all user and device behaviours, whose intention is to achieve the same goals in a given domain. Critically, it is only by defining the domain that the boundary of the worksystem can be established’. Elsewhere, Dowell and Long (1989) propose that a domain of application may be conceptualized as ‘a class of affordance of a class of objects’. Taken together, the two proposals can be considered sufficient for drawing ‘a boundary around the core system of interest’, as mooted by Wild.

However, boundaries are rarely easily defined (Mingers, 2006), existing across socio-cultural, physical and computational levels and often being observer-dependent (Checkland and Poulter, 2006; Mingers, 2006). In the UCL Conception, the domain is the only ‘environment’ represented; thus, whilst the Conception is ecological, it is only weakly so.

Comment 21

In Dowell and Long (1989 and 1998), the strong ecological relationship is between the worksystem and the domain. They write: ‘The worksystem clearly forms a dualism with the domain: it therefore makes no sense to consider one in isolation of the other.’ Wild is presumably referring to a different type of ecological relationship.

Outside of the domain and the IWS, nothing is said about context. How the domain is distinguished from the general environment is related to goals – they refer to desired changes in part of the world – but the processes that demarcate the domain are not made clear.

Comment 22

Dowell and Long (1989) are very clear about the demarcation of the domain from the general environment: ‘The domains of cognitive work are abstractions of the ‘real world’, which describe the goals, possibilities and constraints of the environment of the worksystem.’ For ‘real world’, we can read ‘general environment’ and for ‘environment’ we can read ‘domain’. The ‘real world’ or ‘general environment’ is not separately specified, other than by its expression in the domain. Wild is, thus, correct that the Dowell and Long conception specifies only one environment – that of the domain.

 

Table 2: Comparison of the ABFS concepts against specific streams of services research

Dowell and Long (1998 p. 130) admit that a domain cannot be completely formalised, but it is not clear whether the domain’s boundary with the rest of the world is open or closed or something in between.

Comment 23

The formality with which a domain can be expressed depends on the ‘hardness’ of its design problem (Dowell and Long, 1989). They argue: ‘…..the dimension of problem hardness, characterising general design problems, and the dimension of specification completeness, characterising discipline practices, constitute a classification space for design disciplines….’

 

We argue that to embrace services research it is important to represent not just the immediate environment of interest to services activities (i.e. the domain), but the wider environment within which activities are carried out. Here we find that HCI approaches can benefit from environmental concepts within services research strands such as the functional economy (e.g. Stahel, 1986). Within our own work, a secondary environment was synthesised by drawing on Stahel’s (1986) work on the Functional Economy, along with other work (Elster, 2007; Hall, 1959). This secondary environment is assumed to be composed of a socio-cultural system, along with a physical system.

Comment 24

According to Dowell and Long (1989): ‘The worksystem has a boundary, enclosing all user and device behaviours, whose intention is to achieve the same goals in a given domain. Critically, it is only by defining the domain that the boundary of the worksystem can be established…’ No ‘secondary environment’, in Wild’s sense, is postulated, as such. However, some of its features, for example, physical and socio-cultural ones, could be expressed in terms of the worksystem and the domain.

 

The former, along with the intellectual frameworks we drew upon, is discussed in the presentation of the ABFS. The latter breaks down into: the eco-support system for life on the planet (e.g. biodiversity); the toxicology system; and the flows-of-matter system, especially as they relate to recycle and remanufacture decisions. Both elements provide potential sources of quality and resource cost measures (e.g. social coherence and pollution) for judging the effectiveness of service systems.

3.1.2. Effectiveness judgements are dependent on values

Effectiveness is taken to refer to whether activities are achieving some higher level or long term goal (see Checkland and Poulter, 2006, pp. 42–44), and we assume the term was originally chosen to distinguish it from efficiency (using resources well); and efficacy (that the activity is working).

Comment 25

According to Dowell and Long (1989): ‘Effectiveness derives from the relationship of an interactive worksystem with its domain of application – it assimilates both the quality of the work performed by the worksystem and the costs incurred by it. Quality and cost are the primary constituents of the concept of performance, through which effectiveness is expressed.’

 

Values are the criteria with which judgements are made about other entities (Checkland and Poulter, 2006). There has been a steady increase in the use of values and related terms such as value, quality, choice and worth (e.g. Cockton, 2006; Karat and Karat, 2004; Light et al., 2005). Cockton (unpublished) notes that HCI’s original quest to be a scientifically and engineering oriented discipline led it to background values, and not examine its own values as a discipline.

In our context, our interest is a little more mundane: the impact that different value sets and value choices have on the configuration and judgement of service systems. Values affect which service system configurations actants view as acceptable or unacceptable, and how they judge their effectiveness (e.g. labour and time savings vs. social cohesion and fight reduction). For example, placing a high value on one’s carbon footprint can lead to transport choices such as cycling and walking, which may be in a trade-off situation with time-use goals and values. Alternatively, certain values and their trade-offs may lead to alternative behaviours, such as carbon offsetting, using renewable fuel and lift/car-share schemes.

Values are a key aspect of scoping the other elements of a service system. The form of the service system can reflect the values held by its actants. Whilst a high-level goal of restaurants is to provide food and generate profit, the values held by the owners, staff and patrons can drive radically different manifestations of eating location, menu, and experience. Comparing a high-class joint with a roadside catering outlet without reference to values would be meaningless. Yet their basic transformations and domain objects remain remarkably similar: the preparation and serving of foodstuffs.

Comment 26

The similarity resides only in the high level of description. Major differences would appear, following Dowell and Long (1989), at lower levels of the description of the worksystem and the domain.

Some individuals and groups see their values as true and objective; those who do not share these values are classed as having none (Beck and Cowan, 1996; Goodwin and Darley, 2008). Witness the view held by many in the ‘environmental’ movement that short-term, non-resource-renewing, profit-driven enterprises have no values. Such enterprises have values, but they are focussed on assessing effectiveness by profitability and capital liquidity, rather than sustainability. It becomes a misnomer to claim to be placing human values at the core of the HCI discipline (see Harper et al., 2008), as all our values are human. A human valuing technological progress and subscribing to technological determinism – whilst potentially passé – is still holding and enacting human values; to claim otherwise would be to claim a truly objective and non-human position on values. The issue is whether those values are reflected in the design process, whether their maintenance or desired change is in some way supported by the entities (artefacts, processes, social structures) we design, and whether and how design methods and entities promote the reconciliation of differing valuing systems.

Comment 27

This point is hard to dispute and indeed, remains a challenge for HCI. The dualism of the worksystem and the domain, following Dowell and Long (1989), is able to support the expression of such values for design purposes.

 

 

With respect to services, one key trend in recent years has been the emergence of availability and capability contracts (Terry et al., 2007; Tukker, 2004). These go beyond outsourcing to contractual arrangements where two or more partners work together to deliver services. In many contexts, this brings commercial and non-commercial organisations together with a potential for clashes of values, the most obvious being when public sector services interact with commercially oriented organisations. Furthermore, these arrangements rely on the service recipient providing facilities back to its supplier, with both parties acting as supplier and recipient of services.

In other contexts, service design is tackling the design of public services, another form and context that can bring together different actants and values, from the efficiency-driven targets beloved of bureaucrats and politicians to those concerned with retaining or promoting broad and difficult goals such as community cohesion and a sense of community participation (Parker and Heapy, 2006; Seddon, 2008). If HCI’s and service science’s frameworks cannot acknowledge the existence of values in relation to the effectiveness of activities and artefacts, they will be impoverished. Promoting values to a first-class entity within HCI and services frameworks opens them up to a richer set of considerations about human action. Recognising that values exist in relation to goals, activities, and desired and actual transformations adds more depth to the characterisation of values. They are not discussed context-free, with the ‘it all depends on’ caveat; they are scoped in reference to actions within a domain and an environment.

Comment 28

This is a good point and a hard one to dispute. For the difference between domain and (secondary) environment – see Comment 24.

3.1.3. Service demands a richer notion of people than the UCL Conception provides

Dowell and Long (1989) originally classed resource costs as being cognitive, conative (motivational), and affective (emotional).

Comment 29

To be precise, ‘conative costs’, according to Dowell and Long (1989), ‘relate to the repeated mental and physical actions and effort required’ by interactive behaviours, performed to achieve some goal. In this sense, they are ‘motivational’.

The later paper (Dowell and Long, 1998) dropped affective and conative resource costs. In our work, our ‘enrichment’ of the conception of people has been twofold. The first is the recognition that values play an important role in shaping goals and effectiveness criteria and in scoping wanted and unwanted resource costs of a service system (see Section 3.1.2). The second is that the notion of resource costs and quality measures can embrace conative, affective, and socio-cultural issues.

Comment 30

The ‘enrichment’ of the Dowell and Long (1989) conception of people (‘users’), claimed by Wild, may be one of application; but not of substance. ‘Values’ and ‘socio-cultural issues’ can both be represented in terms of domain transformations, enacted by the worksystem behaviours, if the representation is part of the design requirement – see also Comment 24.

 

The general argument for structures and behaviours can be expanded to cover emotional and socio-cultural issues. Work by luminaries such as Teasdale and Barnard (1993), Elster (2007), Hall (1959), Beck and Cowan (1996), and Cowan and Todorovic (2000) suggests phenomena akin to structures and behaviours. However, the nature of these additional structures and behaviours both differs from, and interrelates with, cognitive ones. They can probably be assumed to exist at different emergent levels of reality (Mingers, 2006) and to vary in their objective, intersubjective and subjective qualities (Heylighen, 1997; Mingers, 2006). For most purposes, activities can be seen to be simultaneously affective, cognitive and in some way socio-cultural. Whilst this makes the implementation of an engineering vision for HCI, as prescribed by Long (Long and Dowell, 1989), harder to achieve, it should lead to a broader understanding of how different kinds of structures and behaviours are involved in understanding the effectiveness of activities; for example, how a service deemed to achieve its goals efficiently could still fail experientially or socio-culturally, or incur physical environmental costs that cannot be sustained, and vice versa.

3.1.4. Different kinds of domain objects need to be represented and reasoned about

The domain concept provides a useful abstraction for the modelling of service systems. ‘What are the objects?’ and ‘What is their nature?’ are simple questions that force the analyst to grapple with what the service activities will be ‘doing’, allowing us to consider similarities and differences between different service contexts. Dowell and Long (1989) stressed that objects be described at an appropriate level, but give no indication of how to do this.

Comment 31

Dowell and Long (1989 and 1998) illustrate how domain object/attribute/states can be expressed. Worked examples of domain modelling can be found in Hill (2010); Stork (1999); and Cummaford (2007).
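As an illustrative aside, the object/attribute/state notation referred to above might be rendered in code roughly as follows. This is only a sketch: the class names and the ‘letter’ example are invented here for illustration, and are not taken from the cited worked examples.

```python
# Hypothetical sketch of a domain object/attribute/state representation.
# All names are illustrative, not drawn from the cited worked examples.

from dataclasses import dataclass, field


@dataclass
class Attribute:
    name: str
    states: list   # the possible states the attribute may take
    current: str   # the actual state
    desired: str   # the state required by the product goal


@dataclass
class DomainObject:
    name: str
    attributes: dict = field(default_factory=dict)

    def add(self, attr: Attribute) -> None:
        self.attributes[attr.name] = attr

    def transformed(self) -> bool:
        """True when every attribute has reached its desired state."""
        return all(a.current == a.desired for a in self.attributes.values())


# Example: a 'letter' object in a simple office-work domain
letter = DomainObject("letter")
letter.add(Attribute("content", ["drafted", "typed"],
                     current="drafted", desired="typed"))
letter.add(Attribute("dispatch", ["undispatched", "dispatched"],
                     current="undispatched", desired="dispatched"))

assert not letter.transformed()   # work remains to be done
```

On this reading, the work of the worksystem is precisely the transformation of attributes from their current to their desired states.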

 

 

A legitimate goal of an activity could be to affect emotional or socio-cultural ‘objects’; for example, entertainment, education and health services all alter personal or socio-cultural entities (Hill, 1977). In theory, the domain concept can generalise to cover alternative kinds of objects. Hence, there needs to be the recognition that ‘objects’ in the domain could have properties that are not just physical (e.g., material or energy), informational or knowledge-based. There is ambiguity about the status of social and affective issues in Long’s work. Green (1998) noted that Dowell and Long’s work in air traffic control covers social issues; but they are not considered as first-class concepts within the UCL Conception. Nor are they explicit in the discussion of the determinism boundary (Dowell and Long, 1989). In turn, affective and conative resource costs were covered in the first paper (Dowell and Long, 1989), but later dropped (see Dowell and Long, 1998).

Comment 32

Affective and conative costs, as part of an expression of worksystem performance, remain as part of the Dowell and Long conception (1989). They were not referenced in the 1998 paper, whose specific expression was oriented towards ‘cognitive engineering’, rather than HCI.

 

There is a requirement to be able to model, and reason about, emergent properties within a domain that are motivational, emotional, and socio-cultural.

Comment 33

Domain objects can be decomposed into cognitive, conative, and affective attribute states, as required by the work, performed by the worksystem. Socio-cultural attribute states might constitute higher levels of description of these states or indeed additional objects, as required by the product goals of the worksystem. User costs can also be decomposed. See also Long (2010) for the example of a computer games ‘fun’ interactive (work) system.
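A hedged sketch of the cost decomposition described in this comment follows. The three categories come from Dowell and Long (1989); the accounting scheme, function name and numbers are invented purely for illustration.

```python
# Illustrative sketch (invented scheme): decomposing user resource costs
# into the cognitive, conative and affective categories of
# Dowell and Long (1989).

user_costs = {
    "cognitive": 0.0,   # e.g. effort of planning and remembering
    "conative":  0.0,   # e.g. effort of repeated mental/physical actions
    "affective": 0.0,   # e.g. frustration or anxiety incurred
}

def charge(category: str, amount: float) -> None:
    """Accumulate a resource cost against one category."""
    user_costs[category] += amount

# A toy interaction: planning a route (cognitive), re-entering a postcode
# three times (conative), and the irritation that causes (affective).
charge("cognitive", 2.0)
for _ in range(3):
    charge("conative", 1.0)
charge("affective", 0.5)

total_cost = sum(user_costs.values())
assert total_cost == 5.5
```

Such a tally is one way of making user costs, alongside task quality, part of an explicit performance expression.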

Hence, we suggest that activities can aim to alter emotional and socio-cultural states, along with informational and physical objects. Despite views to the contrary, advice on how this could be achieved cannot be found in work on analysis patterns or domain modelling (e.g. Fowler, 1997; Sutcliffe, 2002), which share similar modelling concepts but give little consideration to the different ‘kinds’ of domain object altered and how they should therefore be modelled. The development of a computer-based representation of a domain is the creation of a representing world; however, these representing worlds are not grounded in a perspective that can handle different ‘kinds’ of object with any theoretical depth. They provide no theory to allow us to distinguish between situations where, for example, fun is an evaluative criterion alongside others (e.g. work applications); where it is the goal of the activity (games and theme parks); or where it is balanced against other factors such as knowledge gained (e.g. modern interactive museums).

3.1.5. Core system, service system

When modelling service domains, we need to represent the overlap between at least two different Human Activity Systems (HAS). Service systems are set up to support another human activity system; hence we posit a relationship between a core system and one or more service systems, and Fig. 2 illustrates the distinction. The darker boxes represent part of the core human activity system, with the service system represented by the lighter boxes. The actants, artefacts, and sometimes aspects of the domain of the core system become the domain of the support system. The support system will utilise its own tools and artefacts in support of the core system. It may, however, share artefacts, actants, and activities with the core system. Fleet planning and forecasting of assets are examples of such shared activities.

In reality, the resulting model may be more complex than the structure represented in Fig. 2. Many contemporary service contracts are complex, both in terms of the core domain (e.g. healthcare, armed forces) and in terms of the contractual arrangements: involving multiple partners, long time scales and complex payment and reward mechanisms (Terry et al., 2007). There may also be a chain of service systems in a series of core/support relationships; the service support may draw on other service systems such as IT or payroll.

Comment 34

The reader should be reminded at this point that the Dowell and Long (1989 and 1998) conception is intended to specify design problems and to support the search for design solutions. It is not, then, simply a general, all-purpose representational conception. The relationship between core and support systems needs to be specified and tested against design requirements and the possibilities of their satisfaction.

Fig. 2. Overlapping systems (core and service).

The distinction between core and support system could be modelled by including a rich set of ‘parallel’ enabling tasks (Whitefield et al., 1993), and by expanding the range of (enabling) tools and domain objects. We argue that, for most significant modelling exercises, the use of the enabling task concept masks the importance of distinguishing a relationship between different systems undertaking different roles. The notion of core and service domains allows the representation of issues such as clashes in values and visibility, in reference to different kinds of activity (e.g. physical maintenance, forecasting, and education), artefacts, and domain changes. However, modelling approaches are often used by different modellers in different ways (Melão and Pidd, 2000); the enabling tasks approach may also be suited to illustrating differences between self-services and services provided by another party.

3.1.6. Going beyond engineering

Whilst the HCI ’89 keynote (Long and Dowell, 1989) offers alternative perspectives, such as HCI as craft and as applied science, and acknowledges that HCI will provide knowledge embodied as principles, heuristics, and guidelines, Long’s work is probably most heavily associated with the promotion of an engineering perspective on the development of IT artefacts (e.g. Dowell and Long, 1989, 1998; Lim and Long, 1994) and with the handling of human factors from an engineering perspective, such as ‘specify and implement’ practices and the development of engineering principles (Long and Dowell, 1989). It remains an open question whether ‘engineering’ principles can be widely developed. There are few in existence (see Cummaford, 2000; Johnson et al., 2000), and their development takes considerable resources.

Comment 35

Limitations on the development of design principles can be found in Dowell and Long (1989), Figure 2: A Classification Space for ‘Design Disciplines’ and in particular, for an Engineering Discipline of Human Factors.

 

As HCI has expanded its focus to embrace social, hedonic, and experiential concerns (Hassenzahl et al., 2000; McCarthy and Wright, 2004) and focussed on the support of community and family activities (Harper et al., 2008), this engineering perspective has begun to look in some ways problematic (e.g. Hassenzahl et al., 2001; Sengers, 2003). There is much to be said for an engineering perspective, and none of what follows should be taken as a dismissal of the power of engineering approaches. However, what we need to be aware of are situations where the overall success of service design and delivery will depend on the interplay of factors that are quantifiable and repeatable, as well as personal, experiential and socio-cultural factors that may be more tacit, subjective and variable.

Comment 36

If social, hedonic and experiential concerns can be specified, either as part of the domain or of the worksystem, or both, the Dowell and Long conception (1989 and 1998) can express any associated design problem (within the limits set out in Figure 2(1989)) and support the search for a design solution – see Long (2010) for the example of a computer games ‘fun’ interactive (work)system. The reverse holds, if the social, hedonic and experiential concerns cannot be specified. The latter is assumed to be the case for ‘tacit, subjective and variable’ concerns.

 

A service or IT artefact that makes the required transformations effectively and efficiently can still be considered a failure if the customer, whether individual or organisational, has been subject to issues such as: over- or under-inclusion in decision making; reduced cohesiveness of social structures; reduced visibility; or recipients left feeling belittled, redundant, or deskilled by the service professionals. In addition, the failure to address such softer issues can lead to a breakdown in the ability to obtain data about traditional quality measures and processes: data may not be recorded and passed on, equipment may be mishandled, and shared assets may be withheld. Conversely, a service recipient can be made to feel welcomed, at ease, and involved, but without technical competence in executing the service, the service could still fail: for example, well-treated but misdiagnosed patients, or service recipients involved in decision-making processes that fail to fix faults in equipment.

Comment 37

If the ‘softer issues’ cited cannot be specified, in terms of the Dowell and Long conception (1989), the associated design problem can only be addressed by experiential (craft) design knowledge, using ‘implement and test’ design practices (see Figure 2).

 

When we move towards services, many services are carried out in distressing and ultimately unwanted circumstances. A designer or researcher coming in and declaring either an old-school (HCI as engineering of efficiency, errors and measurement of satisfaction) or new-school (HCI as promotion of hedonic measures) approach should receive short shrift. Assessments about the effectiveness of such services need to balance this factor against other measures of effectiveness. Questions such as these cannot be handled within a single perspective dominated by engineering and work concerns (i.e. Dowell and Long, 1989, 1998; Long and Dowell, 1989),

Comment 38

Hedonic issues, which can be specified, can be conceptualized, following Dowell and Long (1989). See also Comment 36.


but equally someone viewing interaction as maximising hedonic issues would also fail. Preventing the decline of an emotional state – which could be a laudable goal of such service types (e.g. counselling services) – is a subtly different proposition from the promotion of a hedonic experience. However, we maintain that much of the spirit of the concepts outlined in Dowell and Long’s work generalises, particularly the explicit notion of structures and behaviours and the implicit notion of trade-offs in the design of systems.

Comment 39

The trade-off in design between ‘Task Quality’ and ‘User (Resource) Costs’ is recognized throughout Dowell and Long (1989 and 1998).

3.2. Illustrative examples

Within this section, we provide illustrative examples. The first is a simple illustration taken from a novel by Gemmel; the second presents an initial ABFS for a family’s transport choices, given the particular value set they wish to enact.

3.2.1. A literary example

An implicit issue that can be drawn out of the UCL Conception’s characterisation of task qualities and resource costs is that trade-offs can be made between different system configurations. The inclusion of a wider set of costs allows the representation of wider trade-offs, both as design options within a single IWS-domain coupling and in relation to different classes of application domains. In some situations, tasks/goals are harder to learn/achieve, but provide greater enjoyment (e.g. games); others are more efficient in terms of achieving the goal, but have a different ‘affective profile’. An IT artefact could be made more efficient, but at great cost in terms of the processing power and memory it needs; thus additional tasks are allocated to the user in the IWS.

A literary example of these trade-offs is provided by Gemmel: “ ‘A wagon and a single driver would be more effective, surely?’ observed Skilgannon… Landis smiled… ‘In the main, however, they just bring food. You speak of effectiveness. Yes, a wagon would bring more supplies, more swiftly, with considerable economy of effort. It would not, though, encourage a sense of community, of mutual caring.’ (Gemmel, 2004, p. 52)”. Within this example, delivery of meals to the loggers in the story is done by the women of the local town, some married to the loggers, some not. Leaving aside the somewhat old-fashioned gender roles, this arrangement – whilst less efficient than one man and a wagon – is continued because it promotes social cohesion: the presence of the women means fewer fights between loggers occur, and new relationships are formed. Thus, the domain can be seen to embrace not only the physical changes of food preparation, but also affective and social states. Within the book, this form of catering arrangement leads to a key relationship between two characters, one that shapes the drive and motivation of one of those characters throughout the remainder of the story. These forms of decisions and trade-offs surround and permeate many activities, both social and work based (see Wakkary, 2005; Wild et al., 2003). With a wider set of quality assessment criteria and resources to consider, we can investigate the trade-offs that can be made when setting up a service system, rather than the narrower conception of IT effectiveness as just physical and cognitive changes and costs.

Comment 40

Dowell and Long (1989 and 1998) set no limits on the ‘set of assessment criteria, objectives and resources to consider’. The limits are set by the ‘design problem’, which their conception is used to express, that is, the difference between actual and desired performance. In the case cited by Wild, if the woman-wagons interactive worksystem performed either ‘food delivery work’ (domain object: food; attribute: delivery; states: delivered/undelivered) or ‘social cohesion work’ (domain object: social cohesion; attribute: promotion; states: promoted/unpromoted) or both types of work, either with a lower Task Quality or higher Resource (cognitive, conative or affective) Costs than desired, or both, then a design problem would exist. The Dowell and Long conception would be appropriate for expressing such a design problem, as illustrated (see also Hill’s paper (2010) on the issue). No limits on domain objects are set, as such. It suffices that they can be specified as physical and abstract object/attribute/states, whose transformation constitutes the work of some worksystem at some level of Task Quality and User Costs. See also Comment 36.
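This comment’s example can be put in code. The object, attribute and state names below follow the comment itself; the performance numbers, cost units and the equal weighting of the two types of work are invented purely for illustration.

```python
# Sketch of Comment 40's example: a design problem as the gap between
# actual and desired performance. Numbers and weighting are illustrative.

# Domain objects as (object, attribute) -> state mappings
actual = {("food", "delivery"): "delivered",
          ("social cohesion", "promotion"): "unpromoted"}
desired = {("food", "delivery"): "delivered",
           ("social cohesion", "promotion"): "promoted"}

# Task Quality: proportion of object/attribute pairs in their desired state
task_quality = sum(actual[k] == desired[k] for k in desired) / len(desired)

# Resource costs incurred by the worksystem (illustrative units)
resource_costs = {"cognitive": 1.0, "conative": 4.0, "affective": 0.5}

# A design problem exists when actual performance falls short of desired
# performance; here the social cohesion work is not being achieved.
desired_task_quality = 1.0
design_problem = task_quality < desired_task_quality
assert design_problem
```

The same comparison could equally be triggered by resource costs exceeding their desired levels, rather than by a shortfall in task quality.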

3.2.2. Family travel services

Our next example uses the following scenario to drive ABFS models of services:

The Bloggs family live in Sandford, 8 or so miles west of the city that the parents work in. The parents work at a science park that is well served with buses to the city centre, but there are no direct buses from their village to their workplace. They chose the village for the many benefits village life can bring, especially for their children (e.g., low crime, sense of community, proximity to the countryside, air quality, good local schools), even though it would entail extra travelling and decisions about travel planning. The youngest child attends the Sandford village junior school; however, the nearest high school is a village college about 5 miles away in Fordham, in the opposite direction from the city they work in, although the college is served by a school bus. They consider themselves environmentally aware and community focussed, and wish to support their children in their exploration of a range of hobbies and interests. However, as busy professionals they are aware that the greenest transport options – via bus services – conflict with the time constraints of their daily lives and parental responsibilities. Sandford includes some local amenities, such as a post office/grocery, hairdressers, a pub, and several private businesses. Services such as general practitioner, dentist and vet are dispersed within local villages. The closest town has all three, but is in a different county and health authority. The family choose a neighbouring village that has all three, and try to group routine visits for all the family. The nearest supermarkets are in several towns between 6 and 9 miles away. The nearest rail station for London via fast train is 6 miles away, or a £20 round-trip taxi journey. They live within 20 miles of two regional airports.

The core objects of this domain are people, things, locations, deadlines (both hard and soft) and schedule. Transport artefacts include bikes, cars, buses, and delivery vans. Additional states to be changed or maintained include the emotional security of their children about travel, and the social fabric of the village (which is currently healthy, with around 30 clubs, societies, etc.). The actants include family members, friends of the family, and those providing transport-related services. The generic goal for their transport services is to transfer family members and, where appropriate, friends and things (possessions, new purchases) to different locations.

They have a number of high-level values and goals that they bring to their choice of transport and service choices:

  • (A) Commitment to being carbon neutral and reducing extraneous journeys.
  • (B) Supporting local businesses (biofuel company 10 miles away but this is the nearest local supplier).
  • (C) Commitment to supporting their local and neighbouring village communities.
  • (D) Commitment to ethical or state owned finance providers.
  • (E) Reduce time costs of functional activities (travel to work).
  • (F) Loyalty to a supermarket chain from the region they grew up in.

We examine, using ABFS constructs, the following transport service activities:

  1. Use of supermarket delivery service, with top-up in the local store.
  2. Eldest child catches school bus, rather than being dropped at school.
  3. Youngest child walked to school by the eldest child.
  4. Travel to work in the car, rather than cycle (40 min on a high-speed road without cycle lanes) or bus (two journeys, 1.5 h travel time).
  5. Use of biofuel (despite an 8-mile diversion for pick-up, and the need for facilities to store spare fuel), and carbon offsetting services.
  6. Support the village driving scheme (a local scheme to offer lifts to people in the village without a car).
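As an illustrative aside, the relationship between the family’s value set and these service activities can be sketched in code. The value-to-option mapping below is a hypothetical reading of the scenario, not part of the ABFS or the paper.

```python
# Illustrative sketch: which of the family's values (A-F) each transport
# service activity (1-6) enacts. The mapping is a hypothetical reading
# of the scenario, invented here for illustration.

values = {
    "A": "carbon neutrality, fewer extraneous journeys",
    "B": "support local businesses",
    "C": "support local and neighbouring communities",
    "D": "ethical or state-owned finance providers",
    "E": "reduce time costs of functional activities",
    "F": "loyalty to regional supermarket chain",
}

options_enact = {
    1: {"A", "E", "F"},   # supermarket delivery with local top-up
    2: {"A", "C"},        # eldest child takes the school bus
    3: {"A", "C"},        # youngest child walked by the eldest
    4: {"E"},             # car to work, trading CO2 against time
    5: {"A", "B"},        # biofuel and carbon offsetting
    6: {"C"},             # village driving scheme
}

# Values not enacted by any of the listed options (here, only D)
unenacted = set(values) - set().union(*options_enact.values())
assert unenacted == {"D"}
```

Even this toy mapping makes the cross-activity role of values visible: a single value (e.g. A) can shape several service choices, while another (D) may be left unsupported by the options examined.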

These particular travel examples provide a broad demonstration that ABFS concepts are applicable to high-level models of Service activities.

Comment 41

It is worth noting that elements of the Dowell and Long (1989) conception can be found in this travel illustration of the application of ABFS concepts, for example, domain, effectiveness, etc. However, many are omitted, for example, Task Quality, product goal, etc. Other concepts seem to have a different specification, for example, Resource Costs, trade-offs, etc. Lastly, the design problem remains unspecified, either as Task Quality or as User (Resource) Costs or both – see also Comments 39 and 40 for associated issues.

They also demonstrate a close relationship between values held and high-level cross-activity goals. Future work will need to clarify the nature of these. In previous work we suggested that such high-level goals could be represented within a heterarchical goal complex (Wild et al., 2004; see also Diaper, 2004). However, this could be a convenient representation for what are distinct cognitive representations (goals vs. values). At this level of representation, socio-cultural behaviours can be considered, but making the reasoning more formal will need greater theorisation. We outline some sources for this in the next section (see Table 3).

Domain (all options): people, things, locations, deadlines (both hard and soft) and schedule; maintain positive affective states for children; maintain/support village socio-cultural structures.

Option 1 – Supermarket delivery service, with top-up in the local store
  Activities: make list, make order, receive items
  Goals: new possessions transported from store to home
  Actants: parent
  Artefacts: computer, website, delivery van
  Effectiveness: shared shopping list; required items delivered on time
  S&B socio-cultural: subsistence, association, protection
  Benefit: reduce travel and shopping time
  Values: support regional supermarket; support local store; reduce family CO2
  Costs: delivery charge
  Trade-offs: local store is pricey; lack of coordination could increase traffic to the village for multiple deliveries; less chance to teach children about subsistence and economics; less chance for bargain hunting

Options 2 & 3 – School bus and walking to school
  Activities: walk to junior school, catch bus from there
  Goals: both children safely transported to school
  Actants: Child A, Child B, bus driver
  Artefacts: timetable, bus stops, bus, school, college, roads, etc.
  Effectiveness: child regularly at school/college on time
  S&B socio-cultural: territoriality, temporality, learning, protective, association
  Benefit: reduce travel time for parents
  Values: pride in children’s independence, maturity and behaviour; support local school bus service; encourage independence in children; children travel with trusted companions; reduce family CO2
  Costs: less time spent with children on the journey
  Trade-offs: 50 miles a week in the car vs. more time to talk with Child A

Option 4 – Travel to work by car
  Activities: drive, navigate, park
  Goals: Parents A and B transported to and from work with possessions
  Actants: Parents A and B
  Artefacts: car, road system, car park
  Effectiveness: journey regularly 15–20 min and no more
  S&B socio-cultural: association, subsistence (work)
  Benefit: reduce travel time for parents
  Values: reduce personal CO2; reduce temporal costs of functional activities
  Costs: car running and depreciation costs; CO2 costs; need to build exercise into routine
  Trade-offs: adds to general traffic levels; cycle ride could provide exercise; risk of cancellation of bus services due to fewer customers

Option 5 – Biofuel and carbon offsetting
  Activities: calculate mileage, make payment; drive to supplier, fill tank and barrel
  Goals: CO2 for car offset
  Actants: Parents A and B, service provider, offset projects
  Artefacts: (1) computer, website, bank account; (2) fuel tank, fuel pump, barrel
  Effectiveness: CO2 offset, but extraneous journeys are not encouraged; right fuel at the right time
  S&B socio-cultural: subsistence, exploitation
  Benefit: reduction of guilt about the greater car usage village life can entail; support for economic communities that are beneficiaries of offsetting funds
  Values: commitment to being carbon neutral; support ‘local’ business
  Costs: financial cost of offset fee; extra mileage to pick up fuel; registration cost for fuel supplier
  Trade-offs: higher rate of fuel filter changes needed; extra travelling to pick up biofuel; biofuel less suitable for winter use

Option 6 – Village driving scheme
  Activities: pick up person, plan route, plan time
  Goals: Person X transported to location at time Y
  Actants: Parents A and B, service recipient, scheme organiser
  Artefacts: car, road system, car park, booking forms, telephone
  Effectiveness: Person X transported to location at time Y
  S&B socio-cultural: subsistence, association, interaction
  Benefit: reduction of guilt about the greater car usage village life can entail; support community, meet new people
  Values: support community
  Costs: car depreciation costs; CO2 costs
  Trade-offs: combine lifts with other activities

Table 3: ABFS inspired representation of family travel services.
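Each column of Table 3 can be read as a record over the ABFS row labels. The following sketch is purely illustrative: the `ServiceOption` class and its field names are our own hypothetical rendering of the table's rows, not part of the published framework; the example values are taken from the Option 6 (lift-sharing) column.

```python
# Hypothetical sketch: one ABFS service option expressed as a record
# whose fields mirror the row labels of Table 3. The class is an
# illustration only, not part of the published ABFS framework.
from dataclasses import dataclass, field

@dataclass
class ServiceOption:
    name: str
    activities: list[str]
    goals: list[str]
    actants: list[str]
    artefacts: list[str]
    effectiveness: list[str]
    benefits: list[str] = field(default_factory=list)
    values: list[str] = field(default_factory=list)
    costs: list[str] = field(default_factory=list)
    trade_offs: list[str] = field(default_factory=list)

# Values below are transcribed from the Option 6 column of Table 3.
option6 = ServiceOption(
    name="Option 6: lift sharing",
    activities=["Pick up person", "plan route", "plan time"],
    goals=["Person X transported to location at time Y"],
    actants=["Parent A and B", "service recipient", "scheme organiser"],
    artefacts=["car", "road system", "car park", "booking forms", "telephone"],
    effectiveness=["Person X transported to location at time Y"],
    benefits=["Support community", "meet new people"],
    values=["Support community"],
    costs=["Car depreciation costs", "CO2 costs"],
    trade_offs=["Combine lifts with other activities"],
)
```

One attraction of such a rendering is that the affective and socio-cultural rows (benefits, values, trade-offs) sit alongside the classical work-system rows, making the extra resource costs discussed in the text explicit rather than implicit.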

3.3. Towards formalisation of the additional elements

Within the preceding examples, our use of the additional concepts of the ABFS has been relatively informal, despite our hints that relevant work could be drawn upon. The socio-cultural structures and behaviours were informally drawn from Hall’s work but, without explanation, they can seem rather isolated. Table 4 presents additional elements to be used in future modelling efforts. They are put forward as a checklist of options, rather than as a validated theoretical classification of resources. Several can co-occur, and others can be broken down in more detail. In the current state of evolution of the ABFS, the dimensions are put forward as a checklist of qualities and/or resource costs. They should be treated as heuristics and, despite their pedigree, should not be considered validated principles for the examination of affective and socio-cultural issues.

Type Description
Conative
  • Physio – sensory experiences
  • Socio – collective or shared experiences
  • Psycho – individual enjoyment and satisfaction
  • Ideo – intellectual and aesthetic experiences
Affective
Evaluative emotions
  • Shame – negative belief about one’s own character
  • Contempt and hatred – negative beliefs about another’s character: the former that they are inferior, the latter that they are evil
  • Guilt – negative belief about one’s own actions
  • Anger – negative belief about another’s actions towards oneself
  • Cartesian indignation – negative belief about another’s actions towards others
  • Pridefulness – positive belief about one’s own character
  • Liking – positive belief about another’s character
  • Pride – positive belief about one’s own actions
  • Gratitude – positive belief about another’s actions towards oneself
  • Admiration – positive belief about another’s actions towards a third party
State emotions
  • Envy – caused by the deserved good of someone else
  • Aristotelian indignation – caused by the undeserved good of someone else
  • Resentment – caused by the reversal of a prestige hierarchy
  • Sympathy – caused by the deserved good of someone else
  • Pity – caused by the undeserved bad of someone else
  • Malice – caused by the undeserved bad of someone else
  • Gloating – caused by the deserved bad of someone else
Socio-cultural
  • Interaction – interactional
  • Association – organisational
  • Subsistence – economic
  • Bisexuality – sexual
  • Territoriality – territorial
  • Temporality – temporal
  • Learning – instructional
  • Play – recreational
  • Defence – protective
  • Exploitation – exploitational

Table 4: A checklist of factors for conative, affective and socio-cultural qualities and resources.

These checklists of items for conative, affective and socio-cultural resources draw on work by Tiger (2000), Hassenzahl (2000), Elster (2007), and Hall (1959). To further scope conative goals, we use Tiger’s (2000) ‘imperfect categories’ to outline four major groups of motivators for products and services. For affective resources and states, we turn to Elster, a social scientist interested in the nature of explanation in the social sciences. He notes that affective states are distinguished from visceral reactions (e.g. pain, hunger) in that they have cognitive antecedents; they are ‘states’ in that they represent something; they are generally associated with physiological arousal and expression; and they are associated with action tendencies, even if those actions are not carried out. We use his classification (pp. 145–161) to scope future consideration of such resources when they are put at risk.

Hall (1959) introduced ten dimensions of culture, along with additional considerations such as the difference between formal, informal, and technical systems. In Hall’s text the scheme is represented as a 10 by 10 grid (each category crossed with the adjective form of each category). Each cell of the matrix can be subdivided into the same ten dimensions. Thus the community defence interaction could contain factors such as ‘the temporal aspects of community defences’ or ‘economic aspects of community defences’ (Hall, 1959, p. 193). This kind of arrangement can be used to produce a huge array of socio-cultural issues, which could serve as the objects being changed by a service system, or as resources to be assessed in effectiveness judgements. So, for example, community defences could be the domain whilst temporal and economic aspects could be evaluation factors.
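The combinatorial scale of Hall's arrangement can be illustrated with a short enumeration. This is a hypothetical sketch only: the `DIMENSIONS` mapping reuses the socio-cultural category/adjective pairs from the checklist in Table 4, and the cell-labelling convention is ours.

```python
# Hypothetical sketch: enumerating Hall's 10 x 10 grid of cultural
# dimensions as candidate socio-cultural analysis factors. Category
# names (keys) and adjective forms (values) follow Table 4; pairing a
# "domain" dimension with an "aspect" dimension yields one grid cell.
from itertools import product

DIMENSIONS = {
    "interaction": "interactional",
    "association": "organisational",
    "subsistence": "economic",
    "bisexuality": "sexual",
    "territoriality": "territorial",
    "temporality": "temporal",
    "learning": "instructional",
    "play": "recreational",
    "defence": "protective",
    "exploitation": "exploitational",
}

def grid_cells():
    """Yield a label for every cell of the 10 x 10 grid."""
    for (domain, _), (_, aspect) in product(DIMENSIONS.items(), repeat=2):
        yield f"{aspect} aspects of {domain}"

cells = list(grid_cells())
assert len(cells) == 100  # 10 x 10 combinations
# Cells include e.g. "temporal aspects of defence" and
# "economic aspects of defence", echoing Hall's examples.
```

Even this flat enumeration yields 100 candidate factors; subdividing each cell by the same ten dimensions again, as Hall suggests, would yield 1000, which underlines why the text treats the grid as a generator of issues to select from rather than a complete analysis scheme.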

4. Summary and conclusions

Earlier, the paper observed that HCI could offer the emerging field of service science: (1) tools for enacting a user centred ethos; and (2) concepts and frameworks for handling the nature of services. In the context of a special issue on Long’s work, the paper explored the influence, scope and utility of the UCL Conception in relation to our own work in relating strands of services research and in modelling service systems. The conception offers a set of abstractions that can be built upon when moving HCI towards dealing with services, service systems, and service science. Set in its historical context, the conception offered a set of abstractions that for many brought clarity to the field; but when it is considered in relation to services (and contemporary HCI concerns), a case can often be made for a more general form of the concepts. However, by generalising the concepts to embrace wider forms of resource costs and domain objects, we increase the variability of the factors being examined and the need to make assessments of subjective and intersubjective factors. This increase in variability would take us further away from being able to ‘specify then implement’ in the manner associated with Long’s vision for an engineering approach. Either we need to consider in more depth what it would mean to ‘engineer’ services, including experiences, or we need to work out the relationships between the different perspectives on development.

Comment 42

This point is well taken. Dowell and Long’s (1989) conception is intended to express design problems, appropriate for the formulation of HCI design principles, supporting ‘specify then implement’ design practices. However, they nowhere exclude the solving of such design problems by other types of design knowledge and practice. Indeed, they argue that the effectiveness of different types of design knowledge can only be tested if there is a consensus on the design problem(s) they claim to solve.

 

Long’s position on HCI as an engineering discipline did not preclude the development of heuristics (e.g., Smith et al., 1997). The ABFS as presented still embodies considerable craft understanding; the work presented in Section 3.3 could be further developed to provide more than a checklist of items, but this is a major challenge in itself.

Bødker (2006) was keen to portray HCI as being composed of first, second, and third waves. The trouble with Bødker’s view is that existing work within ‘1st wave theories’ is capable of embracing aesthetics, cognition, emotion, and culture within the same framework (Byrne et al., 2004; Lindgaard and Whitfield, 2004; Teasdale and Barnard, 1993), along with route maps for the generalisation of such a framework (Barnard et al., 2000). Similarly, one piece of Long’s work (Long and Dowell, 1989) demonstrated that the same concepts (e.g. work system, domain and discipline) can be viewed through different perspectives (craft, applied science and engineering). We have argued that his concepts of structures and behaviours generalise to different task qualities and resource costs, but the criteria associated with an engineering perspective may not hold for the new resource costs we have suggested in the ABFS. Indeed, even colloquial use of the terms ‘emotion engineering’ and ‘social engineering’ sends shudders down the spines of holders of certain value sets. Whilst work by Hassenzahl and colleagues demonstrates that robust statistics can be found about relationships between pragmatic and hedonic factors, these assessments are still subjective in nature, and less amenable to the engineering perspective prescribed by many for HCI (see also Hassenzahl et al., 2001; Sengers, 2003).

In the introduction, we mentioned that HCI has two elements to offer: (1) the user centred mindset and techniques; and (2) concepts and frameworks for understanding the nature of services. The major focus of this paper has been to explore one of these frameworks – the UCL Conception – as an important antecedent to our own work in the field. We now consider, in brief, the first element. Benbasat and Zmud (2003) argued that the related field of Information Systems had started to dwell on peripheral issues that move the field away from its core focus on the development of IT artefacts. If Long and Dowell’s (1989) definition of HCI as ‘the design of humans and computers interacting’ (p. 9), and other such technology centred definitions of HCI, remain in place, then services can be considered one of these peripheral issues, and interaction between HCI and service science would therefore move HCI away from its core concern. HCI’s textbook consensus is still concerned with the user centred design of computer-based technology. Services are not technology, although they are frequently enabled, enacted by, and performed through IT artefacts across varying channels (van Dijk et al., 2007). If we hold that HCI is about the human-centric design of activities and experiences – whether involving IT artefacts or not – then the interaction between the disciplines can continue. Definitions of HCI tend to stress both aspects, but perhaps HCI’s biggest gift to other fields is a user centred focus that is generic enough to apply to activities and experiences as well as artefacts.

HCI could make both substantive and methodological contributions to the analysis and design of services. Long-standing methodological principles such as: early and continual focus on people and their activities; multidisciplinary and integrated design teams; and iterative design processes (Gould and Lewis, 1985) are applicable to the design of services as well as to computational and non-computational artefacts. HCI can offer approaches such as user participation, prototyping, conceptual design, and a range of approaches that provide sophisticated analysis of tasks/activities and their informational, cognitive, emotional, and social consequences. Characterisations of services overlap with participatory, experience and activity oriented approaches for analysing and designing computing applications and activities (e.g. Hassenzahl et al., 2000; McCarthy and Wright, 2004; Schrepp et al., 2006). Service design methodologies developed by designers borrow from, overlap with, or complement HCI. Parker and Heapy (2006), for example, recommend the use of prototypes, personas, and the measurement of service experience.

This of course raises the question of whether HCI’s user centred focus is unique. If HCI does move into the Service Science arena, alongside its existing interaction with, and embrace of, disciplines such as Design Engineering and Sociology, HCI will come across Services Marketing and Services Operations, both of which could make a claim on user centredness. In the case of Services Marketing, Fisk (2008), a renowned services marketer (who has worked with various disciplines, e.g. Fisk et al., 2008), characterises computer science in a pejorative and stereotyped fashion for its treatment of people as ‘users’, while seeming unaware of HCI or of its antecedents and relatives such as human factors, design, software engineering, and participatory design. In a similar vein, Ng et al. (2009, p. 379) claim that Services Marketing is unique in the service science arena as the only discipline that considers the customer as being within the service system, seemingly unaware of HCI conceptions such as Long’s. So much for theory; what about practice? If Seigel and Dray’s observation is representative of Marketing – ‘more time would be spent interviewing users about what they liked and disliked about the site than in observing and analyzing their task behaviour’ (Seigel and Dray, 2001, p. 20) – then there is room for co-existence: each approach focuses on different aspects, with HCI’s focus on actual behaviours, not just opinions and perceptions. Other HCI practitioner papers support this view (e.g. Rohrer et al., 2008), and Cockton’s exploration of worth (Cockton, 2007) has drawn upon marketing techniques, as has van Dijk’s work on service channels (2008, 2007).

With regard to Service Operations, Wright and Mechling’s (2002) study suggested that Service Operations practitioners saw three key issues: (1) determining how to utilise resources most effectively; (2) monitoring and measuring quality of services; and (3) predicting future events, conditions, and customer demand – which suggests many crossovers with HCI’s concerns and ethos. However, Seddon et al. (2009) characterise Service Operations’ approach to services as dominated by a Lean Services agenda. This relies on a manufacturing analogy for services, centred on standardisation, control, and viewing all services operations as a form of demand management. They go on to note that a key part of the Lean Services approach is ‘treating failure demand as though it is just more work to be done is to fail to see a powerful economic lever’ (p. 8). Service Operations’ user centred focus will generally apply too late in the service design and execution lifecycle to remedy the cause(s) of failure demand. HCI should be able to provide tractable design processes that prevent the bulk of failure demands; hence HCI could help bridge between Service Marketing and Service Operations. We take heart from Carroll’s assertion that ‘the continuing synthesis of disparate conceptions and approaches to science and practice in HCI has produced a dramatic example of how different epistemologies and paradigms can be reconciled and integrated’ (Carroll, 2009).

Acknowledgements

The writing of this paper has been supported by the EPSRC/BAE Systems funded S4T (Solution Service Support: Strategy and Transformation) programme. The three reviewers provided great stimulus to clarify the concepts within this paper; I hope I have not put them off considering services themselves. Attendees at the CIRP IPS2 conference and at the 1st and 2nd HCI and Services workshops at HCI 2008 and 2009 provided much useful discussion and insight, alongside colleagues within the Institute for Manufacturing and the Engineering Design Centre at Cambridge. The usual caveats about misunderstandings apply to this work.

References

Alonso-Rasgado, T., Thompson, G., Elfström, B.-O., 2004. The design of functional (total care) products. Journal of Engineering Design 15 (6), 515–540.

Aurich, J.C., Schweitzer, E., Fuchs, C., 2007. Life-Cycle Oriented Planning of Industrial Product-Service Systems. ICMR 2007, Leicester, De Montfort/Inderscience. pp. 270–274.

Barnard, P.J., May, J., Duke, D., Duce, D.A., 2000. Systems, interactions and macrotheory. ACM Transactions on Computer–Human Interaction 7, 222–262.

Beck, D., Cowan, C., 1996. Spiral Dynamics: Mastering Values, Leadership, and
Change. Blackwell Business, London.

Bekker, M., Long, J., 2000. User Involvement in the Design of Human-Computer
Interactions: Some Similarities and Differences between Design Approaches
HCI’2000 Sunderland, 4–8 September. Springer, pp. 135–148.

Benbasat, I., Zmud, R.W., 2003. The identity crisis within the is discipline: defining and communicating the discipline’s core properties. MIS Quarterly 27 (2), 183–
194.

Bennett, K., Layzell, P., Budgen, D., Brereton, P., Macaulay, L., Munro, M., 2000.
Service-based software: the future for flexible software. APSEC 2000, pp. 214–
221.

Bitner, M.J., Ostrom, A.L., Morgan, F.N., 2007. Service blueprinting: a practical tool
for service innovation. In: Innovation in Services Conference, Berkeley, April 26–
28.

Bødker, S., 2006. When second wave HCI meets third wave challenges.
NORDICHI’06, ACM, Oslo, pp. 1–8.

Byrne, R.W., Barnard, P.J., Davidson, I., Janik, V.M., McGrew, W.C., Miklósi, Á.,
Wiessner, P., 2004. Understanding culture across species. Trends in Cognitive
Sciences 8 (8), 341–346.

Carroll, J.M., 2009. Human Computer Interaction (HCI). <http://www.interactiondesign.org/encyclopedia/human_computer_interaction_hci.html>.

Checkland, P.B., Poulter, J., 2006. Learning for Action. Wiley, Chichester.

Chen, J.V., Yen, D.C., Chen, K., 2009. The acceptance and diffusion of the innovative
smart phone use: A case study of a delivery service company in logistics.
Information & Management 46 (4), 241–248.

Cockton, G., 2006. Designing worth is worth designing. NORDICHI 2006. ACM, Oslo,
pp. 165–174.

Cockton, G., 2007. Creating value by design? Engineering Design Centre Seminar
Series, Cambridge.

Cockton, G. Virtues and Potentials Can Guide Design Choices of Qualities and
Values. University of Northumbria, Newcastle, unpublished manuscript.

Costanza, R., d’Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O’Neill, R.V., Paruelo, J., Raskin, R.G., Sutton, P., van den Belt, M., 1997. The value of the world’s ecosystem services and natural capital. Nature 387
(6630), 253–260.

Cottam, H., Leadbeater, C., 2004. Health: Co-creating Services. Report, RED: Design Council, London.

Cowan, C.C., Todorovic, N., 2000. Spiral dynamics: the layers of human values in strategy. Strategy and Leadership 28 (1), 4–12.

Cummaford, S., 2000. Validating effective design knowledge for re-use: HCI
engineering design principles, vol. 2. CHI ’00. ACM Press, The Hague, pp. 71–72.

Cyr, D., Hassanein, K., Head, M., Ivanov, A., 2007. The role of social presence in establishing loyalty in e-Service environments. Interacting with Computers 19
(1), 43–56.

Diaper, D., 2004. Understanding task analysis for human–computer interaction. In: Diaper, D., Stanton, N.A. (Eds.), The Handbook of Task Analysis for HCI. LEA, Mahwah, pp. 5–47.

Dowell, J., Long, J.B., 1989. Towards a conception for an engineering discipline of human factors. Ergonomics 32 (11), 1513–1535.

Dowell, J., Long, J.B., 1998. Conception of the cognitive engineering design problem. Ergonomics 41 (2), 126–139.

Dreyfuss, H., 1955/2004. Designing for People. Allworth Press.

Elster, J., 2007. Explaining Social Behavior. Cambridge University Press, Cambridge.

Fisk, R., 2008. Service Science, the Elephant and the Blind Men, Who’s Who?
University of Strathclyde, Glasgow.

Fisk, R.P., Brown, S.W., Bitner, M.J., 1993. Tracking the evolution of the services
marketing literature. Journal of Retailing 69 (1), 61–103.

Fisk, R.P., Grove, S.J., John, J., 2008. Interactive Services Marketing, 3rd ed. Houghton
Mifflin, Boston.

Fließ, S., Kleinaltenkamp, M., 2004. Blueprinting the service company: managing
service processes efficiently. Journal of Business Research 57 (4), 392–404.

Flint, D.J., 2006. Innovation, symbolic interaction and customer valuing: thoughts stemming from a service-dominant logic of marketing. Marketing Theory 6 (3),
349–362.

Fowler, M., 1997. Analysis Patterns, Reusable Object Models. Addison-Wesley,
Menlo Park.

Gemmel, D., 2004. The Swords of Night and Day. Corgi Books, London.

Goedkoop, M., van Halen, C., te Riele, H., Rommens, P., 1999. Product Service
Systems, Ecological and Economic Basics. Report, PRé Consultants, Amersfoort.

Goodwin, G.P., Darley, J.M., 2008. The psychology of meta-ethics: exploring
objectivism. Cognition 106 (3), 1339–1366.

Gould, J.D., Lewis, C., 1985. Designing for usability: key principles and what
designers think. Communications of the ACM 28 (3), 300–311.

Green, T.R.G., 1998. The conception of a conception. Ergonomics 41 (2), 143–146.

Hall, E.T., 1959. The Silent Language. Doubleday, New York.

Harper, R., Rodden, T., Rogers, Y., Sellen, A. (Eds.), 2008. Being Human: Human–
Computer Interaction in the Year 2020. Microsoft Research Ltd., Cambridge.

Hassenzahl, M., Platz, A., Burmester, M., Lehner, K., 2000. Hedonic and Ergonomic Quality Aspects Determine a Software’s Appeal Emotions and Values. CHI 2000,
The Hague, 1–6th April, pp. 201–208.

Hassenzahl, M., Beu, A., Burmester, M., 2001. Engineering joy. IEEE Software 18 (1),
70–76.

Hawken, P., Lovins, A., Lovins, L.H., 1999. Natural Capitalism: Creating the Next
Industrial Revolution. Rocky Mountain Institute, Snowmass, CO.

Heskett, J., 2009. Creating economic value by design. International Journal of Design
3 (1).

Heylighen, F., 1997. Objective, subjective and intersubjective selectors of
knowledge. Evolution and Cognition 3 (1), 63–67.

Hill, P., 1977. On goods and services. Review of Income & Wealth 23 (4), 315–338.

Hill, P., 1999. Tangibles, intangibles and services. Canadian Journal of Economics 32 (2), 426.

IfM and IBM, 2008. Succeeding through Service Innovation. IfM, University of Cambridge, Cambridge.

Jégou, F., Manzini, E. (Eds.), 2008. Collaborative Services: Social Innovation and
Design for Sustainability. POLI design, Milano.

Johnson, P., Johnson, H., Hamilton, F., 2000. Getting the knowledge into HCI. In:
Schraagen, J.M., Chipman, S.F., Shalin, V.L. (Eds.), Cognitive Task Analysis. LEA,
Mahwah, pp. 201–214.

Jones, M., Samalionis, F., 2008. From small ideas to radical service innovation.
Design Management Review, Winter, pp. 19–27.

Karat, J., Karat, C.-M., 2004. Experiences people value. In: Diaper, D., Stanton, N.A.
(Eds.), The Handbook of Task Analysis for HCI. Lawrence Erlbaum Associates,
Mahwah, NJ, pp. 585–602.

Kounkou, A., Cullinane, A., Maiden, N., 2008. Using HCI knowledge in service-centric
applications. In: Wild, P.J. (Ed.), HCI 2008 Workshop on HCI and Services.

Lages, L., Fernandes, J., 2005. The SERPVAL scale: a multi-item instrument for measuring service personal values. Journal of Business Research 58 (11), 1562–
1572.

Light, A., Wild, P.J., Dearden, A., Muller, M.J., 2005. Quality Value Choice: Exploring Deeper Outcomes for HCI Products. CHI2005 workshop, Portland, Monday 26 April. ACM Press.

Lim, K.Y., Long, J.B., 1994. The MUSE Method for Usability Engineering. Cambridge
University Press, Cambridge.

Lindgaard, G., Whitfield, A., 2004. Integrating aesthetics within an evolutionary and
psychological framework. Theoretical Issues in Ergonomics Science 5 (1), 73–
90.

Long, J.B., Dowell, J., 1989. Conceptions of the Discipline of HCI. In: Sutcliffe, A.,
Macaulay, L. (Eds.), HCI’89, Nottingham, 5–8th September, CUP, pp. 9–32.

Lovelock, C.H., 1983. Classifying services to gain strategic marketing insights. Journal of Marketing 47 (Summer), 9–20.

Lovelock, C.H., Gummesson, E., 2004. Whither services marketing? Journal of
Service Research 7 (1), 20–41.

Luthria, H., Rabhi, F., 2009. Service oriented computing in practice. Journal of
Theoretical and Applied Electronic Commerce Research 4 (1), 39–56.

Magoulas, G.D., Chen, S., 2006. Human factors in personalised systems and services.
Interacting with Computers 18 (3), 327–330.

McCarthy, J., Wright, P., 2004. Technology as Experience. MIT Press, Bradford, MA.

Melão, N., Pidd, M., 2000. A conceptual framework for understanding business
processes and business process modelling. Information Systems Journal 10 (2), 105–129.

Mingers, J., 2006. Realising Systems Thinking. Springer, New York.

Morelli, N., 2006. Developing new product service systems (PSS): methodologies and operational tools. Journal of Cleaner Production 14 (17), 1495–1501.

Namoune, A., Nestler, T., Angeli, A.D., 2009. End user development of service-based applications. In: Wild, P.J. (Ed.), 2nd Workshop on HCI and Services at HCI 2009
Cambridge, 1st September.

Nelson, H.G., 2002. Systems science in service to humanity. Systems Research and
Behavioral Science 19 (5), 407–416.

Ng, I.C.L., Maull, R., Yip, N., 2009. Outcome-based contracts as a driver for systems
thinking and service-dominant logic in service science: evidence from the
defence industry. European Management Journal 27 (6), 377–387.

Papazoglou, M., van den Heuvel, W.-J., 2007. Service oriented architectures. The
VLDB Journal 16 (3), 389–415.

Parker, S., Heapy, J., 2006. The Journey to the Interface. Report, DEMOS, London.

Pring, B., Lo, T., 2009. Dataquest Insight: SaaS Adoption Trends in the US and UK. Report, Gartner.

Ramirez, R., 1999. Value co-production: intellectual origins and implications for
practice and research. Strategic Management Journal 20 (1), 49.

Rathmell, J.M., 1974. Marketing in the Services Sector. Winthrop Publishers,
Cambridge MA.

Reason, B., Downs, C., Lovlie, L., 2009. Service Thinking. <http://www.livework.co.uk/articles/service-thinking>.

Rohrer, C., Au, I., Darnell, E., Dickenson, N., Evenson, S., Kaasgaard, K., 2008. Design,
Marketing, Strategy: Where does User Research Belong? CHI ’08, vol. 2. ACM,
Florence. pp. 2241–2244.

Roy, R., Shehab, E. (Eds.), 2009. Industrial Product Service Systems, IPS2 2009.
Cranfield University Press.

Schrepp, M., Held, T., Laugwitz, B., 2006. The influence of hedonic quality on the
attractiveness of user interfaces of business management software. Interacting
with Computers 18 (5), 1055–1069.

Seddon, J., 2008. Systems Thinking in the Public Sector. Triarchy Press, Axminster.

Seddon, J., O’Donovan, B., Zokaei, K., 2009. Rethinking Lean Service. Vanguard Consultancy, Buckingham.

Seigel, D.A., Dray, S.M., 2001. New kid on the block: marketing organizations and
interaction design. Interactions 8 (2), 19–24.

Sengers, P., 2003. The engineering of experience. In: Blythe, M.A., Overbeeke, K.,
Monk, A.F., Wright, P.C. (Eds.), Funology: From Usability to Enjoyment. Kluwer
Academic Publishers, Dordrecht.

Seth, N., Deshmukh, S.G., Vrat, P., 2005. Service quality models: a review.
International Journal of Quality & Reliability Management 22 (9), 913–949.

Shostack, G.L., 1977. Breaking free from product marketing. Journal of Marketing 41
(2), 73–80.

Shostack, L.G., 1984. Designing services that deliver. Harvard Business Review 62 (1), 133–139.

Smith, W., Hill, B., Long, J.B., Whitefield, A., 1997. A design-oriented framework for
modelling the planning and control of multiple task work in secretarial office
administration. Behaviour and Information Technology 16 (3), 161–183.

Stahel, W.R., 1986. The functional economy: cultural & organizational change.
Science & Public Policy 13 (4), 121–130.

Stauss, B., 2005. A pyrrhic victory: the implications of an unlimited broadening of
the concept of services. Managing Service Quality 15 (3), 219–229.

Sutcliffe, A., 2002. The Domain Theory: Patterns for Knowledge and Software Reuse.
Erlbaum, Mahwah.

Teasdale, J.D., Barnard, P.J., 1993. Affect Cognition and Change: Re-modelling
Depressive Thought. Erlbaum Associates, Hove.

Terry, A., Jenkins, D., Khow, T., Summersgill, K., Bishop, P., Andrews, M., 2007.
Transforming Logistics Support for Fast Jets. Report, National Audit Office,
London.

Tiger, L., 2000. The Pursuit of Pleasure, 2nd ed. Transaction Press, New Brunswick.

Tukker, A., 2004. Eight types of product-service system. Business Strategy and the Environment 13 (4), 246–260.

van Dijk, G., 2008. HCI informing service design, and vice versa. In: Wild, P.J. (Ed.), Workshop on HCI and the Analysis, Design, and Evaluation of Services, HCI 2008, Liverpool, BCS.

van Dijk, G., Minocha, S., Laing, A., 2007. Consumers, channels and communication:
online and offline communication in service consumption. Interacting with
Computers 19 (1), 7–19.

Vargo, S.L., Lusch, R.F., 2004a. Evolving to a new dominant logic for marketing.
Journal of Marketing 68 (1), 1–17.

Vargo, S.L., Lusch, R.F., 2004b. The four service marketing myths. Journal of Service
Research 6 (4), 324–335.

Vargo, S., Lusch, R., 2008. Why ”service”? Journal of the Academy of Marketing
Science 36 (1), 25–38.

Vargo, S.L., Maglio, P.P., Akaka, M.A., 2008. On value and value co-creation: a service
systems and service logic perspective. European Management Journal 26 (3),
145–152.

Vicente, K.J., 1990. Coherence- and correspondence-driven work domains. Behaviour and Information Technology 9 (6), 493–502.

Wakkary, R., 2005. Exploring the Everyday Designer. Understanding Designers’05, Aix-en-Provence, 17–18 October 2005. Centre for Design Computing and Cognition. pp. 277–282.

Whitefield, A., Esgate, A., Denley, I., Byerley, P., 1993. On distinguishing work tasks
and enabling tasks. Interacting with Computers 5 (3), 333–347.

Wild, P.J., Macredie, R.D., 2000. On change and tasks. In: McDonald, S., Waern, Y., Cockton, G. (Eds.), HCI’2000, University of Sunderland, 4–8 September. Springer,
pp. 45–59.

Wild, P.J., Johnson, P., Johnson, H., 2003. Understanding task grouping strategies. In: Palanque, P., Johnson, P., O’Neill, E. (Eds.), HCI 2003, Bath, 8th–12th September. Springer, pp. 3–20.

Wild, P.J., Johnson, P., Johnson, H., 2004. Towards a composite model for multitasking. In: Palanque, P., Salvik, P., Winckler, M. (Eds.), TAMODIA’04, Prague, November 15–16. ACM Press.

Wild, P.J., Jupp, J., Kerley, W., Eckert, C.M., Clarkson, P.J., 2007. Towards a Framework for Profiling of Products and Services. 5th ICMR, Inderscience, Leicester, pp. 285–290.

Wild, P.J., Clarkson, P.J., McFarlane, D., 2009a. A framework for cross disciplinary efforts in services research. In: Roy, R., Shehab, E. (Eds.), Industrial Product
Service Systems, Cranfield 1–2nd April 2009. University Press, Cranfield, pp.
145–152.

Wild, P.J., Pezzotta, G., Cavalieri, S., McFarlane, D.C., 2009b. Towards a Classification
of Service Design Foci, Activities, Phases, Perspectives and Participants.
MITIP’09 Bergamo.

Wright, C.M., Mechling, G., 2002. The importance of operations management
problems in service organizations. Omega 30 (2), 77–87.

Wyckham, R.G., Fitzroy, P.T., Mandry, G.D., 1975. Marketing of services. European
Journal of Marketing 9 (1), 59.

Zeithaml, V.A., Parasuraman, A., Berry, L.L., 1985. Problems and strategies in services
marketing. Journal of Marketing 49 (2), 33–46.

Diagnosing Co-ordination Problems in the Emergency Management Response to Disasters


Becky Hill

UCL Interaction Centre, MPEB 8th Floor, University College London, Gower Street, London WC1E 6BT, United Kingdom

John Long's Comment 1 on this paper

I have known and worked with Becky Hill for more than 20 years, which is quite a long time by any standards. Following her first degree in Psychology at UCL, she completed her MSc, for which I was her Director of Studies. She went on to become a Research Fellow, then Senior Research Fellow, Project Manager and Lecturer, during my time as Director of the Ergonomics and HCI Unit. I was also the first supervisor of the PhD thesis, upon which her Festschrift contribution is based. I much appreciated her as a colleague and as a friend (and still do). It is hard to think of a more suitable and worthy contributor to the Festschrift. Her contribution well represents the ‘models and methods’ research of the Ergonomics and HCI Unit (Long, 2010).

Abstract

In the United Kingdom, there is a system for the co-ordination of the emergency services in response to disasters – the Emergency Management Combined Response System (EMCRS). This is a general management framework with a complex three-tier command and control system, set up by the UK government in response to a need for better co-ordination between agencies when they respond to disasters.

This research has developed models of the implementation of the EMCRS for specified disaster scenarios that support the diagnosis of co-ordination problems between agencies. Data for the modelling were acquired by means of training exercises. The co-ordination problems were identified through behaviour conflicts between the agencies. For example: the Fire Service behaviours of setting up a cordon around the disaster site conflict with the Ambulance Service behaviours of accessing the site for treatment of casualties. Model development was achieved through application of an existing framework.

The EMCRS models constitute substantive Human-Computer Interaction design knowledge, that is, knowledge that is both explicit and supports design. One view of HCI (Long, 1996) is that of an engineering design discipline, whose research validates design knowledge, both substantive and methodological. Design knowledge supports design practice directly, as the diagnosis of design problems, and indirectly, as the prescription of design solutions. An initial method for co-ordination design problem diagnosis by means of EMCRS models has been developed. This paper describes the development of the EMCRS models, applies the method and shows the diagnosis, from this application, of one co-ordination design problem.

Comment 2
The research corresponds well with Salter’s Figure 8 (2010), Design Research Exemplars. The ‘Design Problems’, diagnosed by Hill, correspond to the ‘Specific Requirements Specification’ and the ‘Design Solutions’ prescribed correspond to the ‘Specific Artefact Specification’. In the research, the former empirically verified the latter and the latter was empirically derived from the former. If, and when, the research is completed, the ‘Derivation’ and ‘Verification’ would then be formal. Design Practice Exemplars (Salter’s Figure 7) would then be enabled.

1. Introduction

The aim of this paper is to show the development of models of the Emergency Management Combined Response System that can be used to diagnose inter-agency co-ordination problems. In the United Kingdom, there exists a system for the co-ordination of the emergency services in response to disasters, such as explosions, air crashes, etc. – the Emergency Management Combined Response System (EMCRS) (Anon., 1994, 2001). This system manages, that is, plans and controls, agencies, such as Fire and Police, when they respond to disasters. The EMCRS was set up to support better co-ordination between agencies responding to disaster, in reaction to a succession of enquiries into disasters, e.g. Hidden (1989) and Fennell (1988), which identified problems with co-ordination, both within and between the emergency services in their disaster response. Co-ordination in this context is defined as the ‘harmonious integration of the expertise of all the agencies involved with the object of effectively and efficiently bringing the incident to a successful conclusion’ (Emergency planning college document, 1995).

Comment 3
The Emergency Management Combined Response System (EMCRS), the object of Hill’s research, manages the complex domain of disasters. Like other complex domains, for example, Nuclear Power, Air Traffic, Health Services, Manufacturing, etc., the industries themselves and their regulators have definitions, which conceptualise these domains. In the case of the EMCRS, the Emergency Planning Document (1995) defines ‘co-ordination’, as the ‘harmonious integration of the expertise of all the agencies involved (that is, Fire, Police and Ambulance) with the object of effectively and efficiently bringing the incident (that is, the disaster) to a successful conclusion.’ Such definitions constitute a potential problem for researchers, because they also need to conceptualise the domain for the purposes of their research, for example, for modelling or other theory purposes.

Two extreme options suggest themselves. The researchers simply adopt completely the domain definitions. Alternatively, the researchers ignore the latter and work with definitions, provided by earlier research, either their own or that of other researchers. Both options have strengths and weaknesses. The first option may find domain definitions hard or indeed impossible to operationalise. However, communicating the research results to the domain community should prove easy. The reverse is the case for the second option. Research conceptualisations are likely to be easier to operationalise. However, communicating the research results to the domain community is likely to prove more difficult.

Hill’s solution to this potential problem is interesting, because it draws from both extreme options. As concerns the definition, cited earlier, Hill appears to accept it or at least to work within it (see Section 1.0 Introduction, Paragraph 1). She develops the concept of ‘effectiveness’ in the research; but not those of ‘efficiency’ or ‘harmonious integration’, or at least, not as such.

She would no doubt claim that ‘efficiency’ (or at least some aspects thereof) is carried forward into the research in ‘effectiveness’ and that ‘harmonious integration’ (or at least its absence) is carried forward in the conceptualisation of ‘behaviour conflicts’, that is, which domain sub-object transformations hinder other domain sub-object transformations (see Section 6: Method for Co-ordination Design Problem Diagnosis, Stage 3).

In contrast, Hill appears to adopt completely the Home Office’s (1994) definition of the overall task – ‘to save life, prevent escalation of the disaster, to relieve suffering, to facilitate investigation of the incident, safeguard the environment, protect property and restore normality’ (see Section 1.5.1 HCI-PMT Axioms for EMCRS, Axiom 2.2). All appear in Hill’s EMCRS domain model, so that she can rightly claim – ‘the overall task for the EMCRS is to transform the DISASTER Object to a desired level of stability and normality’.

By directly adopting the domain definitions or indirectly working within them, Hill ensures that the EMCRS domain can be modeled by the HCI-PCMT framework (the research requirement – see Figure 2); but that the results can be communicated to domain experts (the operational requirement).  The design problems, diagnosed by the research, were discussed with the Emergency Planning Research Group at the Home Office, who agreed with their existence and importance (see Section 7: Summary and Future Work, Paragraph 2).

In summary, then, the relationship between domain definitions of worksystems and their work and research conceptualisations thereof are critical both for the conduct of the research (that is, the acquisition of design knowledge to support problem diagnosis and solution prescription) and the communication of the research results to clients, domain experts etc. This issue warrants further study and analysis. Additional illustrations of this issue can be found in the remaining Festschrift published (and submitted) papers.


However, even after the introduction of the EMCRS there are still occasions when the emergency response to disasters has been identified as being un-co-ordinated. For example, on July 7th 2005, terrorist attacks in London left 52 dead and many more injured. The report into the terrorist attacks (Anon., 2006) found that there were issues with the emergency services response, mainly due to communication problems, which led to an un-co-ordinated response. For example, although it states in the London Emergency Procedure Manual that a major incident can be declared by any of the emergency services, the implication being that this will be done on behalf of all the services, on 7th July all three primary emergency services declared a major incident, independent of each other. It was not clear to the reviewers as to why this happened, and also why a declaration of a major incident by one emergency service had not automatically mobilised units from all three (Police, Fire and Ambulance) Services. As a result, the review states: “We recommend that the London Resilience Forum review the protocols for declaring a major incident to ensure that, as soon as one of the emergency services declares a major incident, the others also put major incident procedures in place. This could increase the speed with which the emergency services establish what has happened and begin to enact a co-ordinated and effective emergency response”.

There have been many methods, models and frameworks developed for the analysis of Emergency Management. Specifically, Rogalski and Samurcay (1993) have focused on communication between the services as a means of analysing distributed decision making. The analysis allows an understanding of why one group is better than another: the group that has a better flow of communication and distribution of roles is more efficient. Samurcay and Rogalski (1991) have also developed a Method for Tactical Reasoning (MTR) and applied it in emergency management. This method describes a decomposition of the overall task (for the class of emergency situations) into specific tasks (involved in analysis and planning), as prescribed tasks in the sense developed by Leplat (1988). The MTR provides a model of the cognitive tasks involved in emergency management. This research allows for an understanding of emergency management behaviours, but does not relate it directly to the design of the emergency management system. Kaempf et al. (1996) studied the decision making of experienced personnel in complex command and control environments, using the recognition-primed decision (RPD) model (Klein and Woods, 1993), which depicts how experienced people make decisions in natural settings. The results of the study suggested that decision makers use recognition processes and that situation awareness is of primary concern. However, it is difficult to generalise from this study to other command and control domains, as the domain studied was very procedural in nature, and thus other command and control settings may place different requirements on the decision makers. Other work, such as Blandford and Wong (2004) and Blandford et al. (2002), has looked at the behaviours of individual services within emergency management, but does not relate these behaviours to the other services within emergency management, and does not develop systematic models that can be directly used for design of the emergency management system.

Frameworks and models are prevalent in the HCI literature (Long, 1987; Whitefield, 1990). However, most models of interaction are task based (Wright et al., 2000). Traditional task analysis methods such as GOMS (Card et al., 1983), Hierarchical Task Analysis (Shepherd, 1989), Task Knowledge Structures (Johnson, 1992), based on observable actions may not be appropriate for analysing complex work domains (Moray et al., 1992). These methods of task analysis do not account for the variability of behaviour that is observed in complex systems (Vicente, 1990). One approach to HCI that places an emphasis on the importance of the constraints in the environment (i.e. the task or the work domain) that are relevant to the operator is an ecological one (Vicente, 1990). The framework used in the current research (the HCI-PCMT framework) has an ecological perspective, in as much as it includes a domain external to the system of concern.

Comment 4
References to ‘ecology’ and ‘ecological validity’ abound in the HCI literature. Most are consistent with natural language definitions, for example, ‘study of plants/animals/peoples/institutions in relation to environment’ (Chambers, 1983). Alternatively, ‘study of organisms in relation to their surroundings’ (Oxford Pocket, 1984). As an illustration, Flach claims that: ‘The field (Cognitive Systems Engineering) places a high value on external or ecological validity and naturalistic observations, where cognition is studied in rich semantic contexts’ (1998). The claim sets no obvious limits on the ‘external or ecological validity’, that is, the environment, as it relates to Cognitive Systems Engineering. In contrast, Dowell and Long (1989), limit the ecological environment to the domain of work. For example, their conception: ‘proposes an ecology of worksystem and domain, whereby the behaviours of worksystems are shaped by, and so reflect, their specific domains’.

Hill follows Dowell and Long (1989), as concerns her use of the concept of ecology. For example: ‘The framework used in the current research (the HCI-Planning and Control of Multiple Tasks framework) has an ecological perspective, in as much as it includes a domain external to the system of concern’ (Section 1, Introduction). Later, she claims: ‘It is understood that data from a real disaster would give the EMCRS model greater ecological validity, than using data from training exercises’ (Section 7).

Limiting the environment to the domain, in the manner of Hill, has the advantage that the domain is completely specified (see Figure 2, showing EMCRS Model 1).

Researchers preferring the more general concept of the environment to complete the ecological dualism, along with the worksystem, would be wise to follow her example and specify those aspects of the environment, relevant to the worksystem. Without such specification, research cannot be replicated and so validated (Long, 1997). Validation of design knowledge is a pre-requisite for HCI discipline progress (as opposed to HCI community growth – see also comments on Carroll’s paper (2010)).


This approach is similar to the work of others, for example, the Means-Ends Abstraction Hierarchy of Rasmussen and Vicente (1989). However, although the Means-Ends Abstraction Hierarchy can function as a mechanism to cope with the complexity of the natural environment, unlike the HCI-PCMT framework, no distinction is made between the work domain and the interactive worksystem, which allows for an expression of the performance of the system. The Means-Ends Abstraction Hierarchy is also a more general framework for analysing complex HCI systems; its purpose is not specifically for modelling planning and control in multiple task work situations, and therefore it does not have a planning and control architecture. Consistent with a cognitive engineering perspective (Norman, 1986), the HCI-PCMT framework aims to model the cognitive behaviours of a ‘joint cognitive system’ (Woods and Hollnagel, 1987), considered in relation to their task ‘world’. The HCI-PCMT framework was developed specifically for analysing planning and control for multiple task work systems, and has been shown in previous case-studies to diagnose planning and control design problems (Smith et al., 1997). Thus the HCI-PCMT framework has an, albeit minimalist, architecture to accommodate such planning and control systems. The HCI-PCMT framework was also developed to analyse ‘to-be computerised’ systems, and the EMCRS analysed was not computerised.

1.1. Development of design-oriented frameworks and models for HCI

The research presented in this paper is intended to constitute HCI substantive design knowledge, in the form of models that support diagnosis of specific design problems, and reasoning about potential solutions to these problems. Dowell and Long (1989) propose the discipline of HCI as the application of HCI knowledge, to support design practices, intended to solve HCI design problems. They identify validated engineering principles as a type of knowledge that best supports HCI practice. These principles would therefore support the design of general solutions to general classes of HCI design problems. The development of such principles represents a long-term goal for an engineering design discipline of HCI.

Comment 5
Hill reports research, which develops models and a method, as HCI design knowledge (Long, 2010), to support diagnosis of specific design problems and reasoning about potential solutions to these problems. Hill contrasts models and methods as HCI design knowledge with (HCI Engineering) Principles, as proposed by Dowell and Long (1989 and 1998). ‘They identify validated engineering design principles, as a type of knowledge, that best supports HCI practice. These principles would, therefore, support the design of general solutions to general classes of HCI design problems’. General Design Problem corresponds to Salter’s General Requirements Specification and General Design Solution corresponds to his General Artefact Specification. The derivation and verification relations between them are formal, as are the relations between General and Specific Design Problem and General and Specific Design Solution (see Salter (2010) – Figure 8). The development of such principles represents a long-term goal for an engineering design discipline of HCI (see Section 1: Introduction).

The contrast between these two types of HCI design knowledge, that is, models and methods in the short to medium term and principles in the longer term, raises the issue of the relationship between them. It is important, for example, whether the latter can build on the former, that is, whether Hill’s research, in some way, contributes to the development of principles. It may be that there is no relation between them. However, if the two types of design knowledge share the same conceptions (of the HCI Discipline and the HCI Design Problem) a relationship would seem possible and even plausible.


Following my Festschrift contribution (2010), the most plausible set of relations is as follows: ‘Models and methods research’ shows promise to be carried forward into ‘principles research’, if it succeeds in specifying the models and methods themselves in terms of an HCI Design Problem (as specified by Dowell and Long (1989) in Hill’s case). It is more promising, if the models and methods also support the diagnosis of design problems (again, as in Hill’s research). It is even more promising, if the models and methods prescribe design solutions to the diagnosed problems (only informally and by way of illustration in Hill’s research). However, it is most promising, if the problems and solutions are completely specified, for principles research to attempt to identify the commonalities (and the non-commonalities) between them, to support the formulation of a principle by which the solution is formally derivable (or not) from the problem (see Salter’s (2010) Figure 8 – Design Practice Exemplars). These are all ways, in which ‘models and methods’ research can support ‘principles’ research. Hill’s work is able to provide such support; but has not been required to do so, at this point in time.


Elsewhere, work by Stork (1999) and Cummaford (2007) has made use of models and methods research to support the construction of HCI Design Principles. For example, Stork applied the MUSE (Method for Usability Engineering – Lim and Long, 1994) method, as part of ‘HCI best practice’, to help solve design problems in the domain of domestic energy management to support the formulation of initial principles. Likewise, Cummaford used ‘existing HCI design knowledge’ (including domain (Dowell and Long, 1989) and user models (Smith et al., 1992)) and methods to specify class design problems and solutions, as part of his attempt to formulate initial principles for business-to-consumer electronic commerce transactions. Models and methods research, then, has already supported principles research. Models and methods and principles are forms of HCI design knowledge, intended to diagnose design problems and to prescribe design solutions.


Design-oriented frameworks are one form of HCI knowledge, which is both explicit and intended to support design directly. Such frameworks provide the basis for modelling specific design problems. Their purpose is to enable designers to reason more effectively about potential design solutions. Frameworks lack the ‘guarantee’ of validated engineering principles. Instead, they support the practices of ‘specify-and-implement’; that is, practices where design proceeds through iterations of successive cycles of specification and implementation. Such frameworks support the designer in producing better specifications at an earlier stage of design, thus reducing costly iteration. These frameworks produce models of the systems under investigation, that support diagnosis of design problems, and reasoning about design solutions. For HCI knowledge to be design-oriented, it must be formulated in relation to an adequate expression of HCI design problems. In turn, an adequate expression of HCI design problems must be constructed in relation to a complete and coherent conception of the ontology of HCI; that is, a conception of those entities constituting the scope of application of the HCI discipline (Long, 1996). Dowell and Long (1989) have developed a conception of HCI as engineering (HCIE) in which they attempt to outline a general, complete and coherent ontology for HCI comprising: (i) an interactive worksystem – the to-be-designed system comprising users and computers, (ii) a domain of application – the work to be carried out, and (iii) performance – the effectiveness with which work is carried out. The framework used for the modelling described in this paper is thus based on the Dowell and Long (1989) conception.

As stated above, the research described in this paper developed such models for the EMCRS – a system that manages the response of the emergency services to disasters, that support diagnosis of EMCRS co-ordination design problems and reasoning about design solutions to these problems.

To show the development of such models, this paper will first describe the EMCRS modelled. Then the background to the use of design-oriented frameworks for such modelling will be presented. The actual framework used for the modelling, based on the Dowell and Long (1989, 1998) conception, will then be described in detail in Section 3. In Section 4, the EMCRS data gathered for modelling are outlined. Section 5 describes model development through application of the framework presented in Section 3. Section 6 presents the method for co-ordination design problem diagnosis and applies the method to identify one behaviour conflict and diagnose the co-ordination design problem of cordon restrictions. The last section summarises the paper and identifies future work.

2. Domain of study

Emergency management is an example of a multi-user planning environment, which requires operators to deal with emergency situations. Controlling these situations requires the co-ordination of numerous agents, who share the various specific tasks, which fulfil the overall goal of making the situation stable. These tasks involve a number of people, often geographically distributed, working simultaneously (rather than sequentially), as a team towards the achievement of shared goals. The development of systems for emergency management, therefore, demands the analysis and modelling of co-operative work tasks, placing strong emphasis on the capture and representation of concurrent task activities, involving multiple agents.

The EMCRS has a command and control organisation with a three-tier structure. The EMCRS is a general management framework, agreed nationally, which:

  • Defines relationships between differing levels of management.
  • Allows each agency to tailor its own response plans to interface with the plans of others.
  • Ensures all parties involved understand their relative roles in the combined response.
  • Retains sufficient flexibility of option to suit local circumstances to enable the emergency services to interact effectively (Anon., 1994, 2001).

The primary objectives of a disaster response, as declared by the Home Office are:

  • To save life.
  • To prevent escalation of the disaster.
  • To relieve suffering.
  • To protect property.
  • To safeguard the environment.
  • To facilitate criminal investigation and judicial, public, technical or other inquiries.
  • To restore normality as soon as possible.

All the different agencies should use this structure to organise their own planning procedures, so that they interface effectively with each other. The three levels are operational, tactical and strategic (sometimes referred to as bronze, silver, gold). At each level, each of the agencies has its own commander for co-ordinating the response. At the strategic level, these commanders make up a senior co-ordinating group. The operational response is carried out by each agency, concentrating on their specific tasks within their areas of responsibility, e.g. the Fire Service fighting fires. The tactical response determines the priority in allocating resources. It also plans and co-ordinates the overall response, obtaining other resources as required, for example, additional fire engines. The strategic co-ordinating group has to formulate the overall policy within which the response to a major incident will be made. At the strategic level, there is one person from each emergency service. Under the EMCRS, the management of the response to major emergencies will normally be undertaken at one or more of the three levels. The degree of management required will depend on the nature and scale of the emergency.

There needs to be co-ordination at all levels of the EMCRS, so that the disaster situation is brought under control as quickly and efficiently as possible. There needs to be co-ordination at each level within the hierarchy and between the levels. One of the main mechanisms, by which the performance of any planning system is affected, is that of co-ordination. Hutchins (1990) has identified important features to be accounted for in a distributed task, which ensure effectiveness:

  • Shared task knowledge – each person understands enough about each other’s work to co-ordinate effectively.
  • Horizon of observation – which allows other team members to witness other performances.
  • Multiple perspectives, which allow for activities to be observed from different points of view.

The behaviour of each member of the team is contingent on the behaviour of all the other members of the team. An action by one member will trigger an action (reaction) by another member, until the task is complete. Each member of the group has knowledge of a specific part of the distributed task that the whole group is undertaking. The co-ordination among the actions of the members of the team is not achieved by following a master procedure; instead, it emerges from the interactions among the members of the team. The procedure is used as a guide to organising actions. Distribution of tasks leads to a need for co-ordination. Thus, in the case of the EMCRS, co-ordination is required, because each emergency services agent only possesses a local view and incomplete information and, therefore, must co-ordinate with other agents to achieve globally coherent and efficient solutions. In emergency management, there is not only co-ordination between each agent within one group, but also co-ordination between each agency (which is made up of many single agents) on a horizontal level and vertical co-ordination between the different command levels. As stated earlier, co-ordination, or rather lack of co-ordination, has been identified as a major factor in the ineffective response of the emergency services to disasters.

The aim of this research was to attempt to diagnose these co-ordination problems with respect to the planning and control of the EMCRS. The emergency services response to disasters has different phases. First, there is the initial response, when the situation is usually fairly chaotic. Second, the response some time later (could be a few hours, maybe longer depending on the scale of the incident), when the situation is more stable. Last, the restoration of normality phase, when the actual incident has been brought under control, but the situation has not returned to normal. Within each of these phases, the emergency services will have different roles/tasks that they need to carry out. During the initial response phase, the tasks being carried out by the emergency services will be their primary tasks in response to the situation, e.g. Fire Service fighting fires, Ambulance Service treating casualties. Collaboration, co-ordination and communication are thus vital at the initial response stage (Anon., 2001). Co-ordination problems, occurring between services in the initial response phase, will, therefore, have more of a detrimental effect on EMCRS performance, than co-ordination problems occurring at other phases, when the tasks being carried out are not dealing with the initial effects of the situation. Data collected for use in the modelling of the EMCRS will thus need to include the initial phase as a priority. The EMCRS is thus a complex system interacting with a complex dynamic situation. There are multiple agencies, with multiple personnel, at multiple levels of command, carrying out concurrent task activities.

3. HCI planning and control for multiple task work framework

The aim of the present research was to develop models of the EMCRS that support the diagnosis of EMCRS co-ordination design problems and the reasoning about solutions to these problems. To develop such models, a design-oriented framework was required that supports modelling of the EMCRS – a distributed cognitive planning and control system, comprising more than one user, or groups of users, whose activities must be co-ordinated for effective performance. One such framework was developed for a class of HCI design problem, expressed as the planning and control of multiple task (HCI-PCMT) work in office administration, a ‘to be computerised’ system (Smith et al., 1992, 1997). The office administration domains previously modelled by the HCI-PCMT framework were single-user planning and control systems. Application of the HCI-PCMT framework to model the EMCRS would, thus, extend the scope of the framework to accommodate multi-user planning and control systems. The models produced would identify planning and control co-ordination design problems and thus diagnose ineffective performance.

The HCI-PCMT framework is based on a conception of HCI (Dowell and Long, 1989, 1998). The conception distinguishes the interactive worksystem, which comprises users and computers or, more generally devices/equipment, from its domain of application, constituting the work carried out by the worksystem. The effectiveness with which work is carried out, that is performance, is a function of the quality of the work (how well it is performed), and the resource costs to the worksystem (the effort, etc. of performing the work that well). Overall performance, thus, expresses whether goals have been achieved, and at what cost. A design problem is diagnosed, if actual performance (Pa) does not equal desired performance (Pd), where performance (P) is expressed as task quality (Tq), user costs (Uc) and computer (device) costs (Cc). A design solution is prescribed, if Pa is equal to Pd.
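The diagnosis rule just described can be given a minimal, purely illustrative sketch (the names `Performance` and `design_problem`, and all numeric values, are my own and not drawn from the paper): performance is expressed as task quality (Tq) together with user costs (Uc) and device costs (Cc), and a design problem is diagnosed whenever actual performance (Pa) differs from desired performance (Pd).

```python
# Illustrative sketch (not from the paper) of the Dowell and Long
# performance expression P = (Tq, Uc, Cc) and the diagnosis rule
# 'a design problem exists if Pa != Pd'.
from dataclasses import dataclass

@dataclass(frozen=True)
class Performance:
    task_quality: float   # Tq: how well the work is performed
    user_costs: float     # Uc: resource costs to the user(s)
    device_costs: float   # Cc: resource costs to the computer/devices

def design_problem(actual: Performance, desired: Performance) -> bool:
    """A design problem is diagnosed if actual performance (Pa)
    does not equal desired performance (Pd)."""
    return actual != desired

# Hypothetical values for illustration only.
pd_ = Performance(task_quality=1.0, user_costs=0.2, device_costs=0.1)
pa = Performance(task_quality=0.7, user_costs=0.5, device_costs=0.1)
print(design_problem(pa, pd_))  # prints True: a design problem is diagnosed
```

A design solution would then be one whose implemented worksystem yields a `Performance` equal to the desired one, so that `design_problem` returns `False`; the sketch says nothing about how such a solution is derived, which is the point of the framework and method that follow.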

Comment 6
Throughout her paper, Hill makes liberal use of the concept of ‘problem’, for example, ‘design problem’; ‘coordination problem’; ‘coordination design problem’; and ‘inter-agency co-ordination design problem’. This use prompts consideration of how the concept of ‘problem’ is used more widely in HCI.

First, some HCI researchers make little or no use of the concept of ‘problem’. Examples to hand are the contributions of Carroll and Wild to this Festschrift (2010). Neither researcher appears to assign much, indeed if any, importance to the concept of ‘problem’, either as concerns the Discipline or a Conception of HCI.

Second, as might be expected, some researchers use the concept of ‘problem’; but only in the natural language sense of the word, for example: ‘doubtful or difficult matter requiring a solution’ (Pocket Oxford, 1984) or ‘a matter difficult of settlement or solution’ (Chambers, 1983). Dix, in his paper, writes: ‘new disciplinary roots require new methods. We have many methods in our HCI toolkit, so this does not seem to be a problem’ (Section 4). Also, in the same section: ‘The problem here is that the paper (and many in HCI) has ‘borrowed’ controlled experimentation methods from psychology…’. Elsewhere, Salter writes: ‘The new system led to further problems as doctors receiving an offer from one residency program held off accepting until they had heard from a preferred program’ (Section 5).

Other researchers, however, conceive of ‘problem’ primarily as it relates to design and, in some cases, to a Discipline of Design. Newman, for example, identifies the first step in the engineering design process as: ‘Recognising the need for an artifice, and thus identifying a problem in computer systems design, whose solution will meet this need’. Likewise, Dowell and Long (1989) claim that: ‘Engineering disciplines apply validated models and principles to prescribe solutions to problems of design’. Also, Salter (2010) asserts that: ‘Engineering disciplines have problems of design…’ (Section 1). Also, that: ‘Disciplines of different types attempt to solve different design problems’ (Section 2). Lastly, in her abstract, Hill claims that: ‘Design knowledge supports design practice directly, as the diagnosis of design problems and the prescription of design solutions’.

As well as being conceived in terms of HCI design (or an HCI Discipline of Design), as illustrated earlier, we can further ask, together with Dowell and Long (1989): ‘What might be the nature and the form of the (design) problem being solved?’ In other words, how does a Conception of HCI conceive of a (design) problem?

Of course, such a question cannot be answered either by researchers, who eschew use of the problem concept (see Carroll and Wild earlier) or researchers, who restrict its meaning to that offered by natural language (see Dix earlier). An answer, however, can be found in the work of researchers, who use a Conception of HCI to guide their research. For example, according to Salter (Section 3 (2010)), Design Problems have two key components: ‘the requirements component and the artifact component. The requirements component is the ‘what’ of the design problem…. The artifact component represents the ‘how’ of the design problem’.

Finally, turning to Hill’s paper, she asserts: ‘A design problem is diagnosed, if actual performance (Pa) does not equal desired performance (Pd), where performance (P) is expressed as task quality (Tq), user costs (Uc) and computer (device) costs (Cc).’ By using a conception of the HCI Discipline as (engineering) design and a conception of HCI as solving design problems, Hill and Salter are able to build on the work of others, sharing the same conceptions. For example, Hill’s EMCRS models and method (as HCI design knowledge) are built on the HCI-PCMT (Planning and Control of Multiple Tasks) framework of Smith et al. (1992, 1997), which in turn is built on Long and Dowell’s HCI Engineering Discipline Conception (1989) and Dowell and Long’s HCI Design Problem Conception (1989 and 1998). Hill’s EMCRS models and method support the diagnosis of inter-agency coordination planning and control design problems. Similarly, Salter uses the HCI Discipline and HCI Design Problem Conceptions to address the design of economic systems. Salter analyses the global financial crisis of 2007+, in terms of the specification of the general problem of economic design.

Building on the work of others, as in the case of Hill and Salter, is central to the progress of the discipline of HCI. Such progress is required for the validation of HCI design knowledge as: conceptualization; operationalisation; test; and generalization (Long, 1996). The concept of design problem may have the potential to attract the consensus, which would allow HCI researchers to build on each other’s work and so to contribute to the validation of HCI design knowledge. Hill’s paper illustrates such potential.

 

In the Dowell and Long conception, a domain of application (or work domain) is described in terms of objects, which may be abstract or physical. Objects are constituted of attributes, which have values. The attribute values of an object may be related to the attribute values of one or more other objects. An object, at any time, is determined by the values of its attributes. The worksystem performs work by changing the value of domain objects (i.e. by transforming their actual attribute values) to their desired values, as specified by the work goal. Attributes may be affordant or dispositional. Affordant attributes are transformed by the worksystem; their transformation constitutes the work performed. Dispositional attributes are relevant to the work (they need to be used by the worksystem); but are not changed by the worksystem.
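The object/attribute/value conception can be sketched as a small data model. The classes, the medical case example object and its attribute names below are illustrative assumptions, not structures taken from the framework itself:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """An attribute of a domain object, with a value.
    Affordant attributes are transformed by the worksystem;
    dispositional attributes are used by, but not changed by, the worksystem."""
    name: str
    value: object
    affordant: bool

@dataclass
class DomainObject:
    """A domain (work) object, constituted of attributes with values."""
    name: str
    attributes: dict = field(default_factory=dict)

    def add(self, attr: Attribute):
        self.attributes[attr.name] = attr

    def transform(self, attr_name: str, desired_value):
        """The worksystem performs work by changing an affordant attribute's
        actual value to its desired value, as specified by the work goal."""
        attr = self.attributes[attr_name]
        if not attr.affordant:
            raise ValueError(f"'{attr_name}' is dispositional: relevant to the work, but not changed by it")
        attr.value = desired_value

# Invented example, loosely echoing the medical reception domain cited later.
case = DomainObject("medical case")
case.add(Attribute("consultation status", "waiting", affordant=True))
case.add(Attribute("date of birth", "1950-01-01", affordant=False))
case.transform("consultation status", "consulted")  # work performed
```

Attempting to transform the dispositional attribute raises an error, mirroring the distinction that such attributes are needed by the worksystem but are not changed by it.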

The worksystem is conceptualised as a behavioural system comprising the interacting user behaviours (supported by user structures) and computer (device) behaviours (supported by device structures). Abstract structures comprise representations and processes. Abstract representation structures refer, for example, to the worksystem’s knowledge, databases or information stores. Abstract process structures refer, for example, to the worksystem’s procedures, methods or heuristics. Abstract structures support worksystem abstract behaviours, when abstract process structures, such as procedures, act on abstract representation structures, such as a database. Similarly, worksystem physical structures support worksystem physical behaviours. The HCI-PCMT framework specifies worksystem structures for the planning and control of multiple task work. These structures are expressed at both abstract and physical levels of description. Physical structures embody abstract structures and physical behaviours embody abstract behaviours. At the abstract level, the framework describes the worksystem’s cognitive structures. These comprise four process structures (planning, controlling, perceiving and executing), and two representational structures (plans and knowledge-of-tasks). These structures support the planning and control behaviours of the worksystem and are distributed across the physical users and devices/equipment. The four processes support the behaviours of planning, control, perception and execution respectively. The physical structures support the physical behaviours, but are not differentiated further by the HCI-PCMT framework, as the framework’s concern is primarily with abstract behaviours associated with planning and control.

The rationale for what to some might appear a ‘minimalist’ architecture is threefold. First, the general architecture of representations and processes is commonly assumed by Cognitive Psychology models in the information processing tradition. Second, the architecture was adequate to support the construction of the initial HCI-PCMT framework for the domain of secretarial office administration. Third, the architecture supported the construction of models, whose form and granularity were commensurate with solving user interface design problems. The full argument for this set of structures can be found elsewhere (Smith et al., 1997); but can be summarised as follows:

Influenced by Newell and Simon (1972), much planning research in Cognitive Science and Artificial Intelligence has tended to view plans as complete and fully-elaborated behaviour sequences, which ensure task goal achievement. This view has been undermined by research into planning in HCI. The behaviours of users, who are part of worksystems, it has been argued, cannot be regarded entirely as the output of executable plans (e.g., Suchman, 1987; Larkin, 1989; Payne, 1991) – rather they are often, at least partly, direct responses to the task environment. Within this perspective, plans need not be complete and fully-elaborated, but rather they may be partial (in the sense that they may specify only some of the behaviours to be implemented) and/or general (in the sense that some behaviours may be specified only generally and not at a level that is executable). Such plans might be more generally viewed as ‘resources’ for guiding behaviour (Suchman, 1987). Furthermore, if a plan is regarded as a resource to guide behaviour it is no longer necessary that it be limited to specifying behaviours. Rather it might, instead, specify required states of the task or conditions of the environment. Plans, which serve as resources for guiding behaviour, rather than as specifications of complete and fully-elaborated behaviour sequences, cannot ensure that goals will be achieved. This research also undermines the assumption that perception precedes planning, which precedes execution. Ambros-Ingerson (1986) argued that all planning can precede execution only when:

  1. The task environment is static – relevant changes in the task environment do not occur after the plan is complete.
  2. The task environment is simple enough to be practically modelled – the consequence of behaviours can be predicted sufficiently well to generate a complete and fully-elaborated behaviour sequence.
  3. The task environment is known – the planner’s knowledge of the task environment can be complete before planning commences.

Most task environments studied by HCI researchers do not embody these assumptions (Young and Simon, 1987). In direct contrast, they are usually dynamic, complex and partly unknown by the planner (e.g., Hollnagel et al., 1988). Execution behaviours in worksystem task environments are required to commence before plans are complete and fully-elaborated and therefore the perception, execution and planning behaviours must be temporally interleaved – having no necessarily fixed order in which to be performed.

When performing a task, a system has to exercise control; that is, it has to select the next behaviour to be carried out at each moment (e.g. Hayes-Roth, 1985). For a system, which constructs complete and fully-elaborated plans, controlling is a simple process of selecting behaviours, according to the plan and initiating their execution. However, for worksystems, which employ plans as resources to guide behaviour, some more complex control behaviour is required to select execution behaviours over time – since the selection is constrained by, rather than specified by, the plan. Furthermore, if a worksystem interleaves execution behaviours with planning and perception behaviours, controlled sequencing of these behaviours is also required.

Consistent with the preceding arguments, the PCMT framework describes the worksystems’ cognitive structures for planning and control as follows:

At the first (abstract) level of description, Plans are specifications of required transformations of domain objects and/or of required behaviours. They may be partial (in the sense that they may specify only some of the behaviours or transformations), and they may be general (in the sense that some behaviours or transformations may be specified only generally and not at a level that is directly executable). Planning behaviours, thus, specify the required domain object transformations and/or behaviours to support those transformations.

Perception and execution behaviours are, respectively, those whereby the worksystem acquires information about the domain objects and those whereby it carries out work, changing the value of the object attributes as desired. Information about domain objects from perception behaviours is expressed in the knowledge-of-tasks representation. Control behaviours entail deciding which behaviour to carry out next, both within and between tasks; but involve more than reading off the next behaviour from a complete and fully-elaborated plan.
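The interleaving of perception, planning, control and execution can be caricatured as a loop in which control selects the next behaviour, constrained (not dictated) by a partial plan that specifies required states rather than an executable sequence. The function names, the toy world and the simple selection rule are all assumptions for illustration:

```python
def control_cycle(partial_plan, knowledge_of_tasks, perceive, plan, execute, max_steps=10):
    """Control: interleave perception, planning and execution over time.
    The partial plan lists required domain states, not a fixed behaviour sequence."""
    trace = []
    for _ in range(max_steps):
        knowledge_of_tasks.update(perceive())           # perception behaviour
        pending = [g for g in partial_plan if not knowledge_of_tasks.get(g)]
        if not pending:
            break                                       # all specified states achieved
        goal = pending[0]                               # control: select what to do next
        step = plan(goal, knowledge_of_tasks)           # planning: elaborate only as needed
        execute(step)                                   # execution behaviour
        trace.append((goal, step))
    return trace

# A toy task environment (invented states).
world = {"fires contained": False, "casualties treated": False}

def perceive():
    return dict(world)

def plan(goal, knowledge):
    return f"work towards '{goal}'"

def execute(step):
    achieved = step.split("'")[1]   # in this toy world, executing a step achieves its goal
    world[achieved] = True

trace = control_cycle(["fires contained", "casualties treated"], {}, perceive, plan, execute)
```

Because perception re-reads the world on every cycle, the loop also copes with a dynamic environment: if some other agent achieved a state in the meantime, control simply moves on, which a fully pre-elaborated plan could not do.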

The second level of description of planning and control structures is physical, wherein the framework describes the distribution of the abstract cognitive structures across the physically separate user and devices of particular worksystems. The framework, thus, allows the construction of alternative models of the distribution of cognitive structures across the user and devices, and so supports reasoning about allocation of function between users and devices, a major decision in design problem solutions. In the office administration domains, studied for the development of the framework, the physical worksystem was the person plus devices, but not a computer. These domains were: secretarial office administration; and medical reception. The more general notion of devices in the framework replaces the notion of a computer in the Dowell and Long (1989) conception. The use of devices (which include computers) allows the HCI framework to model work situations for which computerised support has yet to be developed. This would include the EMCRS, which at the time of analysis was not computer supported. The models of EMCRS produced by application of the framework would enable a designer to reason about design solutions including computerisation.

The outline of the HCI-PCMT framework, including its domain of application, its worksystem and its performance, is now complete. How the framework supports the analysis of multiple task work will now be outlined.

Multiple task work requires a user, as part of the interactive worksystem, to perform distinct, but overlapping tasks. Each task potentially competes for worksystem behaviours. Multiple task work represents an important concern for system designers, as performing overlapping tasks is likely to have an effect on work performance. The term ‘multiple task work’, as characterized by the framework, refers to situations in which more than one task is carried out concurrently over relatively long and overlapping periods of time. Characterising multiple task work requires a single task to be defined. In the framework, a task is considered to be part of the work carried out in the domain of the worksystem. A task is thus conceptually distinct from the worksystem itself and its behaviours. In one of the systems previously analysed by the framework, medical reception (see Hill et al., 1995), a task was expressed as the support of a medical case object (i.e. patients consulting with medical practitioners). The medical reception domain is an instance of multiple task work, since support is provided concurrently for multiple ongoing and temporally overlapping medical cases (i.e. for many patients together). In the EMCRS, there are multiple agencies, who need to work together towards the goal of stabilising the disaster situation. Each of the agencies involved within the EMCRS has its own set of tasks that it must carry out in order to achieve the overall goal of stabilising the disaster. These tasks are carried out independently from the other agencies; but the behaviours, associated with each of these tasks, need to be co-ordinated with the other agencies, to maximise the effectiveness of the overall EMCRS response. The work of the EMCRS can be described as the support of a disaster. Unlike the previous systems analysed with the framework, there are obviously not multiple disasters, and thus a single task cannot be described as support for a single disaster.
Rather, a single task would be each of the individual agency tasks in support of a disaster. Thus, the work of the EMCRS is multiple task, since support is provided concurrently for the multiple ongoing and temporally overlapping tasks carried out by the individual agencies in response to a disaster. This difference in the task description has implications for the application of the framework to the EMCRS, and will be one of the areas of extension for the framework that modelling a multi-user planning system requires. These implications will be described in detail with respect to the framework axioms (see Section 5). Extending the framework in this way would enable application of the framework to other complex systems, where there are multiple users or groups of users carrying out independent, but concurrent tasks that need to be co-ordinated for effective performance.

The EMCRS has more than one level of operation, in fact potentially three (operational, tactical and strategic), depending on the characterisation of the disaster to which response is made. The HCI-PCMT framework has so far only been applied to systems with one level of operation. The HCI-PCMT framework will need to be extended to accommodate this difference.

The HCI-PCMT framework is expressed as a set of axioms,2 based on a partial and selective application of Dowell and Long’s (1989) conception for HCI. (The HCI-PCMT framework was developed for a ‘to-be computerised system’, and therefore some of the specifics of the conception which refer to computers are not applicable.) The purpose of the HCI-PCMT framework is to express design problems to aid a designer to reason about possible design solutions, in a specify-and-implement type of design practice. The axioms are described later in the paper with respect to EMCRS. A diagrammatic representation of the generic HCI-PCMT framework is shown in Fig. 1. This representation is used to apply the framework to the EMCRS.

Fig. 1. HCI-PCMT framework.

Thus, application of the framework allows for a description of the abstract and physical structures of the interactive worksystem and the abstract and physical objects of the domain of application (work). The framework defines the relationship between the abstract and physical structures of the worksystem, and the relationship between the abstract and physical objects of the domain. Performance is some function of the task quality, associated with the multiple task work carried out, and the resource costs, associated with worksystem structures and behaviours of planning and control. The framework will thus allow for a description of design problems, where actual performance does not equal desired performance.

4. EMCRS training – Exercise Scorpio

In order to develop models of the EMCRS by application of the HCI-PCMT framework, data regarding an instantiation of EMCRS were required. Ideally, such data would be from response to an actual disaster. However, these data are hard if not impossible to access, and also difficult to make direct observations from. There are various specialist training centres in the UK for the emergency services. However, there is only one centre where multidisciplinary training is provided – The Cabinet Office Emergency Planning College at Easingwold (formerly the Home Office Emergency Planning College). The aim of the college is to promote and sustain emergency preparedness within the United Kingdom through the concept of Integrated Emergency Management. Many different training exercises, covering all aspects of emergency planning, are run at the centre. The exercise where the data were gathered was the Emergency Services Seminar on Inter-Agency Response to Disaster. The aim of this seminar was to provide an opportunity for emergency services’ personnel, who may have a role to play in the dissemination of best practice at the operational, tactical and strategic command levels, to study problems which might arise from major civil emergencies with particular reference to the need for a co-ordinated response. The information regarding the exercise – Exercise Scorpio – made it clear that it was the most appropriate of all those run by the college for gathering EMCRS data. The data were provided by two stagings of this exercise. The trainees were members of the emergency services and local authority emergency planning officers. There were 60 trainees taking part in each exercise. The emergency service personnel were brought together in multi-agency groups, called syndicates, to discuss Exercise Scorpio. These syndicates were pre-selected, i.e. the members had applied to attend the seminar as a group.
Syndicates comprised Police, Fire, Ambulance and Local Authority personnel from the same district, county or region. The exercise required the trainees to describe their response to Exercise Scorpio from initial response to the restoration of normality. There are various well-defined phases in response to the disaster scenario, which involve different worksystem configurations and different worksystem behaviours. In the model, only one worksystem configuration and its behaviours are modelled – that of the initial response phase. This phase is considered critical, since co-ordination problems at this phase can have the most serious effects on subsequent disaster stability.

4.1. Exercise Scorpio narrative

Newford is a market town in the county of Crownshire. The main town centre is built around the main A338 which bisects the town in a north/south direction. In the centre of the town a railway, carried on an embankment and bridge, runs directly across the town from east to west. From the railway bridge, in a northerly direction, the A338 is inclined and at the bottom of the incline is a canal with moorings for canal boats. This is a very busy holiday waterway and is a popular stopping point.

At approximately 0930 h on a weekday during school term time, a tanker train en-route from a refinery to an airport fuel depot is derailed whilst passing over a railway bridge.

The bridge, which is a steel Victorian structure, carries the railway over the A338, which provides the main access into the town centre. It is market day and there are numerous market stalls set up on the side of the roadway on either side of the bridge. The market and town are a popular destination for locals and visitors from out of the town area, including foreign citizens on sightseeing tours through the area. There are a number of housing developments behind the shops in the High Street on each side of the railway bridge, and a nursery school with 60 toddlers is sited approximately 400 m away. A cottage hospital with 30 beds is some 500 m to the south-east of the bridge and a primary school with 200 pupils some 800 m to the south.

The train consists of a diesel electric locomotive and eight tank cars, each fully loaded with 100 tonnes of AVTUR Aviation Fuel (SIN 1863 with an emergency action code 3 (Y)E). During the derailment one of the tank cars is ruptured and aviation fuel flows down the sides of the embankment onto the roadway and into adjoining properties. Flammable vapours from the fuel have been ignited by an open gas burner from a catering caravan.

At the time the explosion occurred, a tourist bus was passing beneath the railway bridge. The bus is carrying 45 tourists of whom 25 are Japanese and German nationals.

The explosion has created severe structural damage to the railway bridge and to premises adjacent to the bridge in an approximate 30 m radius, and has created major leaks in two of the other tank cars. Structural damage of a moderate nature has occurred within an approximate 100 m radius. At least 50 people have been killed, including some of the foreign tourists. Many people have received burns from the flashover of the explosion and numerous people are trapped and injured in properties, beneath the bridge structure and in vehicles on the roadway. A number have also been contaminated by aviation fuel. There are numerous fires in the area.

The leaking fuel has run down the road incline to the north of the bridge and is entering the canal and watercourses at the bottom of the incline. The river flows from east to west but there is no particular flow on the canal. The wind is north westerly Force 2–3. There are several barges used for residential purposes moored on the canal.

Early information from witnesses suggests youths have been seen running from the section of rail track where the derailment took place and that vandalism may be responsible for the derailment.

No recording device could be used for gathering data as each syndicate was located separately. Therefore, during the syndicate discussions data were collected through note taking by the researcher. These notes were an aide memoire for the researcher. All the syndicates were then brought together and asked to present their responses to the group as a whole. General information from these presentations was recorded in bullet-point form on a printable white-board. These records were printed for the researcher.

For each of the emergency services, the trainees were asked to describe: what their roles would be for the scenario; what their response would be to the scenario; what problems they would face; what problems the other services would face; and what potential conflicts there would be with the other services. The raw data recorded required explanation, as many aspects of the Emergency Service personnel responses were couched in language specific to the Emergency Services and would therefore be unclear or uninformative for a naïve reader. The researcher thus ‘expanded’ the data to express as much information as possible. This expansion was based on: the discussion within the syndicates; further discussion within the seminar between the Emergency Services trainees and their trainers; Emergency Services planning documentation; and information given by the Home Office Emergency Planning Research Group. Data were collected at two different stagings of Exercise Scorpio.

5. EMCRS model development

The EMCRS models were constructed by application of the HCI-PCMT framework axioms and representation to the data collected from two stagings of Exercise Scorpio. The application of the HCI-PCMT framework axioms to EMCRS will extend the framework to accommodate the EMCRS. Section 5.1 will present the HCI-PCMT framework axioms applied to EMCRS data. The HCI-PCMT framework representation was applied to the two datasets to produce two different EMCRS models. These models were then combined to produce EMCRS Model 1, which is presented and described in Section 5.2.

To reiterate, for the reader’s benefit: the HCI-PCMT framework makes a distinction between the interactive worksystem (users and computers/devices) and its domain of application or work domain.

5.1. HCI-PCMT axioms for EMCRS

5.1.1. Axiom 2.1 HCI-PCMT EMCRS design problems

HCI-PCMT EMCRS design problems and their possible solutions, generated by specify-and-implement design practice, entail the specification of the implementable3 planning (structures and) behaviours and control (structures and) behaviours of the emergency service personnel and emergency service devices of the worksystem, such that when they interact with the perception (structures and) behaviours and execution (structures and) behaviours of the emergency service personnel and emergency service devices, they provide support for a disaster, such that the actual level of performance falls within some desired level of performance.

5.1.2. Axiom 2.2 EMCRS domain: multiple task work

The domain is described at both an abstract and a physical level. At the highest level of description (Abstract level 2) is the Disaster object. The attributes and values for the Disaster object were conceptualised as stability and normality, with values along a continuum. Thus, the overall task of the EMCRS is to transform the Disaster object to a desired level of stability and normality. The performance of the EMCRS worksystem is expressed by the transformation of the Disaster object’s attribute values. Each task carried out by the worksystem transforms the attribute values of the disaster object. These attribute values change by manipulation of the values of the attributes of the sub-objects of the domain. These sub-objects’ attribute value changes are affected by the sub-tasks of the EMCRS, which are the individual agency tasks (also the multiple tasks in EMCRS). Each of these sub-tasks will have associated domain sub-object transformations – sub-transformations. The required transformation of the Disaster object can be divided into a number of sub-transformations, concerning particular sub-objects and their attributes. The attribute normality was conceptualised from the notion that at the beginning of the disaster scenario the ‘disaster’ is chaos, and the work of the EMCRS is ultimately to restore normality, i.e. to bring the disaster under control. Thus, the more desirable the level of normality the better the performance of the EMCRS. The second attribute – stability – was conceptualised through an understanding of the expected overall performance of the EMCRS worksystem – that of stabilising the disaster (preventing further loss of life and containing fires and other hazards). Both of these attributes’ values are changed by the transformation of the sub-object attribute values.

The other abstract objects, which are sub-objects of the disaster object, were conceptualised from: the primary objectives of the EMCRS (see Section 2); the primary roles/tasks for the emergency services (Anon., 1994); information from the exercise narrative; and the data. The sub-objects of the domain – the Lives sub-object, Disaster Character sub-object, Disaster Scene sub-object, Property sub-object, Environment sub-object and Emergency Service sub-object – are at a lower abstract level of description, Abstract level 1. At the physical level of description, the abstract sub-objects of the domain are realised as physical objects. Abstract level 1 objects have realisation attributes whose values specify the physical objects of their realisation. These three levels are the same as those for the office administration domains. Vertical relationships exist between the values of attributes at different levels of description. The realisation relationship between Abstract level 1 objects and Physical level objects is a many-to-one relationship. The values of physical object attributes determine, through emergence, the values of Abstract level 1 attributes. In turn, the values of attributes of the Disaster object (Abstract level 2) are determined by an emergence relationship from the values of attributes at Abstract level 1 – the sub-object attribute values. Horizontal relationships also exist, between the values of attributes at the same level of description. For example, at Abstract level 1, the Disaster Scene sub-object attribute scene containment value contained will require the Lives sub-object attribute emergency services personnel safety to have a value of equipped. A task is the required state transformation of the Disaster object, including all lower level transformations of the associated sub-objects.
As there is only a single Disaster object at the highest level of description, multiple task work is that domain work in which, at the highest level of description, the Disaster object’s attributes are undergoing independent, but temporally overlapping transformations. This HCI-PCMT extension is to accommodate a multi-user planning system. (The EMCRS domain thus contrasts with the office administration domain, where there are multiple objects at the highest level of description.)
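The vertical (emergence) relationships described above might be sketched as a function that derives an Abstract level 2 Disaster object attribute value from Abstract level 1 sub-object attribute values. The specific mapping rule and the sub-object attribute values below are invented for illustration and are not taken from Hill's models:

```python
def disaster_stability(sub_objects):
    """Abstract level 2 'stability' emerges from Abstract level 1 sub-object
    attribute values (here, simply the proportion of desired values reached)."""
    desired_reached = [
        sub_objects["Disaster Scene"]["scene containment"] == "contained",
        sub_objects["Lives"]["survivor status"] == "treated",
        sub_objects["Environment"]["fuel leak"] == "stemmed",
    ]
    return sum(desired_reached) / len(desired_reached)

# Hypothetical Abstract level 1 state early in the response.
sub_objects = {
    "Disaster Scene": {"scene containment": "contained"},
    "Lives": {"survivor status": "untreated"},
    "Environment": {"fuel leak": "leaking"},
}
stability = disaster_stability(sub_objects)  # only the scene is contained so far
```

On this sketch, each agency sub-task transforms its own sub-object attribute values, and the Disaster object's stability value changes only through those lower-level transformations, which is the sense in which the higher-level value is emergent.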

5.1.3. Axiom 2.3 EMCRS worksystem: planning and control behaviours and structures

Four types of abstract behaviour are generic to the worksystem and undifferentiated between the users and devices. These behaviours are planning, control, perception and execution. The four types of abstract behaviour are supported by abstract structures, also undifferentiated between users and devices. These abstract structures comprise four types of process, corresponding to the four types of behaviour: a planning process, a controlling process, a perceiving process and an executing process. There are two types of representation: a plan representation and a knowledge-of-tasks representation. The EMCRS abstract worksystem is defined in terms of these abstract cognitive structures. The physical worksystem structures were identified from analysis of the data, from information about EMCRS structures, and from information about the required personnel and resources for particular worksystem behaviours (specified in the roles/tasks of the different services). Thus, from information on the EMCRS, we have commanders at the tactical and operational levels of control. From information on roles/tasks for the emergency services, and from the data, one role of the Police Service is the preservation of the scene for evidence and enquiries. Preserving the scene requires Police personnel to manage the scene. All the physical structures required for model diagnosis of design problems from the identified conflicts are represented in the model. These abstract and physical structures are shown in Fig. 2.

Fig. 2. EMCRS physical and abstract structures.

The behaviours associated with these structures are as follows. Perception behaviours are those whereby the worksystem acquires information about Property, Lives and other domain objects, such as their risk status, and records these values. The states of the domain objects form the contents of the knowledge-of-tasks representation. Perception behaviours update the contents of the knowledge-of-tasks representation, based on the reading of the domain, for example, that there are properties at risk. Execution behaviours are those that carry out the work of the worksystem by transforming the values of domain object attributes directly; for example, treating injured survivors at the scene transforms the Lives sub-object attribute survivor status from untreated to treated. These execution behaviours in turn transform the Disaster object to a more desired level of stability along its continuum. Planning behaviours are those that specify what and/or how tasks will be accomplished, in terms of required object state transformations and/or required worksystem behaviours; for example, that the site should be declared a crime scene, which requires Police personnel to manage it. Control behaviours select which behaviours should be carried out next, based on the contents of the plan and knowledge-of-tasks representations. A plan representation structure embodies the plans used in the combined response. These structures are distributed across the different levels of the worksystem, i.e. strategic, tactical and operational.
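As a rough illustration only, the four generic behaviours and the two representations might be sketched as a minimal loop. All function names and the domain encoding are invented here; the sketch shows only how perception feeds the knowledge-of-tasks representation, planning produces the plan representation, control selects the next behaviour, and execution transforms the domain.

```python
# Domain state (illustrative encoding of one sub-object attribute).
domain = {"properties_at_risk": True, "scene_containment": "un-contained"}
knowledge_of_tasks = {}   # representation updated by perception behaviours
plan = []                 # representation produced by planning behaviours

def perceive():
    # Perception: acquire domain-object states into knowledge-of-tasks.
    knowledge_of_tasks.update(domain)

def plan_tasks():
    # Planning: specify required object state transformations.
    if knowledge_of_tasks.get("scene_containment") == "un-contained":
        plan.append(("scene_containment", "contained"))

def control():
    # Control: select which behaviour should be carried out next,
    # based on the contents of the plan and knowledge-of-tasks.
    return plan.pop(0) if plan else None

def execute(action):
    # Execution: transform domain object attribute values directly.
    attribute, value = action
    domain[attribute] = value

perceive()
plan_tasks()
action = control()
if action:
    execute(action)
```

One pass of the loop leaves the scene containment attribute transformed to contained, the state transformation the text attributes to the Fire Service.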

5.1.4. Axiom 2.4 EMCRS performance

Performance is some function of: the task quality associated with the multiple task work (for example, lives saved, fires contained); and the resource costs associated with the worksystem planning and control behaviours (for example, plans correct, firemen in place). (As above, resource costs are not differentiated between the users and devices; also, resource costs are specific to the planning and control worksystem structures and behaviours.)

All the HCI-PCMT axioms have now been re-expressed for the EMCRS. These axioms and the HCI-PCMT framework representation were applied to the data from the training exercises to produce EMCRS models. Details of how the models were constructed will not be presented here, as the remit of the current paper is to show how the models can be used for design problem diagnosis.4 Model 1 is the combined model from both datasets.

5.2. EMCRS Model 1

Fig. 2 shows EMCRS Model 1, the model constructed by application of the HCI-PCMT framework to the combined datasets 1 and 2. EMCRS Model 1 shows all the abstract and physical structures of the worksystem identified from both datasets, on the left-hand side of the diagram. At the abstract level, the three-tier structure of the EMCRS (operational, tactical and strategic) is depicted (but interactions within and between levels are not shown). These abstract structures are distributed across the physical worksystem structures. This distribution is not shown here, due to the limitations of the representation. The abstract structures representation is the same as that shown in Fig. 1. The physical level shows all those structures that have been identified as necessary to inform the abstract structures of the worksystem for the conflicts identified within both datasets. On the right-hand side, the domain objects, attributes and values are shown, from both datasets, both abstract and physical. The physical level objects' attributes and values are not shown in full, as the representation shown here cannot support this. (For further information see Hill, 2005.) The links between the abstract sub-objects and the physical objects, shown with dotted lines, define the abstract-to-physical realisation relationship. The realisation relationship has a one-to-many mapping. The full lines between the Disaster object and the other sub-objects define a part-of relationship. The attributes marked with a star (*) are dispositional, that is, they need to be perceived by the worksystem but are not changed by it.

Although a strategic level of command is represented in Model 1, this is for completeness of the EMCRS representation; the structures of the strategic level are not referred to in the model descriptions. The reason is that, although a strategic level of command was set up in the exercise, and so is included in the representation, this command level was not activated in the initial response phase of the exercise. The data for the modelling come only from the initial response phase, and therefore do not refer to the strategic command level. It is often the case that a strategic level of command is not activated in major incidents until later in the response, or sometimes not at all, if it is decided that it is not required.

The EMCRS Model 1 is intended to be a model for diagnosing planning and control co-ordination design problems of the EMCRS, and offering up potential solutions to these design problems. A method was developed to aid in the use of the model for diagnosis. Thus, once constructed, Model 1 was used to diagnose EMCRS coordination design problems, through application of this method. A planning and control co-ordination problem is diagnosed if actual performance does not equal desired performance, where performance is expressed as a function of task quality and resource costs to the system, i.e. performance expresses whether goals have been achieved and at what cost. The method therefore guides the identification from the model of the planning and control behaviours of the interactive worksystem associated with a given task and the corresponding desired domain object transformations, such that the performance of the system for this particular task can be expressed. Being able to identify where (at a planning and control level) within the system a co-ordination design problem is occurring enables better reasoning about potential design solutions to these problems. The next Section describes the method for co-ordination design problem diagnosis.

6. Method for co-ordination design problem diagnosis

Application of the method and the diagnosis of one co-ordination design problem will now be described (see Table 1).

6.1. Method stage 1 – identifying potential conflicts

The training scenario questions included one on inter-agency conflicts. From the data recorded in responses to this question, five sets of conflicts were identified. This section will describe one of these conflicts as identified in the data.

Conflict 2 Cordon restrictions – due to information in the Exercise Scorpio narrative, regarding structural damage to buildings and a number of fires at the scene, the Fire Service set up an inner cordon to contain the scene. The Fire Service are responsible for the safety of all personnel within the cordon. Access is restricted to those with regulation safety equipment. The Ambulance Service need access to locate casualties and either treat them at the scene, or transport them to hospital. The Ambulance Service do not have regulation safety equipment and are not allowed access to the casualties. The Fire Service task of containing the scene conflicts with the Ambulance Service task of locating and treating casualties.

Having identified the behaviour conflict, stage one of the method is now complete. The next stages of the diagnosis method are now applied to the identified conflict, to diagnose co-ordination design problems. Co-ordination design problems are diagnosed, if actual performance does not equal desired performance.

6.2. Method stage 2

The behaviour conflict arose as follows. The Fire Service operational commander (physical structure) carries out perception behaviours that update his knowledge-of-tasks with the information that there are structurally damaged buildings, fires and leaking hazardous fuels. He then carries out control behaviours that direct him to consult the major incident plan. The plan specifies that the Fire Service is responsible for setting up an inner safety cordon, when there are hazards and dangers at the scene, and for maintaining the safety of all those within the scene. Based on this plan, and the knowledge-of-tasks (about the fires etc.), the operational commander carries out control behaviours that direct him to consult the operational plan for setting up a cordon. The operational plan gives guidance for cordon set-up and regulations. The operational commander then carries out control behaviours that direct him to carry out planning behaviours, based on the operational plan and the knowledge-of-tasks. The planning behaviour specifies how the inner cordon should be set up and what the regulations are for entering it. The operational commander then carries out control behaviours that direct him to carry out an execution behaviour of setting up the cordon. This execution behaviour is carried out by the operational personnel (firemen) setting up the cordon and maintaining the specified safety regulations. This physical object manipulation transforms the abstract Disaster Scene sub-object's attribute scene containment from un-contained to contained, thus transforming the Disaster object attribute of stability to a more desired level along its continuum. (At the same time, other operational firemen and their fire equipment are controlling the fire and stabilising buildings, transforming the attributes of the Property and Disaster Character sub-objects, which again transform the Disaster object's stability attribute towards its desired level.)

Stage 1 – Action: From the data, identify tasks carried out by each agency in response to the scenario, where there are potential conflicts. Example: set-up of the inner cordon by the Fire Service; access to casualties for triage, without regulation safety equipment, by the Ambulance Service.
Stage 2 – Action: Use Model 1 to describe the behaviours associated with each task and the corresponding desired domain sub-object transformations. Example: desired domain sub-object transformations are those that would be carried out if an agency's behaviours were not hindered. For the above example, one desired domain sub-object transformation for the Ambulance Service would be: Lives sub-object attribute survivor triage status from untriaged to triaged.
Stage 3 – Action: Identify behaviour conflicts, i.e. which domain sub-object transformations will hinder other domain sub-object transformations. Example: the Fire Service behaviours of transforming the Disaster Scene sub-object attribute scene containment from un-contained to contained have hindered the Ambulance Service behaviours and corresponding domain sub-object transformations.
Stage 4 – Action: Use Model 1 to identify whether other domain sub-object transformations will be hindered as a 'knock-on effect' of the initial behaviour conflict. Example: the Ambulance Service not being able to transform the Lives sub-object attribute survivor triage status from untriaged to triaged means that the Lives sub-object attribute survivor treatment status cannot be transformed from not treated to treated. Also, as the Ambulance Service cannot access the casualties, the Fire Service will have to move the casualties to the edge of the cordon to enable triage to take place. In so doing, the Fire Service will reduce their fire-fighting and property-protection behaviours, as personnel will need to be taken away from these tasks to carry out rescue behaviours, and will therefore not be able to transform the Disaster Character sub-object attribute fire status from uncontrolled to controlled, or the Property sub-object attribute buildings/vehicles status from at risk to not at risk.
Stage 5 – Action: Identify the performance effect of the hindered domain sub-object transformations by referring to the overall common objectives and priorities of the EMCRS (i.e. to save life, to prevent escalation of the disaster, etc.). The primary priority for all services is to save life; therefore, hindering any domain sub-object transformation that reduces life saving by the EMCRS will have the greatest impact on performance. Example: in the current example, hindering triage and subsequent treatment transformations of the Lives sub-object by the Ambulance Service will greatly affect the performance of the EMCRS with respect to the primary priority of saving life. Reducing the fire-fighting and property-protection behaviours of the Fire Service will have an effect on the secondary priority of preventing escalation of the disaster. Thus, Model 1 gives a performance expression in which actual performance is less than planned/desired performance, and a performance deficit is shown for both agencies.

Table 1 Method for co-ordination design problem diagnosis.
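Stage 3 of the method, identifying which transformations hinder which, can be sketched as a small check over agency transformations. The tuple encoding and the explicit hindrance relation are assumptions introduced here for illustration; the transformations themselves follow the cordon example in Table 1.

```python
# Each transformation: (sub-object, attribute, from-value, to-value).
fire_service = [("Disaster Scene", "scene_containment",
                 "un-contained", "contained")]
ambulance_service = [("Lives", "survivor_triage_status",
                      "untriaged", "triaged")]

# Hindrance relation (an assumption of this sketch): once the scene is
# contained, unequipped Ambulance access to casualties for triage is blocked.
hinders = {
    ("Disaster Scene", "scene_containment", "contained"):
        [("Lives", "survivor_triage_status")],
}

def behaviour_conflicts(completed, attempted):
    """Return attempted transformations hindered by completed ones
    (Stage 3: identify behaviour conflicts)."""
    blocked = []
    for obj, attr, _frm, to in completed:
        for hobj, hattr in hinders.get((obj, attr, to), []):
            for t in attempted:
                if t[0] == hobj and t[1] == hattr:
                    blocked.append(t)
    return blocked

conflicts = behaviour_conflicts(fire_service, ambulance_service)
```

The hindered triage transformation found here is the one Stage 4 then follows through to its knock-on effects on treatment and fire fighting.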

The operational commander then carries out control behaviours that direct him to inform the tactical incident officer of the inner cordon set-up. (Some kind of internal communication behaviour is carried out to inform the incident officer about the cordon set-up.) The tactical incident officer (and his communication equipment) then carries out perception behaviours, which update his knowledge-of-tasks about the inner cordon set-up. The tactical incident officer then carries out control behaviours that direct him to consult his plan to assess the resources required for the setup. He then carries out planning behaviours to specify the resources required for this task.

At the same time, the Ambulance operational senior officer (tactical level), with his communication equipment, is carrying out perception behaviours that update his knowledge-of-tasks with the information that there are a number of casualties at the scene. He then carries out control behaviours that direct him to consult his major incident plan. According to the plan, casualties must be triaged and then either treated at the scene and/or transported to hospital. He then carries out control behaviours that direct him to carry out planning behaviours, to specify in the operational plan what personnel are required to triage, treat and/or transport the casualties. Based on this plan and the knowledge-of-tasks, he then carries out control behaviours to direct the execution behaviours of triaging, treating and/or transporting casualties. These execution behaviours are carried out by the Ambulance operational personnel, for triage and treatment, with their ambulances for transport. It is the manipulation of the physical casualties that transforms the abstract Lives sub-object attributes survivor triage status from untriaged to triaged, survivor treatment status from not treated to treated, and survivor transport status from not transported to transported. In turn, these sub-object transformations move the Disaster object towards its desired level of stability.

6.3. Method stage 3

However, the Ambulance operational senior officer (tactical level, with his communication equipment) has not carried out perception behaviours that update his knowledge-of-tasks with the information that the scene is now contained and that regulation safety equipment is required to enter it. Therefore, when the Ambulance personnel attempt to carry out their execution behaviours, they do not fulfil the safety requirements that would allow them to enter the inner cordon. The execution behaviours of triaging, treating and/or transporting casualties therefore cannot be carried out. Thus, they cannot transform the abstract Lives sub-object attributes and so cannot transform the Disaster object to a more desired level of stability. A behaviour conflict has thus been identified. Identifying behaviour conflicts enables co-ordination design problem diagnosis.
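The mechanism just described, a missing perception update causing an execution behaviour to fail its entry precondition, can be sketched as follows. The names, the precondition check and the state encoding are assumptions for illustration only.

```python
# Scene state set by the Fire Service (per the walkthrough above).
scene = {"scene_containment": "contained",
         "entry_requires": "regulation safety equipment"}

# Ambulance officer's knowledge-of-tasks: the perception update about
# containment never occurred, so no 'scene_containment' entry is recorded.
ambulance_knowledge_of_tasks = {"casualties_present": True}

def can_execute_triage(equipment):
    """Execution behaviours inside the cordon require the regulation
    safety equipment once the scene is contained (illustrative rule)."""
    if scene["scene_containment"] == "contained":
        return equipment == scene["entry_requires"]
    return True

# Ambulance personnel arrive without the regulation equipment, because
# their plan never specified it.
triage_possible = can_execute_triage(equipment=None)

# A behaviour conflict: execution blocked while the officer's
# knowledge-of-tasks still lacks the containment information.
conflict = (not triage_possible
            and ambulance_knowledge_of_tasks.get("scene_containment") is None)
```

The conflict flag corresponds to the diagnosis in the text: the Lives sub-object attributes cannot be transformed, so the Disaster object cannot be moved towards its desired stability.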

6.4. Method stage 4

The primary objective of the EMCRS is to save life. To move the Disaster object towards its desired level of stability, the Lives sub-object attribute values must be changed. Therefore, the Fire Service must carry out rescue execution behaviours to move the survivors to the edge of the inner cordon, so that the Ambulance Service can carry out their execution behaviours and thus increase the stability of the Disaster object. However, the Fire Service carrying out rescue execution behaviours will decrease the resources available for the execution behaviours of controlling the hazard, thus decreasing the effectiveness of the response to the secondary objective of preventing escalation of the disaster. The outcome is that the performance of the EMCRS is reduced even further.

6.5. Method stage 5

The Fire Service behaviours of containing the scene have conflicted with the Ambulance Service behaviours of triaging, treating and/or transporting casualties, resulting in a behaviour conflict. This behaviour conflict results in a co-ordination design problem, which may relate to reduced overall EMCRS performance, either through hindered goal achievement (e.g. lives not saved) or through unacceptable system resource costs (e.g. excessive firefighter workload). The model diagnoses an EMCRS co-ordination design problem as actual overall performance being less than desired performance with respect to the EMCRS common objectives. This co-ordination design problem relates to the common EMCRS objectives, i.e. to save life (casualties not rescued) and to prevent escalation of the disaster (fire not contained). Model 1 describes a performance deficit, related to hindered goal achievement and unacceptable resource costs, for the Ambulance and Fire Services. Containment of the scene with safety requirements by the Fire Service reduces casualty triage and treatment (life saving) by the Ambulance Service. The Ambulance Service having to wait to treat casualties will increase their treatment workload. The Fire Service scene-entry safety requirements, by excluding the Ambulance Service, reduce Fire Service fire containment, as the Fire Service have to rescue casualties and move them to the edge of the scene. The Fire Service workload will thus increase.

A design problem diagnosis by the EMCRS model has been described. To authenticate this design problem diagnosis, a potential design solution to this problem will now be suggested. The execution behaviours of the Fire Service (setting up an inner cordon and maintaining specified safety regulations) have been identified as part of the cause of the identified design problem. How these execution behaviours came about can be identified by the model. The behaviours that affected the Fire Service execution behaviours of inner cordon set-up were the planning and control behaviours of the operational commander. Planning behaviour, as specified in the model, is based on the plans and the information in the knowledge-of-tasks representation (in this case, for example, structural damage and many fires at the scene). The plan that the operational commander consulted should have specified that set-up of the inner cordon should not be progressed without the knowledge of the Ambulance Service (Chief and Assistant Chief Fire Officers' Association, 1994). The Fire Service operational commander thus had an inappropriate representation of the plan, which caused inappropriate planning behaviours, which contributed to the design problem. Now that it has been identified where in the planning system the problem is occurring, reasoning about potential solutions to this design problem is possible. A potential solution is to enable adequate training in the procedures specified in the operational plans, such that inappropriate planning behaviours are not carried out and actual performance equals desired performance for inner cordon set-up by the Fire Service, i.e. the set-up is as desired and with acceptable resource costs.

The Ambulance Service execution behaviours of accessing casualties for triage and treatment are identified as part of the cause of this co-ordination design problem. These execution behaviours are affected by the planning and control behaviours of the Ambulance operational officer. Planning behaviour, as specified in the model, is based on the plans and the information in the knowledge-of-tasks representation (in this case, for example, casualties and an inner cordon). The plan that the Ambulance Service officer consulted should have specified that liaison should take place with the Fire Service about what safety equipment is required for entering the cordon (The National Health Service Ambulance Service, 1994). The Ambulance Service operational officer had an inappropriate representation of the plan, which caused inappropriate planning behaviours, which contributed to the design problem. A potential solution is to enable adequate training in the procedures specified in the operational plans, such that inappropriate planning behaviours are not carried out and actual performance equals desired performance for Ambulance Service triage and treatment of casualties, i.e. triage and treatment are as desired, with acceptable resource costs.

7. Summary and future work

This paper has presented research, constituting HCI substantive design knowledge, in the form of models that support diagnosis of design problems and reasoning about solutions to these problems.

Comment 7
There is general agreement that the requirements phase is the foundation upon which the rest of the system development life-cycle is built (Sommerville, 1989). Requirements can be divided into different categories – functional and non-functional (IEEE, 1984); also vital and desirable (BSI, 1986). More specific types of requirements may also be identified, including organisational, user interaction and interface requirements (Denley, 1999). Of concern here are User Requirements (Carlshamre and Karlsson, 1996), because, although part of the initial phase of the system development cycle (Newman, 1994), they do not appear to include, explicitly at least, the concept of design problem as such, as referenced here by Hill (although they do not exclude it explicitly either). The omission is important, because Hill is clear that her models and method are intended to: 'Support diagnosis of specific design problems and reasoning about potential design solutions' (Section 1.0). The diagnosis of design problems and the reasoning about potential design solutions are performed by designers, as part of their practice, by which design proceeds through iterations of successive cycles of specification and implementation. The question then arises: what is the relation between user requirements and design problems?

One possible relation is that user requirements and design problems are one and the same thing. That is, there is no difference between them. Although it is not totally clear, Newman (1994) might be understood as taking this view: ’Recognising the need for an artifice, and thus identifying a problem in computer systems design whose solution will meet this need (the initial stage of the engineering design process)’. This view, however, is rejected. Following the HCI Discipline and HCI Design Problem conceptions, in the manner of Hill’s research, a design problem occurs, when actual performance, expressed as Task Quality, User Costs and Computer Costs (see earlier) does not equal (is usually less than) desired performance, expressed in the same way. In contrast, user requirements have no such expression or constraints, even allowing user requirements to conflict.

This difference indicates that user requirements and design problems are not one and the same concept. Rather, it suggests that design problems can be expressed as (potential) user requirements, but not vice versa. Salter appears to agree with this asymmetric relationship, although his terms differ. The Specific Requirements Specification (‘design problem’) is an abstraction over the Client Requirements (‘user requirements’). The Specific Artifact Specification (‘design solution’) is an abstraction over the Artifact. The Client Requirements/Artifact relationship is derived and verified empirically. The Specific Requirements Specification/Specific Artifact Specification is derived and verified formally.

Whatever the terms used, however, the general point for HCI research is that differences between User Requirements and Design Problems need to be both explicit and clear.

 

This paper has described the development of diagnostic models of the EMCRS, through application of the HCI-PCMT framework to data from EMCRS training exercises. The HCI-PCMT framework was based on the Dowell and Long (1989, 1998) conception of HCI. The scope of the HCI-PCMT framework needed to be extended in order to be applied to the EMCRS. The HCI-PCMT framework, as shown in Fig. 1, has a representation and a set of axioms for producing diagnostic design models. The framework axioms for the EMCRS have been presented in Section 5. The EMCRS axioms address some of the differences in nature between the EMCRS and the other domains previously modelled by the HCI-PCMT framework. The EMCRS axioms are considered to be extensions to the HCI-PCMT framework. The most important EMCRS axiom relates to how to describe a task and its sub-tasks. These descriptions have important implications for the characterisation of multiple task work and the subsequent identification of behaviour conflicts exhibited by the data. Thus, the EMCRS multiple tasks are the individual agency tasks, which are identified as sub-tasks of the domain. These sub-tasks carry out sub-object transformations in the domain, which affect Disaster object transformations. Behaviour conflicts are identified when sub-objects are not transformed as desired. Behaviour conflicts are diagnosed as co-ordination design problems by the EMCRS model. Without the EMCRS axiom extensions, co-ordination design problem diagnosis would prove difficult, if not impossible.

A diagnosis method for identifying co-ordination design problems using EMCRS Model 1 has been presented and then applied to identify one co-ordination design problem – cordon restrictions. However, this method is at an early stage of development and must be further developed for operationalising and testing. Nevertheless, it has been applied in the current research, albeit by the method developer, and has thus been shown to facilitate EMCRS model diagnoses. Future research would develop this method for operationalisation, testing, generalisation and so validation. Within the original research, five co-ordination design problems were diagnosed using this method. It is recognised that, without validation against further examples, there is a question about the generalisability of these design problems. However, the diagnosed design problems were discussed with the Emergency Planning Research Group at the Home Office, who agreed with their existence and importance. The EMCRS data gathered for this research were from training exercises. It is understood that data from a real disaster would give the EMCRS model greater ecological validity than data from training exercises. People with access to such data (for example, senior emergency service training and research personnel) should, however, be able to apply the current framework to such data and produce models that would support co-ordination design problem diagnosis. However, due to the complex nature of the EMCRS, data at any lower level of description than that presented in the training exercises would have been very difficult to analyse effectively. Also, the EMCRS is a management system for co-ordination of the planning and control in response to disasters. Thus, the EMCRS is specified at a high level, with respect to the operational, tactical and strategic levels of command.
The EMCRS does not specify, within a service, how each individual person should co-ordinate with respect to their individual agency roles. The training exercises, from which the EMCRS data were gathered, were set up specifically to train the emergency service officers in how to use the EMCRS for the management of response with respect to the operational, tactical and strategic levels of command. Thus, the trainees were all emergency service officers, and not operational level personnel. The exercise data were thus viewed as appropriate for modelling the EMCRS, for planning and control of the different command levels for disaster response. The aim of the research was to model the EMCRS to diagnose design problems, more specifically planning and control co-ordination design problems. Thus, again, data at this level of detail were considered appropriate for meeting the current aims. Future work would, however, involve validation of the EMCRS model by application to data from an actual disaster.

Acknowledgements

The research presented in this paper was carried out within a PhD supervised by Professor John Long, and second supervised for the last year by Professor Ann Blandford. This work was part funded by the Home Office Emergency Planning Research group under the EPSRC CASE scheme. I would like to thank Chris Johnson and the two anonymous reviewers for their constructive comments in the revision of this paper. Thanks also go to the special issue editors for pointing me in the right direction on a number of issues.

References

Ambros-Ingerson, J.A., 1986. Relationships between planning and execution. Quarterly Newsletter of the Society for the Study of Artificial Intelligence and Simulation of Behaviour 57, 11–14.

Anon., 1994. Dealing with Disaster, second ed. Home Office Publication, London (HMSO).

Anon., 2001. Dealing with Disaster, third ed. Cabinet Office Publication, Liverpool, Brodie.

Greater London Authority, 2006. Report of 7 July Review Committee. <www.london.gov.uk/assembly/reports/7july/report.pdf>.

Blandford, A., Wong, W., 2004. Situation awareness in emergency medical dispatch. International Journal of Human-Computer Studies 61 (4), 421–452.

Blandford, A.E., Wong, B.L.W., Connell, I., Green, T.R.G., 2002. Multiple viewpoints on computer supported team work: a case study on ambulance dispatch. In: Faulkner, X., Finlay, J., Détienne, F. (Eds.), People and Computers XVI: Proceedings of HCI’02. Springer, pp. 139–156.

Card, S.K., Moran, T., Newell, A., 1983. The Psychology of Human Computer Interaction. Lawrence Erlbaum, Hillsdale, NJ.

Chief and Assistant Chief Fire Officers’ Association, 1994. Fire Service Major Incident Emergency Procedures Manual. CACFOA Services Ltd, Tamworth.

Dowell, J., Long, J., 1989. Towards a conception for an engineering discipline of human factors. Ergonomics 32 (11), 1513–1536.

Dowell, J., Long, J., 1998. Conception of the cognitive engineering design problem. Ergonomics 41 (2), 126–139.

Emergency Planning College, 1995. Management Levels in Response to a Major Incident: The Principles of Command and Control.

Fennell, D., 1988. Investigation into the King’s Cross Underground Fire. Department of Transport, London (HMSO).

Hayes-Roth, B., 1985. A blackboard architecture for control. Artificial Intelligence 26, 251–321.

Hidden, A., 1989. Investigation into the Clapham Junction Railway Accident. Department of Transport, London (HMSO).

Hill, R., 2005. Diagnosing Co-ordination Problems by Modelling the Emergency Management Response to Disasters. Unpublished PhD thesis, University of London.

Hill, B., Long, J.B., Smith, W., Whitefield, A.D., 1995. A model of medical reception—the planning and control of multiple task work. Applied Cognitive Psychology 9, 81–114.

Hollnagel, E., Mancini, G., Woods, D., 1988. Cognitive Engineering in Complex Dynamic Worlds. Academic Press, London.

Hutchins, E., 1990. The technology of team navigation. In: Galegher, J., Kraut, R., Egido, C. (Eds.), Intellectual Teamwork. Lawrence Erlbaum, Hillsdale, NJ.

Johnson, P., 1992. Human-Computer Interaction: Psychology, Task Analysis and Software Engineering. McGraw-Hill, London.

Kaempf, G.L., Klein, G., Thordsen, M.J., Wolf, S., 1996. Decision making in complex naval command-and-control environments. Human Factors 38 (2), 220–231.

Klein, G.A., Woods, D.D., 1993. Conclusions: decision making in action. In: Klein, G., Orasanu, J., Calderwood, R., Zsambok, C. (Eds.), Decision-Making in Action: Models and Methods. Ablex, Norwood, NJ, pp. 405–411.

Larkin, J.H., 1989. Display-based problem solving. In: Klahr, D., Kotovsky, K. (Eds.), Complex Information Processing: the Impact of Herbert A. Simon. Lawrence Erlbaum, Hillsdale, NJ.

Leplat, J., 1988. Methodologie von Aufgabenanalyse und Aufgabengestaltung. Zeitschrift für Arbeits- und Organisationspsychologie 1, 2–12.

Long, J., 1987. Cognitive ergonomics and human computer interaction. In: Warr, P. (Ed.), Psychology at Work. Penguin, Harmondsworth.

Long, J., 1996. Specifying relations between research and the design of human computer interactions. International Journal of Human Computer Studies 44 (6), 875–920.

Moray, N., Sanderson, M., Vicente, K., 1992. Cognitive task analysis of a complex work domain: a case study. Reliability Engineering and System Safety 36, 207–216.

Newell, A., Simon, H., 1972. Human Problem Solving. Prentice-Hall, Englewood Cliffs, NJ.

Norman, D.A., 1986. Cognitive engineering. In: Norman, D., Draper, S. (Eds.), User Centered System Design. Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 31–62.

Payne, S.J., 1991. Display-based action at the user interface. International Journal of Man-Machine Studies 35, 275–289.

Rasmussen, J., Vicente, K., 1989. Coping with human errors through system design: implications for ecological interface design. International Journal of Man-Machine Studies 31, 517–534.

Rogalski, J., Samurcay, R., 1993. Analysing communication in complex distributed decision-making. Ergonomics 36 (11), 1329–1343.

Samurcay, R., Rogalski, J., 1991. A Method for Tactical Reasoning (MTR) in emergency management: analysis of individual acquisition and collective implementation. In: Rasmussen, J., Brehmer, B., Leplat, J. (Eds.), Distributed Decision Making: Cognitive Models for Co-operative Work. John Wiley & Sons, Chichester, pp. 287–298.

Shepherd, A., 1989. Analysis and training in information technology tasks. In: Diaper, D. (Ed.), Task Analysis for Human Computer Interaction. Ellis-Horwood, Chichester.

Smith, M.W., Hill, B., Long, J.B., Whitefield, A.D., 1992. Modelling the relationship between planning, control, perception and execution behaviours in interactive worksystems. In: Diaper, D., Harrison, M., Monk, A. (Eds.), People and Computers VII: Proceedings of HCI’92. Cambridge University Press, Cambridge.

Smith, M.W., Hill, B., Long, J.B., Whitefield, A.D., 1997. A design-oriented framework for modelling the planning and control of multiple task work in Secretarial Office Administration. Behaviour and Information Technology 16 (3), 161–183.

Suchman, L.A., 1987. Plans and Situated Actions. Cambridge University Press, Cambridge.

The National Health Service Ambulance Service, 1994. Ambulance Service Operational Arrangements for Civil Emergencies. Internal Ambulance Service document.

Vicente, K.J., 1990. A few implications of an ecological approach to human factors. Human Factors Society Bulletin 33 (11), 1–4.

Whitefield, A.D., 1990. Human computer interaction models and their roles in the design of interactive systems. In: Falzon, P. (Ed.), Cognitive Ergonomics: Understanding, Learning and Designing Human Computer Interaction. Academic Press, London, pp. 7–25.

Woods, D.D., Hollnagel, E., 1987. Mapping cognitive demands in complex problem-solving worlds. International Journal of Man-Machine Studies 26, 257–275.

Wright, P., Fields, B., Harrison, M., 2000. Analysing human computer interaction as distributed cognition: the resources model. International Journal of Human Computer Interaction 15, 1–41.

Young, R., Simon, T., 1987. Planning in the context of human–computer interaction. In: Diaper, D., Winder, R. (Eds.), People and Computers III: Proceedings of HCI’87. Cambridge University Press, Cambridge, pp. 363–370.

Applying the Conception of HCI Engineering to the Design of Economic Systems

Ian K. Salter

JP Morgan Chase & Co., 60 Victoria Embankment, London EC4Y 0JP, United Kingdom

 

 

John Long's Comment 1 on this Paper

I have known Ian Salter since the early nineties. He came to UCL to study for a PhD with me, following a period as a lecturer in Computer Science with an interest in graphical languages. His thesis was entitled ‘The Design of Formal Languages’ (not a modest undertaking). It is still unclear to me who supervised whom. Ian subsequently worked as a research fellow and, together with John Dowell, we researched task-oriented modelling in the domain of air traffic management (1992). Ian’s analytic strengths are of the highest order and I am delighted that he has carried forward ideas from these earlier days into their application to the design of economic systems, his Festschrift contribution. Although I had ample opportunity to provide feedback on earlier versions of the article, as kindly acknowledged by Ian, I have made quite a few comments here with the intention of linking up a range of issues appearing across the Festschrift as a whole.

Abstract

The Long and Dowell conception for HCI design (Long and Dowell, 1989) outlined the general design problem for HCI and contrasted applied science and engineering disciplines of HCI. Salter (1995) sought to clarify the applied science conception through the application of Kuhn’s conception of science. Salter also built upon the work of Long and Dowell to produce a generic conception of engineering design. As part of this work the notion of preference was formalized. Building upon the generic conception, a set of criteria for an engineering discipline is established. A general design problem for economics is outlined in order to apply the generic conception to the field of economics. Roth’s (2002) implicit conception of economic engineering is analyzed against the criteria and the formalized notion of preferences, and found to be a consistent but not complete example of an engineering discipline. The general problem of economic design, based upon Long and Dowell’s approach, is employed to analyze a regulatory response (Turner, 2009) to the global financial crisis that began in 2007 and to develop a design-based solution to the problems. It is argued that the current applied science based responses to the economic crisis are insufficient and that a multidisciplinary engineering approach is necessary. This approach includes consideration of how economic participants interact with computers as part of the financial system.

Reprinted by permission of JPMorgan Chase & Co. © 2009 JPMorgan Chase & Co. All rights reserved.

Acknowledgement: This paper would not have been possible without the aid of Professor John Long, who spent many hours of his time reading and discussing drafts. The errors contained within are however the author’s alone.

E-mail addresses: ianksalter@gmail.com, ian.salter@jpmorgan.com ((The author is not necessarily representing the views or opinions of JPMorgan Chase & Co.))

 

1. Introduction

In 1989 John Long and John Dowell (Long and Dowell, 1989) outlined different conceptions for HCI disciplines based upon the general problem the discipline seeks to address, and the consequent knowledge and practices that arise. Engineering disciplines have problems of design whereas scientific disciplines have problems of understanding and prediction. They conclude that in order to address the design problems of HCI an engineering discipline of HCI is required. The Long and Dowell conception took the form of a set of concepts that were used to structure work on HCI design.

Comment 2

Long and Dowell (1989) characterized three different possible conceptions for HCI – Craft, Applied Science and Engineering. They concluded that an Engineering Conception of HCI, although demanding considerable resources for its development, would offer more effective solution of design problems than the other conceptions. Its design knowledge, in the form of Principles, would offer a better guarantee than either Craft or Applied Science design knowledge. The latter’s address of HCI problems, however, even though less effective than Engineering, was explicitly acknowledged by Long and Dowell.

The economist Alvin Roth, with respect to the field of economics, expresses similar thoughts:

‘Economists have lately been called upon not only to analyze markets, but to design them. Market design involves a responsibility for detail, a need to deal with all of a market’s complications, not just its principal features. Designers therefore cannot work only with the simple conceptual models used for theoretical insights into the general working of markets. Instead, market design calls for an engineering approach’ (Roth, 2002).

Roth outlines an implicit conception of economic engineering through the analysis of the redesign of the entry-level labor market for American doctors.

Salter (1995) developed a generic conception for engineering design, based on the Long and Dowell approach. In what follows this conception is extended through the addition of a number of criteria that a discipline should fulfill to claim it is an engineering discipline in the sense of Long and Dowell. The generic conception is then instantiated for economics through the postulation of a general design problem for economic engineering.

Through consideration of the American doctors labor market redesign, Roth’s implicit conception is analyzed with respect to the postulated general problem and the generic engineering conception. The aims of this analysis are:

  • To validate the postulated general problem of economic design.
  • To assess the consistency and completeness of Roth’s implicit conception with respect to the generic engineering conception.

Roth’s conception for economic design is restricted to the consideration of microeconomics and the design of individual markets. The continuing global financial crisis, which began in 2007 and caused considerable financial upheaval into 2008, has pushed the discipline of Economics to the forefront of public debate. It seems clear to many that some form of macro-economic redesign of the global financial system is necessary to ensure both economic revival and that such a crisis cannot happen again.

The UK Financial Services Authority (FSA) proposes a number of changes to the financial system through the Turner review (Turner, 2009). The postulated general design problem for economic engineering is used to analyze the Turner review. A simple redesign of the global financial system is proposed. The aims of this analysis and redesign are:

  • To consider the value of the use of the postulated general design problem in specific instances of design, even when the knowledge and practice of an engineering discipline are unavailable.
  • To illustrate that a design-focused approach is of value in addressing the current crisis.

By considering these micro and macro instances of economic redesign, it is argued that a discipline of economic engineering, of the type envisaged by Long and Dowell for HCI, is required in order to provide solutions to economic design problems. It is further argued that economic engineering would draw practitioners from other disciplines and that those with a background in HCI will have a key part to play.

2. The conception of HCI design

Long and Dowell (1989) consider different possible conceptions for the discipline of HCI. Each type of discipline is characterized by three components:

  • Knowledge.
  • Practice.
  • General problem.

Knowledge is used to support practice aimed at solving the general problem of a discipline. For the discipline of HCI the scope of the general problem is identified as:

‘humans and computers interacting to perform work effectively’ (Long and Dowell, 1989).

The scope of the general problem (see Fig. 1) is extended in another paper (Dowell and Long, 1989). The human and computer interacting together are thought of as an interactive worksystem. The concept of effective work is captured by the notion of desired worksystem performance, which is expressed in terms of both the desired quality of work and the costs of the human and computer components of the worksystem that are incurred in doing the work. Interactive worksystems exhibit actual performance, which is a function of the actual quality of work done by the worksystem and the actual costs incurred.

Fig. 1. The general problem for human computer interaction.

 

Comment 3

Salter has well understood Long and Dowell’s (1989) conception of the scope of the general HCI problem as ‘humans and computers interacting to perform work effectively’ and its development in Dowell and Long (1989). However, in Figure 1 – The General Problem for HCI – ‘effectiveness’ is not explicitly represented. The same, it must be said, is true of Figure 2 in Long and Dowell (1989) and of Figure 3 in Dowell and Long (1989). The latter, however, is acceptable, as Figure 3 is intended only to show the fundamental distinctions between interactive worksystems and domains (of application). Figure 3 is an adaptation of Figure 2. A complete representation, however, of ‘humans and computers interacting to perform work effectively’ appears in Dowell and Long (1998), Figure 1, entitled ‘Worksystem and Domain’. Here, Performance is expressed as Task Quality (how well the work is performed), emanating from the Domain, and User Costs (the workload incurred in performing the work that well), emanating from the Worksystem. As Effectiveness is part of the core conception of HCI (and Salter recognizes it as such), it must be assumed implicit in Salter’s Figure 1. The inclusion is important for Salter’s subsequent pull-through of the Long and Dowell (1989) and Dowell and Long (1989; 1998) Conceptions into his application thereof to the engineering of economic design.

The same comment can also be made with respect to Figure 3, The Applied Science Conception of HCI and Figure 5, the Engineering Conception of HCI.

 

The Long and Dowell paper (Long and Dowell, 1989) outlined three different conceptions of the discipline of HCI, distinguished by the nature of their knowledge and practices. These conceptions are:

  • Craft.
  • Applied science.
  • Engineering.

In what follows, the applied science and engineering conceptions from Long and Dowell (1989) are considered and illustrated with figures from Salter (1995). The nature of the knowledge and practices that correspond to each conception are considered in terms of their definition, operationalization, testability, and generalization.

Comment 4

If ‘conception’ is equated with ‘definition’, then Salter’s ‘definition; operationalisation; test; and generalisation’ equates with Long’s expression of validation (1997). The three different possible conceptions of HCI – Craft, Applied Science and Engineering (Long and Dowell, 1989) all require such validation of their knowledge, practices and relations with respect to their acknowledged ‘discipline problem’.

 

The term definition is employed to mean the explicit definitions of the knowledge and practices. Operationalization is the transformation of the definitions of the knowledge and practices into a form that can be used and tested. The testing of the knowledge and practices is aimed at determining how well they support the general problem. Finally, knowledge and practices are general if they can be applied to more than one instance of the general problem.

Comment 5

Long and Dowell (1989) claim that: ‘the ‘public’ knowledge possessed by HCI, as a Craft discipline, is not operational. That is, because it is either implicit or informal…’. In other words, craft knowledge and practices can be defined, but only informally or implicitly. They are not defined/conceived explicitly. Hence, they are also not operationalisable, testable or generalisable (see also Comment 4).

2.1. Applied science conception

The conception of an applied science design discipline describes a practice in the form of ‘specify and implement and test’ and knowledge in the form of ‘guidelines’. In the applied science conception of design, artifacts are still designed by a process of construction, evaluation and reconstruction. However, knowledge in the form of guidelines, derived from scientific knowledge, is used to guide the process. The conception of HCI as an applied science discipline is represented in Fig. 2.

Comment 6

See Comment 5.

Fig. 2. The applied science conception of HCI.

By considering examples of HCI as an applied science discipline, Long and Dowell conclude that the knowledge and practices of HCI as an applied science are derived from scientific theories that are operationalized, tested and generalized. However, the knowledge and practices themselves are not operationalized, tested and generalized with respect to the general problem of HCI. The limited account given of the relationship between scientific knowledge and applied science design in Long and Dowell (1989) was not well understood at the time. Salter (1995) attempted to clarify what was meant through consideration of Thomas Kuhn’s (1970) conception of science (see Fig. 3).

 

 

Fig. 3. Kuhn and applied science.

In Kuhn’s view, a scientific discipline may be conceived of as a framework that contains the following elements:

  • Paradigm.
  • Disciplinary matrix.
  • Shared exemplars.

For Kuhn, the history of a scientific discipline cycles through two distinct stages.

The shortest phase is the crisis period. During this period the symbolic generalizations, metaphysical assumptions and system of values that form the disciplinary matrix are in question. Rival positions abound until one begins to dominate.

At this point, the discipline moves into a period of normal science. During normal science the scientific community holds a consensus view concerning the disciplinary matrix. Scientists may solve scientific problems within the paradigm of shared exemplars. The shared exemplars, consisting of theory predictions, are developed until such time as it becomes increasingly difficult to develop exemplars whose theory predictions accord with perceived phenomena. At this point a period of crisis ensues.

The relationship between Kuhn’s conception of science and Long and Dowell’s conception of applied science is presented in Fig. 3.

Disciplines of different types attempt to solve different design problems. Scientific disciplines attempt to solve the problems of understanding, whereas engineering disciplines attempt to solve the problem of design. Transforming discipline knowledge and practices that have been operationalized, tested and generalized with respect to the scientific problem of understanding, does not necessarily lead to knowledge that can be operationalized, tested or generalized with respect to the engineering problem of design.

2.2. Engineering conception

The conception of an engineering design discipline describes a practice in the form of ‘specify then implement’ and knowledge in the form of ‘principles’. In the engineering conception of design, artifacts are designed by a process of specification, followed by a process of implementation. The process is supported by knowledge in the form of principles. The application of principles to the design process provides a guarantee that the artifact will satisfy the client’s requirements. The conception of HCI as an engineering discipline is represented in Fig. 4.

Comment 7

See Comments 5 and 6.

 

Fig. 4. The engineering conception of HCI.

The knowledge and practices of an engineering discipline are defined, operationalized, testable and generalizable.

The Long and Dowell (1989) conception assumed that HCI was essentially in a Kuhnian crisis period, and the 1989 paper can be viewed as an attempt to establish a disciplinary matrix, albeit one based on an Engineering approach. Salter (1995) took this work further by establishing a generic conception of an engineering discipline.

3. A generic conception of engineering design

Salter (1995) built upon the work of Dowell and Long, as well as Kuhn, to construct a generic conception of engineering design. In what follows, this generic conception has been extended to include a set of criteria to which any specific conception should correspond in order to satisfy the generic conception of design.

3.1. Disciplinary matrix and scope of the general problem

Recall that Kuhn’s notion of a disciplinary matrix consists of symbolic generalizations, metaphysical assumptions and systems of values of the discipline. By introducing notions of desired performance and interactive worksystems exhibiting actual performance, Long and Dowell are explicitly outlining some of the metaphysical assumptions and systems of values of the discipline of HCI. Thus, describing the scope of a discipline problem amounts to providing two of the three components of a disciplinary matrix.

Comment 8

According to Dowell and Long (1989), Kuhn’s first element of the discipline matrix is a ‘shared commitment to models, which enables a discipline to recognize its scope or ontology….’ For Cognitive Engineering (here read HCI), ‘we may interpret this requirement as the need to acquire a conception of the nature and scope of cognitive (HCI) design problems’ (a Conception to be found in Dowell and Long (1989 and 1998)).

Kuhn asserts that the value of knowledge is in its usefulness in solving problems. According to Salter (1995) design problems have two key components: the requirements component and the artifact component. The requirements component is the ‘what’ of the design problem. In Long and Dowell’s terms the requirements component is the desired performance of the interactive worksystem. The artifact component represents the ‘how’ of the design problem. In Long and Dowell’s terms, the artifact component is the interactive worksystem together with the actual performance it exhibits.

Comment 9

Salter proposes to characterize the two key components of design problems as the ‘requirements’ component (equated in Long and Dowell’s (1989) conception with ‘desired performance’) and the ‘artefact’ component (equated with the ‘interactive worksystem, together with the actual performance it exhibits’). In so doing, however, it should be remembered that for Dowell and Long (1989 and 1998), Performance is made up of two elements. The first, related to the Domain (of work), is ‘Task Quality’ – how well the work is performed. The second, related to the Interactive Worksystem, is the (Resource) Costs (both those associated with the User and with the Computer), incurred in performing the work that well (see also Comment 3). There is a need, then, to identify the Costs element somewhere in the Requirements/Artefact dualism.

 

Any conception of an engineering discipline of design must describe the scope of the general problem. This leads to the first criterion.

Criterion 1: The description of the general problem should describe the requirements component and the artifact component of the problem and the nature of the relationship between them.

3.2. The phenomena of design

The shared exemplars of Kuhn’s framework are grounded in perceived phenomena. ((The word phenomena is used here to refer to an observable occurrence. So Kuhn was stating that shared exemplars must be consistent with observable occurrences. Salter (1995) was claiming that there is a set of observable occurrences associated with client requirements and a set of observable occurrences associated with an artifact.))

Salter (1995) distinguishes two types of phenomena associated with design problems, the phenomena associated with the requirements component and the phenomena associated with artifact component (see Fig. 5).

 

 

Fig. 5. Design phenomena.

The phenomena associated with the requirements component of the design problem are called the client requirements. A client here refers to an individual, or an organization whose requirements may consist of the, possibly conflicting, requirements of sub-organizations and individuals. The phenomena associated with the artifact component of the design are termed the artifact. For any design problem it is necessary that there be practices for determining whether artifacts fulfill or satisfy client requirements. These practices are termed empirical since they apply to phenomena. The term empirical derivation is used to describe a practice for developing an artifact from client requirements and the term empirical validation is used to describe a practice for determining if an artifact satisfies client requirements.

Figs. 1 and 5 can be combined to create a re-expression of the HCI design problem in terms of requirements and artifact ((Some commentators are unhappy with identifying the domain solely as part of client requirements, arguing that the artifact would also need to include the domain as the domain may be redesigned to improve performance. However such a modification of the domain can also be perceived as a modification of the requirements that is outside of the scope of the HCI problem.)) (see Fig. 6).

The success of any engineering discipline depends upon its ability to support the design of artifacts that satisfy client requirements. This leads to criterion 2:

Criterion 2: The conception of any discipline of design must have a set of empirical practices for establishing the relationship between artifact and client requirements.

Comment 10

Salter proposes a generic conception of Engineering Design (Section 3) to support the specification of the General design problem of economic systems, as a special case of engineering design. Because the Generic Conception is of Design and because most HCI researchers associate their work with design (directly or indirectly), the generic Conception can be used to unify and to differentiate approaches to HCI design.

First, according to Salter (Section 3), the scope of the generic Conception has ‘two key components: the Requirements Component and the Artefact Component’ and the relations between them (Criterion 1). Allowing for differences in terminology (for example, ‘need’ for ‘requirements’, ‘computer’ for ‘artefact’ and ‘relationship’ for ‘relations’), all the approaches to HCI design, represented by the contributions to the Festschrift, would appear to agree on this scope for HCI, that is, Requirements and Artefact (see Sutcliffe and Blandford; Carroll; Dix; Wild; Hill; and Long – all 2010).

Second, according to Salter (Section 3.2 and Figure 5), the phenomena of design of the Generic Conception are: Client Requirements (associated with the Requirements component); the Artefact (associated with the Artefact component); and the Empirical Derivation (of the Artefact from the Client Requirements) and the Empirical Validation (of the Client Requirements by the Artefact) (expressing the relations between Client Requirements and the Artefact). Again, allowing for differences in terminology, for example, ‘user requirements’ for ‘client requirements’, ‘technology’ for ‘artefact’ and ‘test’ for ‘validation’, all the approaches to HCI design, represented by the contributions to the Festschrift, would appear to accept these phenomena for HCI, that is, Client Requirements; Artefact; and their Empirical Derivation and Empirical Validation relations (Sutcliffe and Blandford; Carroll; Dix; Wild; Hill and Long – all 2010).

In conclusion, then, all the contributors to the Festschrift, representing various approaches to HCI design (for example, scientific (Sutcliffe and Blandford); Craft and Applied Science (Carroll); design and scientific (Dix); scientific and engineering (Wild); and engineering (Hill)), would appear to agree on the basic scope and the phenomena of Salter’s Generic Conception of Engineering Design, as applied to HCI design. To this extent, approaches to HCI design might be considered to be unified with respect to scope and phenomena.

3.3. Design practice exemplars

Lying between the disciplinary matrix and phenomena in Kuhn’s framework is the concept of the paradigm of shared exemplars. For a scientific discipline, the general problem is one of understanding by means of explanation and prediction. Thus Kuhn’s exemplars express understanding as explanations and predictions. Long and Dowell (1989) stated that for an engineering discipline, the general problem is one of design. Salter (1995) concluded from this that one form of shared exemplars is design examples representing abstractions of client requirements and artifacts and the relationships between these abstractions (see Fig. 7).

Salter (1995) termed the abstraction of client requirements the specific requirements specification, whereas the abstraction of artifact was called the specific artifact specification. The term ‘Specific’ is used since these abstractions are specific to the particular design problem being considered. Salter (1995) stipulated that the relationship between specific requirements specification and specific artifact specification should be formal. This gives rise to criterion 3.

Criterion 3: The conception of any discipline of design must have each of the following:

  • a) A set of empirical practices for establishing the relationship between client requirements and specific requirements specification.
  • b) A set of empirical practices for establishing the relationship between artifact and specific artifact specification.
  • c) A set of formal practices for establishing the relationship between specific requirements specification and specific artifact specification.

Comment 11

Salter’s Generic Conception of Engineering Design includes the concept of Design Practice (termed exemplars, following Kuhn (1970) – see Section 3.3 and Figure 7). Design Practice assumes, and is built on, the scope and phenomena of Engineering Design (see Comment 10). The Specific Requirements Specification is empirically derived from the Client Requirements and empirically validates them. The Specific Artefact Specification is empirically derived from the Artefact and empirically validates it. In addition, however, Design Practice requires the Formal Derivation of the Specific Artefact Specification from the Specific Requirements Specification and the Formal Verification of the latter by the former.

The requirement for formal derivation and verification relations between the Specific Requirements and Artefact Specifications generally excludes the approaches to HCI design of the Festschrift contributors not committed to Engineering Design. The only contribution reporting something approaching a Design Practice Exemplar, consistent with Salter’s conception, is that of Hill. Her models and method are ‘explicit’ and intended ‘to support design directly’.

However, even in her case, there must be some doubts, at this stage of her research. First, according to Hill, the models and method support design problem diagnosis ‘directly’; but design solution prescription only ‘indirectly’. Second, it remains to be seen to what extent, in the longer term, explicit equates to formal, in the case of Hill’s models and method. Hill herself considers that her models and method support design less well than ‘validated engineering design principles’, as proposed by Dowell and Long (1989). ‘Less well’ here might be understood as less formally.

Dowell and Long (1989) are in no doubt that such HCI engineering principles would be formal and support ‘specify then implement’ design practice. In terms of Salter’s generic conception, the principles would support the formal derivation of the design solution (Specific Artefact Specification) from the design problem (Specific Requirements Specification) and the formal verification of the latter by the former. Subsequent attempts to develop HCI engineering design principles have included the required formality of the relations between design problem and design solution (Stork, 1999; Cummaford, 2007).

 

3.4. Design research exemplars

For a scientific discipline, Long and Dowell (1989) believed that the practice of the discipline is the research that aims to construct and validate knowledge that supports understanding in the form of explanation and prediction. Salter (1995) claimed that for an engineering design discipline, whose knowledge is defined, operationalized, tested and generalized with respect to the problem of design, there is a distinction between practice and research. Engineering practice involves employing engineering knowledge to solve specific design problems, whereas engineering research involves the construction and validation of engineering knowledge.

Thus, for an engineering discipline, Salter (1995) claimed that there is an alternative but equivalent form of Kuhn’s shared exemplars, namely examples of engineering research. Since engineering research constructs and validates knowledge that is defined, operationalized, tested and generalized with respect to the problem of design, engineering research exemplars consist of abstractions of specific requirements specification and specific artifact specification and the relationships between them (see Fig. 8).

Salter (1995) termed the abstraction of the specific requirements specification the general requirements specification, whereas the abstraction of the specific artifact specification was called the general artifact specification. The term ‘General’ was used, since these abstractions are general to particular classes of design problems. The relations between all abstractions are formal. This gives rise to criterion 4.

Criterion 4: The conception of any discipline of design must have each of the following:

  • a) A set of formal practices for establishing the relationship between specific requirements specification and general requirements specification.
  • b) A set of formal practices for establishing the relationship between general requirements specification and general artifact specification.
  • c) A set of formal practices for establishing the relationship between general artifact specification and specific artifact specification.

 

Fig. 6. The re-expression of the HCI design problem.

Comment 12

Figure 6 re-expresses the HCI design problem in terms of Client Requirements and Artefact. However, there is no explicit expression of ‘effectiveness/performance’ (see Comment 3) or Task Quality and Worksystem Costs (see Comment 9). To the extent that Salter is pulling through these concepts into his conception, they are presumably implicit in his Figure 6.

 

 

Fig. 7. Design practice exemplars.

 

 

Fig. 8. Design research exemplars.

 

Salter’s (1995) generic conception of an engineering discipline has now been outlined and extended with a set of criteria that any specific instance of an engineering discipline should meet. In order to instantiate the generic conception of engineering design for a discipline of economic engineering, it is necessary to postulate the general design problem for economic systems.

Comment 13

Salter’s Generic Conception of Engineering Design, as well as the concept of Design Practice, includes the Concept of Design Research (also termed exemplars, following Kuhn (1970) – see Section 3.4 and Figure 8). Research Practice assumes, and is built on, Design Practice (see Comment 11), which in turn assumes, and is built on, the scope and Phenomena of Engineering Design (see Comment 10). In addition, however, Research Practice requires the formal derivation of the General Requirements Specification from the Specific Requirements Specification and the formal verification of the latter by the former, together with the formal derivation of the General Artefact Specification from the Specific Artefact Specification and the formal verification of the latter by the former. Further, Research Practice requires the formal derivation of the General Artefact Specification from the General Requirements Specification and the formal verification of the latter by the former.

Failure to espouse the Design Practice Exemplar of the Generic Conception of Engineering Design, by approaches to HCI design other than Engineering, necessarily entails the failure to espouse the Design Research Exemplar. Indeed, doubly so, as all derivation and verification relations in the Research Practice Exemplar are formal. Hill would appear to be the only exception among Festschrift contributors. With the reservations, expressed in Comment 11, Hill’s putative Design Practice Exemplar is embedded in her research, as required by the Generic Conception of Engineering Design. Further, formal relations in her Design Practice would support formal relations in Research practice. This would be consistent with Hill’s claim concerning validated engineering design principles, that would support ‘the design of general solutions to general classes of HCI design problems’, as proposed by Dowell and Long (1989). The latter are in no doubt that HCI engineering design principles research would necessarily require formal research relations to support formal ‘specify then implement’ design practice.

Subsequent attempts to develop HCI engineering design principles have embedded design practice in research practice (Stork, 1999; Cummaford, 2007). They have also espoused the actual (Stork, 1999) or desired (Cummaford, 2007) formal relations for both, as required by Salter’s Generic Conception of Engineering Design.

In conclusion, then, all approaches to HCI design would appear to accept the Scope and Phenomena of design, as expressed in Salter’s Generic Conception of Engineering Design (including the Festschrift contributors). Unsurprisingly, only approaches with a commitment to HCI engineering design are (or might be) consistent with the Conception’s expression of Design and Research Practice. However, the Conception still has a unifying potential for HCI design in that given the same Scope and Phenomena, alternative conception components, levels of expression and relations are possible.

4. The general design problem for economic systems

The general design problem for economic systems must have a clearly defined scope with the problem expressed in terms of requirements and artifact. Fig. 9 modifies Fig. 6 for application to the problem of the design of economic systems:

 

 

Fig. 9. The general design problem for economic systems.

Economic agents interacting together within a regime of regulation are thought of as an economic system, the artifact. (Regime here is used to denote a set of regulations. This set may be empty, as in the case of illegal markets such as the black economy or illicit drug dealing. Even for economic systems such as these there may, however, be implicit regulation – it is not normally considered good business for drug dealers to kill their clients en masse; even the death of one client is likely to attract the unwanted attention of the authorities.) The agents of the economic system are further distinguished into clients, whose requirements the system seeks to serve, and non-client agents. Agents may include humans, organizations and software systems, making economic design a cross-disciplinary activity.

The concept of effective work is captured by the notion of desired economic system performance, which is expressed in terms of client preference. Client preferences are expressed with respect to a work transformation. Client preferences may be common across all clients of the economic system or may be specific to a particular client group.

The notion of economic systems doing work is somewhat novel but central to the notion of economic engineering being expounded here. It is illustrated in the macro and micro economic discussions in Sections 5 and 6.

A standard table format is used to describe economic requirements in terms of work transformation and client preferences. This format is given in Table 1.

A standard table format is also used to describe an economic system. This format is given in Table 2.

Domain
Work transformation
Describes the work done by the economic system in terms of some work transformation of objects
Client preferences
Describes preferences of particular groups of clients
Common preferences
Describes preferences common to all clients

Table 1: Standard format for an economic design problem.

Economic system: sample system
Clients
Describes the clients that form part of the economic system. There should be a correspondence between the clients listed here and the client preferences
Non-client agents
Describes any non-client agents that are part of the system and the roles that they play
Regulation
Describes any regulation that is part of the economic system

Table 2: Standard format for an economic system.

Economic systems, as worksystems, exhibit actual performance, which is a function of how well the work transformation achieves client preferences.

4.1. Formalizing client preferences

According to the generic engineering conception, the specific and general requirements specifications for economic systems should be presented formally. To achieve this, the notion of desired performance, and hence of client preferences, must be expressed formally.

Traditionally, in economics and game theory (Osborne and Rubinstein, 1999), preferences have been stated using a utility function or the slightly weaker notion of a linear ordering. Utility functions assign a real number to the occurrence that forms the actual performance of the worksystem. Linear orderings insist that all actual worksystem performances can be ranked along the real line, and therefore that it is possible to state, for each pair of actual performances, which is preferred. Work on bounded rationality (Simon, 1957) indicates that both of these notions are too strong: clients are unlikely to be able to assign a number to actual worksystem performance and equally unlikely to be able to rank all such performances.

Salter (1995) introduced the notion of a preference ordering, based upon mathematical pre-orders, to address this issue. Linear orderings and utility functions are special cases of pre-orderings.
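To make this relationship concrete, the following sketch shows that a utility function induces a linear ordering, which satisfies the pre-order conditions given below while additionally making every pair of outcomes comparable. The outcome set and utility values are invented purely for illustration.

```python
# Sketch: a utility function induces a linear ordering (hence a pre-order).
# The outcome names and utility values below are illustrative assumptions.

utility = {"slow": 1.0, "ok": 2.5, "fast": 2.5, "best": 4.0}

# Derive the ordering: a <= b iff utility(a) <= utility(b).
leq = {(a, b) for a in utility for b in utility if utility[a] <= utility[b]}

# Totality: every pair is comparable (this is what a pre-order does NOT require).
assert all((a, b) in leq or (b, a) in leq for a in utility for b in utility)

# Reflexivity and transitivity: the pre-order conditions also hold.
assert all((a, a) in leq for a in utility)
assert all((a, c) in leq for (a, b) in leq for (b2, c) in leq if b == b2)
```

A pre-ordering drops the totality requirement, so a client need not be able to compare every pair of outcomes.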

A pre-ordering ⟨S, ≤⟩ is a set S together with a relation ≤ over S such that, for all a, b and c in S:

  • a ≤ a (reflexivity)
  • if a ≤ b and b ≤ c, then a ≤ c (transitivity)

A preference ordering ⟨S, ≤, P⟩ is a pre-ordering ⟨S, ≤⟩ together with a subset P of S, the preferences, such that, for any a and b in S:

  • if a ∈ P and a ≤ b, then b ∈ P

Client preferences may be expressed as a preference ordering over the work transformations that are carried out by the economic system.
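These definitions can be checked mechanically. The following sketch (all names and the toy outcome set are assumptions for illustration) represents a pre-order as a set of ordered pairs and tests reflexivity, transitivity and the upward-closure condition of a preference ordering.

```python
# Illustrative checks for the pre-ordering and preference-ordering definitions.
# S is a set of outcomes; leq is a set of (a, b) pairs meaning a <= b; P is the
# preference subset. All names here are assumptions of this sketch.

def is_preorder(S, leq):
    """Check reflexivity and transitivity of the relation leq over S."""
    if any((a, a) not in leq for a in S):
        return False  # reflexivity fails
    for (a, b) in leq:
        for (c, d) in leq:
            if b == c and (a, d) not in leq:
                return False  # transitivity fails
    return True

def is_preference_ordering(S, leq, P):
    """Check the upward-closure condition: if a in P and a <= b, then b in P."""
    if not is_preorder(S, leq):
        return False
    return all(b in P for (a, b) in leq if a in P)

# Two work-transformation outcomes, where 'good' is at least as preferred as
# 'poor'; anything at least as good as a preferred outcome must be preferred.
S = {"poor", "good"}
leq = {("poor", "poor"), ("good", "good"), ("poor", "good")}
assert is_preference_ordering(S, leq, P={"good"})
assert not is_preference_ordering(S, leq, P={"poor"})  # 'good' would have to be in P
```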

Previous work (Salter, 1993, 1995) indicates that Mathematical Category Theory (MacLane, 1998; Lawvere and Schanuel, 1987), of which the theory of orderings can be considered a part, may be an appropriate formalism for expressing some classes of engineering design problems.

The generic conception of engineering design has been outlined and instantiated through the postulation of the general design problem for economic systems. Given this statement of the general problem, and equipped with the tool of preference orderings, it is possible to analyze the entry-level market for American doctors and Roth’s implicit conception of economic engineering.

Comment 14

Salter has pulled through into his expression of the General Design Problem for Economic Systems a number of concepts from the Conception of Long and Dowell (1989). Some concepts, like Desired Performance, seem to preserve their original meaning. Other concepts, however, like User Costs seem not to preserve their original meaning. Readers should be aware of these differences.

5. The entry-level labor market for American doctors

Traditionally, the role of the majority of economists has been to understand and predict aspects of economic systems, thus playing the role of scientists according to the conception of Long and Dowell. Increasingly, however, economists are becoming actively involved in the construction of economic systems, including the design of markets for electric power (Wilson, 2009), airwave spectrum auctions (Milgrom, 2000) and labor markets for entry-level doctors (Roth, 2002; Roth and Peranson, 1997). Thus economists are increasingly playing the role of designer.

An example of economic design, the redesign of the entry-level labor market for American doctors reported by Roth (2002), is considered. The concept of residency programs (then called medical internships) began in the 1900s. Newly graduated doctors work as residents in hospitals to complete their training. The work of the economic system is to match doctors to residency programs within hospitals. The matching should take account of doctors’ preferences for residency programs and the residency programs’ preferences for doctors.

Table 3 uses the standard table format for the domain to describe the economic design problem.

The domain described is, by necessity, a simplification of requirements that have evolved over time. (The requirements are also simplified to aid exposition. For example, medical schools could have been included as participants in an attempt to capture the effort expended by them in supporting medical students.) The rationale behind the common preferences aspects of the domain may not appear immediately clear; a discussion of the history of the problem provides this rationale.

The initial economic system operated a decentralized model with no major regulations (see Table 4).

This system worked well initially, but competition amongst hospitals for medical students led to appointments of doctors to positions up to 2 years before the residency position was taken up. This was not considered satisfactory, leading to the requirement, given by common preference (a), that the matching should occur in the doctor’s final undergraduate year. This led to a revised economic system (Table 5) in which additional regulations were imposed upon the passing of student information and the dates of appointments.

Domain
Work transformation
To match newly qualified doctors to residency programs, where residency programs consist of a number of available residency positions
Client preferences

  • Doctors: preferences for residency programs
  • Residency programs: preferences for doctors
Common preferences

  • a) The matching between doctors and programs should not happen earlier than the final year of the doctor’s undergraduate program
  • b) The matching should occur in a reasonable time, say t_beforeResidency, before the doctor is due to start the residency
  • c) The matching should be simple for the doctors, not occupying more than, say, r_doctors of the doctor’s resources
  • d) The matching should be simple for the residency programs, not occupying more than, say, r_residencyProgram of the residency program’s resources
  • e) The matching should occur consistently throughout the year, with not more than, say, m matches occurring in any one month
  • f) The match should be stable, in the sense that it should not be possible to produce a different match that better meets both doctors’ and residency programs’ preferences

Table 3: The design problem for the entry-level doctors market.

Economic system: decentralized model (1900)
Clients

  • Doctors applying for advertised positions
  • Resident Programs advertising positions and selecting doctors through a waiting list system

Non-client agents

  • No major ones except for advertising outlets, etc.

Regulation

  • None

Table 4: The doctor matching system 1900s–1940s.

Economic system: decentralized model (1945)
Clients

  • Doctors applying for advertised positions
  • Resident programs advertising positions and selecting doctors through
    a waiting list system

Non-client agents

  • No major ones except for advertising outlets, etc.

Regulation

  • Dates on which references and transcripts could be sent were restricted
  • Contracts could not be signed until the last year of study

Table 5: The doctor matching system 1945.

The new system led to further problems, as doctors receiving an offer from one residency program held off accepting until they had heard from a preferred program. This led to slow movement of waiting lists and much last-minute matching, which was also considered unsatisfactory, thus justifying common preferences (b)–(e). This led to the development of a centralized clearing system in 1951 (see Table 6).

The centralized clearing system worked well until the 1970s, but actual performance then degraded through to the 1990s. Many issues were raised, in particular whether the matches produced were optimal, leading to common preference (f). To address these needs, the National Resident Matching Program (NRMP) was developed, using a clearing system algorithm by Roth and Peranson (1997) (see Table 7).

Economic system: centralized clearing (1951)
Clients

  • Doctors submit preferences for positions
  • Resident programs submit preferences for doctor

Non-client agents

  • A centralized clearing process

Regulation

  • None

Table 6: The doctor matching system 1951.

Economic system: US National Resident Matching Program – NRMP (1998)
Clients

  • Doctors submit preferences for positions
  • Resident programs submit preferences for doctors

Non-client agents

  • Engineered centralized clearing process

Regulation

  • None

Table 7: The NRMP doctor matching system.

Roth considers the redesign to be an example of an economic design discipline:

‘These developments suggest the shape of an emerging discipline of design economics, the part of economics intended to further the design and maintenance of markets and other economic institutions’ (Roth, 2002).

Roth considers the role of the economist as one of engineer, and implicitly outlines a conception of economics engineering (Roth, 2002). In what follows, this conception is made explicit and analyzed against the generic engineering conception outlined above.

For the NRMP design, the general requirements and artifact specifications are partly described in terms of mathematical game theory. The following is taken from Roth (2002) but presented in the style of the preference orderings outlined above. This is done in order to illustrate that Roth’s work is consistent with the notion of preference orderings and to facilitate the analysis of the NRMP general requirements against the generic engineering conception. The NRMP requirements are as follows:

  • There are two disjoint sets F = {f₁, …, fₙ} of firms and W = {w₁, …, wₚ} of workers, where each firm fᵢ seeks up to qᵢ workers and each worker seeks exactly one firm.
  • A match is a function μ : F → ℘(W) that maps firms to sets of workers. The set of all such functions is called M.
  • For each worker wᵢ ∈ W there is a worker preference ordering ⟨F, ≤wᵢ, Pwᵢ⟩ representing the worker’s preferences for firms, such that for all fᵢ, fⱼ ∈ Pwᵢ: fᵢ < fⱼ, fᵢ > fⱼ or fᵢ = fⱼ. In other words, all of the preferences are linearly ordered and can thus be represented as a sequence ⟨fᵢ, fⱼ, …⟩.
  • For each firm fᵢ ∈ F there is a firm preference ordering ⟨W, ≤fᵢ, Pfᵢ⟩ representing the firm’s preferences for workers, such that for all wᵢ, wⱼ ∈ Pfᵢ: wᵢ < wⱼ, wᵢ > wⱼ or wᵢ = wⱼ. In other words, all of the preferences are linearly ordered and can thus be represented as a sequence ⟨wᵢ, wⱼ, …⟩.
  • A match preference ordering ⟨M, ≤, P⟩ can be specified on the set of matches M. The set of preferences is defined as P = {μ ∈ M : |μ(f)| ≤ q ∧ (w ∈ μ(f) ⇒ w ∈ Pf ∧ f ∈ Pw)}. In other words, for a matching to be a preference, no firm should be matched against more workers than it requires and all workers and firms should be matched against their preferences. The ordering is specified with two conditions. The first condition states that if |μ(f)| < q and μ′ is identical to μ except that μ′(f) = μ(f) ∪ {w}, then μ < μ′ whenever w ∈ Pf and f ∈ Pw. In other words, a matching that adds an additional worker to a firm that is not at capacity is preferred, as long as the worker and firm are in each other’s preferences. The second condition states that if w′ <f w, f′ <w f, w ∈ μ(f′), w′ ∈ μ(f) and w ∈ μ′(f), with μ and μ′ otherwise identical, then μ < μ′. In other words, a mapping is preferred if it provides a better match for any worker and firm pair.
  • Using the terminology of Roth, a match μ is considered stable if there is no match μ′ such that μ < μ′. In other words, a stable match cannot be improved upon by better satisfying users’ preferences. This formalizes common preference (f).
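As a concrete illustration of the stability concept, the following sketch tests a match for blocking pairs, using the standard game-theoretic characterization that a match is unstable exactly when some worker and firm would each prefer to be matched with one another. The representation (ranked preference lists and a per-firm quota) is an assumption of this sketch, not Roth's notation.

```python
# Blocking-pair test for stability of a many-to-one match. Assumptions of
# this sketch: worker_prefs and firm_prefs are ranked lists (most preferred
# first), quota maps each firm to its capacity, and match maps each firm to
# the set of workers assigned to it.

def is_stable(match, worker_prefs, firm_prefs, quota):
    """Return True iff no (worker, firm) pair blocks the given match."""
    assigned = {w: f for f, ws in match.items() for w in ws}
    for w, prefs in worker_prefs.items():
        current = assigned.get(w)
        cur_rank = prefs.index(current) if current in prefs else len(prefs)
        for f in prefs[:cur_rank]:              # firms w strictly prefers
            fp = firm_prefs[f]
            if w not in fp:
                continue                        # f would never accept w
            slots = match.get(f, set())
            if len(slots) < quota[f]:
                return False                    # f has a free slot: blocking pair
            if any(w2 not in fp or fp.index(w) < fp.index(w2) for w2 in slots):
                return False                    # f prefers w to a current worker
    return True

worker_prefs = {"w1": ["f1", "f2"], "w2": ["f1", "f2"]}
firm_prefs = {"f1": ["w1", "w2"], "f2": ["w1", "w2"]}
quota = {"f1": 1, "f2": 1}
assert is_stable({"f1": {"w1"}, "f2": {"w2"}}, worker_prefs, firm_prefs, quota)
assert not is_stable({"f1": {"w2"}, "f2": {"w1"}}, worker_prefs, firm_prefs, quota)
```

In the second assertion, w1 and f1 rank each other first but are not matched together, so they form a blocking pair.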

The postulated general problem of economic design has been used to describe the client requirements and artifacts for the redesign of the entry-level labor market for American doctors. Further, the mathematical formalization of preference orderings has been shown capable of capturing the formal aspects of the matching problem. Thus the general problem of economic design has been validated against the labor market redesign example.

The consistency and completeness of Roth’s implicit conception is now analyzed against the generic engineering conception. Criterion 4 is considered first.

There is a general requirements specification given in terms of the sets F and W and their associated preference orderings. A general artifact specification is an algorithm that will always produce a stable match. Such an algorithm is documented by Gale and Shapley (1962), who go further and prove that, given the general requirements, there will always be a stable match and that a stable match will be produced by their algorithm. Thus the formal relationship between general requirements specification and general artifact specification has been satisfied, and Roth’s conception is consistent and complete with respect to part (b) of Criterion 4.
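The Gale and Shapley result rests on the deferred-acceptance algorithm. The following is a sketch of a worker-proposing variant with firm quotas; the names and dictionary-based data layout are illustrative assumptions, not Gale and Shapley's own presentation.

```python
# Sketch of worker-proposing deferred acceptance with firm quotas.
# worker_prefs and firm_prefs are ranked lists (most preferred first);
# quota maps each firm to its capacity. These representations are
# assumptions of this sketch.

def deferred_acceptance(worker_prefs, firm_prefs, quota):
    match = {f: set() for f in firm_prefs}       # firm -> accepted workers
    next_choice = {w: 0 for w in worker_prefs}   # index of w's next proposal
    free = list(worker_prefs)
    while free:
        w = free.pop()
        prefs = worker_prefs[w]
        if next_choice[w] >= len(prefs):
            continue                             # w has exhausted its list
        f = prefs[next_choice[w]]
        next_choice[w] += 1
        rank = firm_prefs[f]
        if w not in rank:
            free.append(w)                       # f would never accept w
            continue
        match[f].add(w)                          # tentatively accept w
        if len(match[f]) > quota[f]:
            worst = max(match[f], key=rank.index)
            match[f].remove(worst)               # reject least preferred
            free.append(worst)
    return match

worker_prefs = {"w1": ["f1", "f2"], "w2": ["f1", "f2"]}
firm_prefs = {"f1": ["w2", "w1"], "f2": ["w1", "w2"]}
quota = {"f1": 1, "f2": 1}
print(deferred_acceptance(worker_prefs, firm_prefs, quota))
```

In the toy run, both workers propose to f1; f1 keeps its preferred worker w2 and rejects w1, who then settles at f2, yielding a stable match.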

In the Roth paper the specific requirements specification is given semi-formally, as an English-language description of the particular matching problem in the case of doctors and residency programs. The sets F and W are matched to residency programs and doctors respectively. Thus the general requirements specification and the specific requirements specification are related, albeit semi-formally. In a more formal approach, the specific requirements specification might be stated in a design language such as the Unified Modeling Language (UML) (Fowler, 2003). For software engineers with some knowledge of set theory and orderings, a formal relationship could be constructed between the UML specification and the game theory definitions, that is, the general requirements specification. In the case of the problem discussed above, this would, for example, map a class Doctor to the set W and a class ResidencyProgram to the set F. Thus, Roth’s approach is consistent, but not complete, with respect to part (a) of Criterion 4.

Roth outlines no specific artifact specification. From a software engineering perspective, a specific artifact specification, in the form of a computer program, may be produced in a programming language such as Java (Flanagan, 2005). For software engineers, the relationship between an algorithm (the general artifact specification) and a software program implementing that algorithm (the specific artifact specification) may be considered formal. It is therefore possible to complete Roth’s conception, which is consistent but not complete with respect to part (c) of Criterion 4.

Thus, Roth’s conception appears consistent but not complete with respect to Criterion 4. Criterion 3 is now considered.

Even though Roth’s description of the specific requirements specification is only semi-formal, he lists involvement with representatives of both student doctors and residency programs at the beginning of the design, ongoing contact with stakeholders throughout the design, and final agreement to the design. This establishes the empirical relationship between the phenomena of client requirements and the specific requirements specification, even if the latter is semi-formal. Thus Roth’s conception is consistent and complete with respect to part (a) of Criterion 3.

As mentioned above, Roth exposes no notion of a specific artifact specification. However, from a software engineering perspective, the Java program described above would be instantiated as a running process on physical computing hardware, establishing an empirical relationship between specific artifact specification and artifact. It is therefore possible to complete Roth’s conception, which is consistent but not complete with respect to part (b) of Criterion 3.

The relation between the UML specification and the Java program may be established using tools such as the Rational Unified Process (RUP) (Kroll and Kruchten, 2003; IBM Website, 2009): Java classes can be generated from the UML specification, and the UML specification reverse engineered from the Java classes, using tools such as Rational Rose Developer for Java (IBM Website, 2009). Thus a formal relationship between the specific requirements specification and the specific artifact specification can be established. It is therefore possible to complete Roth’s conception, which is consistent but not complete with respect to part (c) of Criterion 3.

Thus, Roth’s conception appears consistent but not complete with respect to Criterion 3. Criterion 2 is now considered.

Although not reported by Roth, it may be presumed that, prior to operation, user acceptance tests would have been performed to ensure that the artifact, the implemented process, met the client requirements. Roth does report that the NRMP system has been operating successfully for a number of years. To consider the importance of stability, Roth also considers a number of alternative matching systems in operation in the US, the UK and Canada. From a total of 16 systems, the nine systems that produce a stable match are all still in operation. Of the seven systems that do not produce a stable match, all but two have failed. The analysis of the operating systems was supplemented by laboratory experiments to ensure there were no other reasons for the failures. Thus the empirical relationship between the specific artifact discussed and the client requirements is established and, further, this relationship has been established for a number of different specific design cases. Thus Roth’s conception is complete and consistent with respect to Criterion 2. Criterion 1 is now considered.

Roth does not explicitly describe a general problem of economic design. However, it has been possible to map the design problem considered to the postulated general problem for economic design. Thus Roth’s implicit conception is consistent but not complete with respect to Criterion 1.

Summarizing across the criteria, it can be seen that, given the postulated problem of economic design, Roth’s conception is consistent with an engineering conception of economic design.

Further, the formal relationship outlined between the general requirements specification and the general artifact specification acts as the specification of an engineering principle. This Match Stability Principle may be stated as follows:

In order to produce stable matches (see common preference f), the Gale and Shapley matching algorithm should be used.

The sources of the incompleteness of Roth’s implicit conception are worth considering. In order to demonstrate the consistency of Roth’s conception with the generic engineering conception, aspects of the design were artificially completed using tools from the field of software engineering. This indicates that the knowledge and practices of this field may prove useful in building an engineering discipline of economic design.

It should also be noted that the design exemplar has focused on one of the performance criteria, common preference (f). Roth (2002) does consider other issues, including concerns about whether preference orderings are worker- or firm-optimal; consideration of these has not been possible for reasons of space. However, there remain issues that have not been addressed by Roth, notably common preferences (c) and (d), which relate to the resources expended by doctors and residency programs in using the system. It may be argued that the general design problem is an abstraction and that these were not seen as a problem for the design before the NRMP. On the other hand, this is a significant cognitive design problem, which may have an impact on the overall acceptance of the economic system.

Further cognitive issues arise in relation to assumptions about how preferences are stated. The game-theoretic model assumes preferences can be ordered, a standard assumption in game theory; it is a cognitive issue whether this is necessarily the case. The consideration of these issues would likely involve the application of knowledge and practices from the emerging discipline of cognitive engineering (Dowell and Long, 1998).

The analysis of the completeness issues of Roth’s conception indicates that knowledge and practices from other disciplines may form an important part of any discipline of economic engineering.

It is also important to consider that the design exemplar given above is a simplification of the actual problem for the market being designed. Married couples have preferences for residency programs in the same locale. Unfortunately, game-theoretic results state that if such interrelated preferences are added, a stable match within the match preference ordering is no longer guaranteed (Roth, 1984). This issue might seem to significantly reduce the effectiveness of the match stability principle. However, Roth and Peranson (1997) indicate, through theoretical computations and analysis of actual matching data from the NRMP system, that cases where stable matches are not produced are extremely rare in practice.

The way the emerging discipline of economic design addresses this type of issue is key to the type of discipline that emerges. The value Long and Dowell (1989) add here is the distinction between applied science and engineering disciplines. If the discipline is happy to rest with simple theoretical models that offer guidance for the design of actual systems, it will have the knowledge associated with an applied science discipline, and the match stability principle will really be a match stability guideline. If, however, as Roth appears to advocate, the formal knowledge is elaborated to provide principles that, for example, establish under what preference-correlation and numeric conditions stable matches are guaranteed, the discipline will have the knowledge of an engineering discipline.

Roth’s implicit conception, with its focus on microeconomics and the design of individual markets, has provided a contrast between applied science and engineering disciplines. In what follows, the macro-economic design issues engendered by the global financial crisis are considered from the perspective of the general design problem for economic engineering.

6. The late 2000s global financial crisis

A brief description of the global financial system pre-2009 is provided (Table 8).

Economic system: global financial system (pre-2009)

Clients

  • Consumers
    – Investing
    – Obtaining credit
  • Firms and other legal entities
    – Raising capital
    – Investing

Non-client agents

  • Financial institutions
    – Banks (retail and wholesale)
    – Hedge funds
  • Credit reference agencies – consumers (Equifax, Experian, etc.)
  • Credit rating agencies – non-consumer (Standard & Poor’s, Moody’s, etc.)
  • Government
    – Central banks
    – Regulators

Regulation

  • Capital adequacy requirements
  • National regulatory supervision
  • Accounting regulation
  • Deposit insurance

Table 8: The global financial system (pre-2009).

Problems in the actual performance of the system emerged in 2007, as a result of the collapse of the US housing bubble. In 2008 the global financial system failed in a catastrophic manner (Wikipedia, 2007–2009):

  • A number of financial institutions found themselves in financial difficulties, leading to effective takeover by the government (Northern Rock – UK; Fannie Mae – US; Freddie Mac – US), takeover by other institutions (Bradford & Bingley – UK; Bear Stearns – US; Merrill Lynch – US; Washington Mutual – US) and even bankruptcy (Lehman Brothers – US).
  • In order for financial institutions to meet capital adequacy requirements, the governments of the US and the UK had to inject $700 billion and £500 billion, respectively.
  • Global equity markets collapsed, wiping trillions of dollars off asset values.

The failure led to a credit crunch that has made capital and credit significantly more difficult to obtain and has simultaneously reduced returns on investment, further degrading the actual performance of the system.

For now, a description of the domain of the global financial system is forgone while a redesign proposed by the UK Financial Services Authority (FSA) is considered. As might be imagined, there are many other potential solutions including, for example, a return to the Keynesian approach to economics that dominated between the 1940s and 1970s (Galbraith, 2008; Stratton and Seager, 2008). In what follows, only the FSA’s response is considered, in order to illustrate the potential pitfalls of considering redesign without a clear understanding of the design problem.

6.1. The FSA response to the crisis

The Turner Review (Turner, 2009) proposes alterations to the global financial system to avoid a recurrence of the recent catastrophic actual performance failures. In order to simplify the presentation, only some of the main points of the redesign are listed (see Table 9).

It is perceived that financial institutions were not sufficiently capitalized going into the crisis, so the FSA recommends that:

‘The quality and quantity of overall capital in the global banking system should be increased’ (Turner, 2009).

Unsurprisingly, the FSA proposes a number of extensions to regulatory supervision including extending regulation to hedge funds and regulation of offshore financial centers based upon globally agreed regulatory standards.

Economic system: global banking system (FSA recommendations)

Clients

  • Consumers
    – Investing
    – Obtaining credit
  • Firms and other legal entities
    – Raising capital
    – Investing

Non-client agents

  • Banks (retail and wholesale)
  • Credit reference agencies
  • Government

Regulation

  • Increased capital adequacy requirements
  • Extended regulatory supervision
  • Accounting regulation to include off-cycle reserves
  • Regulation of credit rating agencies
  • Increased deposit insurance
  • Modified remuneration structure

Table 9: The FSA response to the crisis.

Domain

Work transformation

  • To match consumers’ and other legal entities’ investment and credit requirements
  • To enable transactions between clients and to provide reporting of these transactions

Client preferences

  • For investment and credit matches
  • For transaction type
  • For transaction report

Common

  • Transactions should be carried out between correctly identified clients
  • Clients should be aware of the credit worthiness of other parties to transactions
  • Client knowledge of the nature of transactions should be sufficient for them to be aware of the risks associated with different transactions

Table 10: The design problem of the global financial system.

The FSA believes that the risk management methodologies in use by financial institutions, in particular the Value at Risk (VAR) method (Gastineau and Kritzman, 1996), did not sufficiently take into account fluctuations in the economic cycle, and thus that ‘Capital required against trading book activities should be increased significantly (e.g. several times)…’ and that published accounts should include an ‘Economic Cycle Reserve’.
The crisis has in part been attributed to failures of the credit rating agencies and so the report proposes that:

‘…they should be subject to registration and supervision to ensure good governance and management of conflicts of interest and to ensure that credit ratings can only be applied to securities for which a consistent rating is possible.’

The FSA recommends that retail deposit insurance is increased (this measure has already been taken in the UK) and that depositors should be made aware of the extent of the insurance cover.

Finally, from the perspective of our analysis, the FSA proposes changes to the remuneration structures within financial institutions to ‘avoid incentives for undue risk taking’.
The recommendations of the Turner Review should go a long way towards preventing a recurrence of a similar crisis, thus avoiding potentially catastrophic actual performance failures in the global financial system. However, the review does not specify a design problem for the global financial system. The closest it comes is to provide a list of ‘what went wrong’. This implement-and-test approach is typical of a craft discipline, as outlined by Long and Dowell. Without the consideration of a design problem that identifies desired performance, there is no way to judge, a priori, whether the Turner recommendations will, if adopted, act to improve or degrade the ongoing overall actual performance of the global financial system. To attempt such an analysis it is necessary to specify the general design problem for the global financial system.

6.2. A conception based response to the crisis

A simple domain model of the global banking system is postulated in Table 10.

The work transformation is seen as having two elements. The first of these is to match investors to those requiring credit. (It may be argued that the work transformation should be to meet, rather than just match, consumer and legal entities’ investment requirements. The distinction between match and meet is a subtle one. Using the formalism of preference orderings, to meet requirements every client’s preference for investment and credit would need to be in the specified set P. This is unlikely to be possible unless some condition is imposed on the reasonableness of client preferences with respect to other participants, which seems too strong a condition.) Matching is taken to cover all financial transactions, from simple deposits and loans to complex derivative transactions and indeed any transaction that manages risk. The second element of the transformation reflects the services that make banks a key component of the financial system: the provision of transaction management and reporting services to customers.

The client preferences capture preferences for investment and credit matches as well as the type of transactions that may be carried out and the nature of the reports on these transactions.

The common preferences capture key aspects required of the banking system: identity verification, credit reference and rating, and provision of financial advice.

Some of the FSA recommendations outlined above, in particular improved regulation of credit rating agencies and ensuring depositors are aware of the nature of deposit insurance, should act to improve actual performance for this domain. However, other effects are likely to have a negative impact on ongoing performance. In order to increase capital and meet requirements for off-cycle reserves, financial institutions will need to increase the spread between investment and credit, thus providing less preferred investment-credit matches to clients. Increased regulation will also increase costs to financial institutions, further increasing spreads. A more worrying aspect of increased regulation is the consequent increase in the barrier to entry for new financial institutions. This comes on top of a radical consolidation of financial institutions that has already increased such barriers. The effect of raised barriers to entry is to reduce innovation and competition among financial institutions, thus leading to a further degradation of actual performance.

Economic system: proposed redesign of global financial system

Clients

  • Consumers
    – Investing
    – Obtaining credit
  • Other legal entities
    – Investing
    – Raising capital

Non-client agents

  • Transaction management institutions with functions:
    – Transaction management
    – Transaction reporting & accounting
    – Identity verification
    – Credit rating and reference provision
    – Financial advice provision
  • Investment institutions
  • Government
    – Central banks
    – Regulators

Regulation

  • Capital adequacy requirements for banks
  • Global regulatory supervision, based upon a tiered structure of investment institutions, with increased regulation for higher-tier institutions. Government guarantees payments for all tier-one institutions; the level of government guarantee tails off down the investment institution tiers
  • Accounting regulation – enforced through transaction management institutions, extended based upon investment institution tier

Table 11: The proposed redesign of the global financial system.

The problem is not that more regulation is unneeded, but rather that regulation needs to be considered as part of a redesign of other aspects of the financial system. The statement of a design problem opens up the possibility for the knowledge and practices of other disciplines to be employed towards the development of a solution. In Table 11 a design is presented that employs an approach normally used by software system architects:

‘The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them.’ (Bass et al., 2007).

Such an approach considers the structure and relations of the components of the system (in the economic system case, the non-client agents) and how these components may be reconfigured to improve actual performance. To justify the proposed redesign, evidence is given that agents performing some of the functions described are already emerging.

A key aspect of this redesign is a separation of concerns amongst financial institutions. Such a separation was made after the Great Depression of the 1930s in the US Glass-Steagall Act (1933), the partial repeal of which in the Gramm-Leach-Bliley Act (1999) (Barth et al., 2000) has been partly blamed for the current crisis (Halligan, 2009; Housel, 2009). The separation proposed in this redesign, however, goes much further than the 1933 act. The primary separation is between institutions with the function of transaction management and institutions whose function is investment.

The transaction management institutions play no investment role and therefore are risk-neutral with respect to investment risks. They fulfill clients’ transaction requirements, moving money between different clients’ investment institutions. It can be argued that emerging organizations are already fulfilling this role, e.g. Paypal (Paypal, 2009), which does hold deposits, but to facilitate transactions rather than for investment purposes. A key aspect of any transaction management system is the correct identification of the clients that are party to a transaction. Thus, identity verification is a key role of transaction management institutions. Other functions of transaction management institutions include transaction reporting and financial advice. Web-based organizations have emerged that play this role, such as Wesabe (2000) and Geezeo (2009). These entities go further than merely reporting financial transactions, enabling users to analyze their expenditure by constructing accounts. Finally, transaction management institutions, with full knowledge of clients’ transactions (and even accounts), and independence from any investment incentive, are ideally placed to offer accurate credit reference and rating information.

The concept of separation of concerns is carried further into investment institutions. A comment from Nobel laureate Paul Krugman acts as motivation for this separation. He postulates the following simple rule:

‘…anything that does what a bank does, anything that has to be rescued in a crisis the way banks are, should be regulated like a bank’ (Krugman, 2009).

The separation for investment institutions aims to tie the level of regulation to the level at which the institutions will be rescued by governments. At the highest tier might be simple deposit takers that make low-credit-risk loans. Such institutions would be rescued in full in the event of a collapse through a deposit insurance scheme. By necessity, these institutions would be highly regulated. There are already emerging institutions, such as Zopa (2009), fulfilling this function by putting depositors and lenders together in a radically different fashion to that used by existing banking institutions. At the lowest tiers are lightly regulated institutions that are involved in highly risky investments but that under no circumstances would be rescued by government. Transaction management institutions, through their role as independent financial advisors to their clients, are responsible for ensuring that clients understand the difference between the different types of institutions.
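The separation of concerns described above can be expressed in software-architectural terms. The sketch below is my own illustration: the interfaces, names and the guarantee schedule are invented for the example, and are not part of the Turner Review or of Table 11:

```python
from abc import ABC, abstractmethod

class TransactionManager(ABC):
    """Risk-neutral component: moves funds, verifies identity, reports
    and accounts for transactions, rates credit -- but never invests."""
    @abstractmethod
    def transfer(self, payer_id: str, payee_id: str, amount: float) -> None: ...
    @abstractmethod
    def verify_identity(self, client_id: str) -> bool: ...
    @abstractmethod
    def credit_rating(self, client_id: str) -> str: ...

class InvestmentInstitution(ABC):
    """Tiered by risk: tier 1 is fully guaranteed and most heavily
    regulated; guarantees tail off down the tiers."""
    tier: int
    @abstractmethod
    def invest(self, client_id: str, amount: float) -> None: ...

def government_guarantee(inst: InvestmentInstitution) -> float:
    # Invented schedule: tier 1 -> 100%, falling 25% per tier, floor 0%.
    return max(0.0, 1.0 - 0.25 * (inst.tier - 1))
```

The point of the sketch is only that the component boundaries (who may invest, who may rate credit, how much is guaranteed at each tier) become explicit, externally visible properties, as in the Bass et al. definition of software architecture quoted earlier.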

The simple proposed design should act to improve actual performance of the global financial system.

Improved identity management should better ensure that transactions are carried out between correctly identified clients. Such identity management may also have applications beyond the financial sector, helping to improve identity management across the internet (Cameron, 2005). Competition amongst transaction management institutions should ensure that client preferences for transaction types and transaction reports are met. The provision of accounting services and independent financial advice by transaction managers would act to improve clients’ knowledge of financial transactions and better enable them to understand the risks associated with these transactions. Shiller (2008) cites poor financial advice in the provision of sub-prime mortgages as one of the causes of the housing bubble that led to the financial crisis. He further recommends that financial advice should be offered to individuals through government subsidy. The inclusion of credit reference and rating functions within the transaction management institutions would greatly improve the information upon which credit worthiness assessments are made. This not only improves clients’ knowledge of the other parties to their transactions but also, through the elimination of false negatives, leads to potentially improved investment-credit matches. The tiered structure of investment institutions, with its consequent separation of low-risk and high-risk investments, enables clients to better select investments and also acts to improve investment-credit matches.

The recommendations of the FSA outlined above can be incorporated into the tiered structure of investment institutions without having such a negative effect on the actual performance of the overall system. Further, the independent transaction management institutions can independently assess the risk profile of investment institutions as part of their credit reference and rating function, reporting to the regulatory authorities.

In contrast to microeconomics, there appears, as yet, to be no emerging discipline of macro-economic engineering (Shiller’s (1993) discussion of the use of macro-markets for managing society’s large risks being one possible exception).

There is no general design knowledge; there are no putative principles to be analyzed. The value the Long and Dowell (1989) work adds here is the address of macro-economic issues within the context of a general design problem.

The simple system architecture solution above is not a complete solution to the current financial crisis. For this to be the case, there would need to be, amongst other elements, an empirical validation of the problem statement and an implementation plan that migrated the current financial system to the new architecture; many other issues would need to be addressed using the knowledge and practices of multiple disciplines. However, the statement of the problem and the solution in this design-focused form does enable a priori discussion of the effectiveness of the solution. Just stating the problem in terms of desired performance opens up the possibility of using knowledge and practices from outside mainstream economics.

7. Conclusion

Using a generic engineering conception derived from the work of Long and Dowell, and a postulation of the general design problem for economic systems, instances of both microeconomic and macro-economic design have been analyzed.

The general design problem for economic design has been considered in each analysis. Through Roth’s exposition of the redesign of the entry-level labor market for American doctors it was possible to describe the design problem and multiple different economic system solutions in terms of the general problem. The general problem was also used to provide a new perspective upon the redesign of the global financial system in response to the current crisis.

For the Roth work, the analysis against the generic engineering conception illustrated that:

  • The implicit conception of an engineering discipline exposed through Roth’s example of redesign is consistent with the generic engineering conception.
  • Although Roth’s conception is not complete with respect to the generic conception, completeness might be sought through consideration of the knowledge and practices of other disciplines.

For the case of the global financial system, it has been illustrated that considering the issues within the context of a design problem, may lead to solutions that would otherwise not be considered. Here, too, given the complex nature of the problem, bringing in practitioners from other disciplines and considering the problem from a design-focused perspective may prove beneficial.

The value offered by Long and Dowell is the conceptualization of the fundamental distinction between the general problems of scientific and engineering disciplines and the consequent distinction this engenders for knowledge and practice. It may be argued (Wikipedia, 2009) that, in the past, the scientific discipline of economics has dealt with emergent problems through Kuhnian-style paradigm shifts in macro-economics. The response to the Great Depression of the 1930s led to the dominance of Keynesianism (Keynes, 1936). This paradigm was in turn overthrown in response to the low growth and stagflation of the 1970s, ushering in an era of monetarism (Friedman, 1962).

Solving the complex design problems of the economic systems of the 21st century may require more than these 20th-century paradigm shifts of a scientific discipline.

Practitioners from different disciplines may need to be drawn together into an engineering discipline of economic design, of the type envisaged for HCI by Long and Dowell. Further, as the interactions of human economic agents are increasingly mediated through software economic agents, one would expect HCI to play a significant role in such a discipline.

References

Barth, J.R., Brumbaugh Jr., R.D., Wilcox, J.A., 2000. The repeal of Glass-Steagall and the advent of broad banking. Journal of Economic Perspectives 14 (2), 191–204.

Bass, L., Clements, P., Kazman, R., 2007. Software Architecture in Practice. Addison-Wesley.

Cameron, K., 2005. The Laws of Identity, The Identity Blog website, December 5, 2005. <http://www.identityblog.com/stories/2005/05/13/TheLawsOfIdentity.pdf>.

Dowell, J., Long, J.B., 1989. Towards a conception of an engineering discipline of human factors. Ergonomics 32 (11), 1513–1535.

Dowell, J., Long, J.B., 1998. Conception of the cognitive engineering design problem. Ergonomics 41 (2), 126–139.

Flanagan, D., 2005. Java in a Nutshell. O’Reilly.

Fowler, M., 2003. UML Distilled: A Brief Guide to the Standard Object Modeling Language. Addison-Wesley.

Friedman, M., 1962. Capitalism and Freedom. University of Chicago Press.

Galbraith, J.K., 2008. The collapse of monetarism and the irrelevance of the new monetary consensus, 25th Annual Milton Friedman Distinguished Lecture at Marietta College, Marietta, Ohio, March 31, 2008. <http://www.gov.utexas.edu/papers/CollapseofMonetarismdelivered.pdf>.

Gale, D., Shapley, L., 1962. College admissions and the stability of marriage. American Mathematical Monthly 69, 9–15.

Gastineau, G.L., Kritzman, M.P., 1996. Dictionary of Financial Risk Management. Frank J. Fabozzi Associates.

The Geezeo Website, 2009. <http://www.geezeo.com>.

Halligan, L., 2009. Outrage at bonuses won’t solve the mess we’re in, The Daily Telegraph website, February 16, 2009. <http://www.telegraph.co.uk/finance/comment/liamhalligan/4623601/Outrage-at-bonuses-wont-solve-the-mess-were-in.html>.

Housel, M., Barker, C., 2009. Who’s More to Blame: Wall Street or the Repealers of the Glass-Steagall Act? Motley Fool website, April 6, 2009. <http://www.fool.com/investing/general/2009/04/06/whos-more-to-blame-wall-street-or-the-repealers-of.aspx>.

IBM Website – Rational Unified Process, 2009. <http://www-01.ibm.com/software/awdtools/rup/?S_TACT=105AGY59&S_CMP=WIKI&ca=dtl-08rupsite>.

IBM Website – Rational Rose Developer for Java, 2009. <http://www-01.ibm.com/software/awdtools/developer/rose/java/index.html>.

Keynes, J.M., 1936. The General Theory of Employment, Interest, and Money.

Kroll, P., Kruchten, P., 2003. The Rational Unified Process Made Easy: A Practitioner’s Guide to the RUP. Addison-Wesley.

Krugman, P., 2009. The Return of Depression Economics and the Crisis of 2008. W.W. Norton Company Limited.

Kuhn, T.S., 1970. The Structure of Scientific Revolutions. University of Chicago Press.

Lawvere, F.W., Schanuel, S.H., 1987. Conceptual Mathematics: A First Introduction to Categories. Cambridge University Press.

Long, J.B., Dowell, J., 1989. Conceptions of the discipline of HCI: craft, applied science and engineering. In: People and Computers V. Cambridge University Press.

MacLane, S., 1998. Categories for the Working Mathematician, second ed. Springer.

Milgrom, P., 2000. Putting auction theory to work: the simultaneous ascending auction. Journal of Political Economy 108 (2), 245–272.

Osborne, M.J., Rubinstein, A., 1999. A Course in Game Theory. MIT Press.

The Paypal Website, 2009. <http://www.paypal.com>.

Roth, A.E., 1984. The evolution of the labor market for medical interns and residents: a case study in game theory. Journal of Political Economy 92, 991–1016.

Roth, A.E., 2002. The economist as engineer: game theory, experimentation, and computation as tools for economic design. Econometrica 70 (4), 1341–1378.

Roth, A.E., Peranson, E., 1997. The effects of the change in the NRMP matching algorithm. Journal of the American Medical Association 278, 729–732.

Salter, I.K., 1993. A framework for formally defining the syntax of visual languages. In: Proceedings of the IEEE Symposium on Visual Languages. IEEE Computer Society Press, pp. 244–248.

Salter, I.K., 1995. The Design of Formal Languages. PhD Thesis, University College London.

Shiller, R.J., 1993. Macro Markets: Creating Institutions for Managing Society’s Largest Economic Risks. Clarendon Lectures in Economics. Oxford University Press, Oxford.

Shiller, R.J., 2008. The Subprime Solution. Princeton University Press.

Simon, H., 1957. A behavioral model of rational choice. In: Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting. Wiley, New York.

Stratton, A., Seager, A., 2008. Darling invokes Keynes as he eases spending rules to fight recession, The Guardian, October 20, 2008. <http://www.guardian.co.uk/politics/2008/oct/20/economy-recession-treasury-energy-housing>.

Turner, J.A., 2009. The Turner Review (a regulatory response to the global banking crisis). Financial Services Authority.

The Wesabe Website, 2000. <http://www.wesabe.com>.

Wikipedia – Paradigm Shift, 2009. <http://www.wikipedia.org/wiki/Paradigm_shift>.

Wikipedia – Financial crisis of 2007–2009. <http://www.en.wikipedia.org/wiki/Financial_crisis_of_2007%E2%80%932009>.

Wilson, R.B., 2002. Architecture of power markets. Econometrica 70, 1299–1340.

The Zopa Website, 2009. <http://www.zopa.com>.

 

 

 

 

 


Human–computer interaction: A stable discipline, a nascent science, and the growth of the long tail


Alan Dix

Lancaster University, Computing Departments, InfoLab21, South Drive, Lancaster LA1 4WA, United Kingdom
John Long Comment 1 on this paper

As far as I can remember, I first met Alan Dix in the early eighties. Kee Yong Lim and I gave a seminar to the York group on MUSE (Method for Usability Engineering). At the time, Alan, along with Michael Harrison and others, was working on formal methods in HCI, which subsequently became a book of the same name in my HCI series with Cambridge University Press. Alan’s contribution was entitled: ‘Non-determinism as a paradigm for understanding the user interface’. I have met and discussed with Alan off and on over the intervening years, mostly at conferences. The exchanges have never been less than lively, I think he would agree. I am delighted that he contributed to my Festschrift. Alan is an interesting combination of mathematician and plain speaker (as well as humorist, etc.) and wisely keeps the two modes largely separate. The great advantage of plain speaking is that he is able to express insights concerning HCI directly and with novelty, by ignoring ‘received HCI wisdom’, for example, the difference between HCI as a community and as a discipline. A possible disadvantage, however, of plain speaking is that natural-language and technical uses of the same concepts may lead to some confusion, for example, the difference between the scientific and the everyday meaning of ‘understanding’. The advantages and disadvantages of plain speaking are both apparent in Dix’s article and the reader will no doubt enjoy identifying both.

Abstract

This paper represents a personal view of the state of HCI as a design discipline and as a scientific discipline, and how this is changing in the face of new technological and social situations.

Comment 2

If HCI is both a design discipline and a scientific discipline, as Dix claims, then presumably HCI has two sets of knowledge and two sets of practices (for design and understanding, respectively). However, there may, or may not, be relations between the knowledge of the one and the practices of the other. Any such relations, for example, as in applied science, need to be made explicit and justified by Dix (and others), for example, in the manner of Salter (2010, Figure 3 – Kuhn and Applied Science).

Going back 20 years a frequent topic of discussion was whether HCI was a ‘discipline’.

Comment 3

HCI, as a community, has certainly matured, in the sense of increasing in size and variability (see Carroll (2010) for an elaboration of this claim). However, if a discipline’s knowledge supports its practices (whether of design or of science – see Comment 1 earlier), then it is unclear how practices can be mature (in the sense of fully developed) while its knowledge ‘is still developing or needs to develop’. Dix’s later identification of the challenge of developing reliable knowledge in HCI supports this point. Knowledge which is not reliable cannot support mature practices, which are reliable.

 

It is unclear whether this was ever a fruitful topic, but academic disciplines are effectively about academic communities and there is ample evidence of the long-term stability of the international HCI/CHI community.

Comment 4

Here, Dix is at one with Long and Dowell (1989) and Dowell and Long (1989), and in disagreement, for example, with Carroll (2010). The question arises, then, as to how HCI knowledge can be made more reliable. The Long and Dowell references propose one answer to the question. However, the fundamental distinction between an HCI discipline and the HCI community, as made here by Dix, remains critical for any answer to this question.

 

However, as in computer ‘science’, the central scientific core of HCI is perhaps still unclear; for example, a strength of HCI is the closeness between theory and practice, but the corresponding danger is that the two are often confused. The paper focuses particularly on the challenge of methodological thinking in HCI, especially as the technological and social context of HCI rapidly changes. This is set alongside two other challenges: the development of reliable knowledge in HCI and the clear understanding of interlinked human roles within the discipline. As a case study of the need for methodological thinking, the paper considers the use of single person studies in research and design. These are likely to be particularly valuable as we move from a small number of applications used by many people to a ‘long tail’ where large numbers of applications are used by small numbers of people. This change calls for different practical design strategies; focusing on the peak experience of a few rather than acceptable performance for many. Moving back to the broader picture, as we see more diversity both in terms of types of systems and kinds of concerns, this may also be an opportunity to reflect on what is core across these; potential fragmentation becoming a locus to understand more clearly what defines HCI, not just for the things we see now, but for the future that we cannot see.

1. Overview

This paper has its roots in the Inaugural Lecture of SIGCHI Ireland (Dix, 2008), where it seemed a suitable occasion for a sort of ‘state of the nation’, giving a personal view of where HCI stands as a discipline and how it can develop and grow. This then seemed a suitable topic to build upon for this John Long Festschrift special edition of Interacting with Computers, as Long has himself written with such sharp insight on the directions of the field and raised questions that have prompted many others to look at the discipline of HCI as a whole, not merely their own work within it. The SIGCHI Ireland talk also drew upon the discussions at the UCLIC/Equator two-day workshop on The Future of HCI in the UK in 2007 (see also Blandford, 2007), especially the discussion on roles and genres in HCI.

The basic argument of this paper is that while HCI has matured as a community and also as a practice, it is still developing, or needs to develop, its own roots as an academic discipline.

Comment 5
 This point has been addressed in Comment 4.

I focus especially on methodological thinking, not in the sense of attempting to establish a single methodological framework, which seems doomed to failure, but in encouraging an ongoing methodological critique of the methods we borrow from other disciplines as we adapt them to our own.

Comment 6

A ‘methodological critique’ presumes a framework of some kind. Multiple frameworks, described at a high level of description, would constitute a de facto ‘single methodological framework’, either by abstraction or by generification. The point is important for identifying the possible relations, which exist between the work of different researchers.

This discussion of methodology is set alongside two other, interconnected, challenges to the discipline, the need to establish reliable, validated knowledge (another theme of Long’s, although here used more broadly),

Comment 7

According to Long (1997), validation of HCI knowledge requires its ‘conceptualisation; operationalisation; test; and generalisation’. Dix is not explicit about his broader use of the term.

and the need to understand the way different roles within the HCI community fit together and should communicate their results in order to produce a stronger science as a whole.

Note that I will occasionally use the term ‘science’, but in a broader, perhaps more historically felicitous, sense than the ‘hard’ sciences. Much of HCI research (but not necessarily all, see Section 3) is oriented towards producing knowledge that is usable for design, hence that has, in a sense, ‘truth’ about the world. This knowledge may not be quantitative, or formal, but does need to be what I term synthetic (Dix, 2008); that is, helping you to achieve some effect, or, in Rogers’ (2004) terms, prescriptive, “providing advice as to how to design or evaluate”. In the broad academic dichotomy this seems the domain of the Sciences as opposed to the Arts.

Fig. 1. Roadmap of the argument structure of the paper.

As this is a long paper, Fig. 1 shows a roadmap of the main argument structure. The next section provides a motivation for this work, noting the strengths of HCI as a growing discipline and the need to develop stronger independent disciplinary roots. Section 3 takes a short detour and discusses the issue of what HCI is about, its ultimate purpose and goal as a discipline (as opposed to the purpose of individual pieces of work within the discipline); this is largely to contextualise the work within that of others, most notably Long, who have focused strongly on this question. In contrast, this paper is more about how the discipline is conducted, what it does. Section 4 picks up this theme, proposing and discussing three challenges for the discipline: methodology, knowledge and roles. The need for clarity in understanding methodology is particularly important given the rapid changes in the technology and human practices that are the subject of HCI. Section 5 discusses these changes, probably familiar ground for many readers. This then leads, in Section 6, to a case study of methodological thinking in the work of Razak on single person studies. This technique is of particular use when applied to design for peak experience: best for some as opposed to good for all. This trend towards the long tail of more individual micro-applications and the democratisation of information technology seems to be common to many of the developments in the use of technology and presents its own challenges for traditional HCI practice.

The paper devotes a substantial part of Section 4 to discussing evaluation and the whole of Section 6 to the use of single person studies. In both cases these are used as examples of the importance of clear methodological thinking within HCI; although evaluation is itself a central topic in the discipline. It is hoped that the paper gives insight on both these topics; however, the main focus is on the broader issue of methodology alongside the other core challenges of knowledge and roles in HCI as an integrated community and a scientific discipline.

2. HCI discipline and science

2.1. The growth of a community

The roots of HCI go back at least 50 years, with Brian Shackel’s (1959) paper on ergonomics of displays. However, the real beginnings of HCI as an emerging discipline are more like 25 years ago, with the founding of early conferences: Interact, CHI, British HCI and Vienna HCI (now ceased). My own first international conference was Interact ’87 in Stuttgart at which Brian Shackel gave a plenary, a welcome, I believe, on behalf of IFIP TC.13.1 A key question he posed was whether HCI was a discipline, or merely a meeting between other disciplines. Now this seems rather like navel gazing, but, at the time when HCI was developing coherence, it was a significant question.

Looking back now we can easily say, “of course it is an academic discipline” – for what is an academic discipline if not an academic community? After 25 years of IFIP TC.13, and numerous national societies – SIGCHI, Interaction (formerly the British HCI Group), not least the recent SIGCHI Ireland – clearly there is a community!

Comment 8

An academic discipline surely presumes a community. However, a community does not necessarily presume a discipline – see also Comment 3.

But that is a little too glib. Science, using the word in the broadest sense, goes beyond community; to be an academic discipline also requires a coherent basis for knowledge. Mere acceptance of knowledge by a group is not sufficient; we need some assurance of the truth or validity of our knowledge.

Comment 9

True or valid knowledge would be mature, because reliable. Such knowledge would support mature (because reliable) practices – see also Comment 2 earlier.

 

When flying we are not happy to rely merely on the accepted opinion of aerospace engineers; we want to know that they have a basis for their designs beyond accepted practice.

2.2. Craft or science

This brings to mind the discussion in the late 1980s, initiated by Long and Dowell (1989), about whether HCI was craft, engineering or science. Arguably craft is really more about individual experience, and craftsmanship is not what one would want in an aeroplane, nor in Internet banking. Whether we call it science, engineering or simply being academic, we need to be able to give away knowledge to others who should then be able to apply it with assurance. However un-politically correct it is to use this sort of positivist language – yes, we do want truth and fact sometimes!

Comment 10

As an aside, I hope by now that Dix has read Carroll’s (2010) acerbic comments on positivist tendencies in HCI. Maybe we need a new ‘school’.

Note that in talking about the facts of HCI we need to distinguish between the domain studied, which is by its nature complex and nuanced, and our understanding of it, within which we seek clarity. Of course, we rarely, if ever, have complete knowledge – there is always an epistemological gap; and the level of confidence we accept varies from domain to domain. However, for many aspects of HCI research, while the subject matter is often culturally determined and rich, facts about these contexts are not matters of opinion. Similarly, in design itself, while the goals of design and the people for whom the product is being created are contextually and culturally situated, the success of methods in achieving appropriate designs is not.

Comment 11

The success of methods in achieving appropriate designs is a measure of their reliability and so maturity. Methods here constitute HCI discipline knowledge, which supports discipline practices, as reported in Hill’s paper (2010). See also Comments 3 and 4.

So, are we getting there; are we developing this coherent basis for knowledge in HCI?

2.3. Second generation HCI

The demographic of the HCI community varies between countries, but certainly in the UK there is an increasing number of ‘second generation’ HCI people; that is, people who have done PhDs, master’s and maybe even undergraduate courses with a strong HCI element and have now become teachers and senior researchers themselves. I would describe myself as originally a mathematician who moved into HCI; others have roots in psychology, computer science, sociology, but an increasing number are straight HCI people.
As a sign of community this is very powerful; no longing for a half-remembered academic homeland elsewhere, but an academic generation who own HCI as their home.
However, this has also given me concern for a number of years. As we gradually lose those strong connections with our old disciplinary roots, have we developed equally strong ones within HCI?

2.4. From community to discipline

Indeed, there are signs that we do not yet have strong enough methodological roots. One key example is the relationship between HCI research and practice. One of the great strengths of HCI is that the two are close. To some extent this is true of computing also, but even more so in HCI. There are few fields where the practitioners and researchers can so freely attend the same events, present work to one another and hold discussions. Again this is powerful for HCI as a community and good news for funders looking for industrial relevance; indeed, in many countries, it has been commercial pressure that has driven often reluctant computing departments to take HCI seriously. However, the danger of this is that it is easy to confuse the two. Nowhere is this more evident than in usability evaluation.

We all know that evaluation is the sine qua non of HCI; professionally, often the key role of the usability practitioner; and, academically, try getting a paper published without that evaluation box being ticked! The techniques and tools for evaluation are often the same for usability practice and HCI research, whether formal experiments, usability labs, ethnography, prototyping, or maybe even cultural probes or technology probes. However, whilst the techniques are similar, the goals are different. For the usability professional, the ultimate aim is to improve the product, whereas the goal of research is to gain new understanding.

Comment 12

Dix raises the interesting issue of the relationship between HCI research and practice: ‘One of the great strengths of HCI is that the two are close … However, the danger of this is that it is easy to confuse the two … (For practice) the ultimate aim is to improve the product, whereas the goal of research is to gain new understanding’. It is worth noting that the ‘closeness’ and ‘confusion’ issue is resolved by Salter’s Design Research Exemplar (Figure 8). Research and practice are close, inasmuch as the latter is embedded in the former. Both, then, may properly use the same design methods. Research and practice, however, are distinguished, and so not confused, because only research possesses a General Requirements and a General Artefact Specification. Further, unlike practice, all its component relations are formal. Research acquires and validates HCI knowledge, both substantive (for example, models) and methodological (for example, evaluation), which supports HCI practices (for example, design).

In fact, even these goals are interlinked: research systems often need to be designed well enough for effective experimentation or deployment; and effective design will be based on a thorough understanding of the context and technology.

Comment 13

Here, the ‘understanding’ of the practitioner is not to be confused with the ‘understanding’ of an HCI scientific discipline. The latter would explain already observed HCI phenomena and predict yet-to-be-observed HCI phenomena.

 

However, for the researcher this formative creation of an experimental prototype is NOT the research itself, but merely the preparation for the research; and for the practitioner the understanding they gain is primarily in order to design better systems now, not establish fundamental knowledge for 10 or 20 years’ time.

Comment 14

See Comment 13.

So here we have a great strength of our community, but one that needs a clear understanding of purpose in order to contribute to a well-founded academic discipline. Reading any conference proceedings or journal it is evident that this clarity of purpose is not yet there. We clearly have work still to do.

Comment 15

The absence of clarity is indeed a problem for the foundation of an academic discipline of HCI (in Dix’s words). However, the absence of consensus is also a problem. This absence derives from a lack of agreement as to what kind of discipline HCI aspires to be, and from the differences of purpose of such possible disciplines (for example, design for HCI as Engineering and understanding for HCI as Science (Long and Dowell, 1989)).

3. What HCI is about and how it goes about it

Long and Dowell’s (1989) seminal paper on whether HCI was craft, science or engineering, and moreover Long’s (1996) elaboration of the relationship between HCI research and design, were predominantly about what HCI is about as a discipline: the subject matter, what it produces; the latter paper in particular focused on framing the “HCI general design problem”. Similarly, Diaper (1989) in his opening editorial for Interacting with Computers, re-iterated more recently (Diaper and Sanger, 2006), suggests that the goals of the discipline of HCI are “to develop or improve the safety, utility, effectiveness, efficiency and usability of systems that include computers”.

Of course, many would object to both Long’s and Diaper’s apparently Taylorist approaches to the definition of HCI’s purpose, certainly adding experience (McCarthy and Wright, 2004), or even human values (Harper et al., 2008).

Comment 16

If work is ‘any activity seeking effective performance’ (Long, 1996), then this conception is able to subsume Taylorist, and indeed any other, conceptions of ‘work’, including experience and human values (see also Long, 2010).

 

However, it should be noted that Diaper and Sanger (2006) expressly state that for them ‘work’ is taken to encompass leisure and is the full activity of being human; that is, satisfaction is subsumed within effectiveness. Similarly, Long (1996) states that work includes ”office work, factory work and home work” (albeit followed by the rider ”any activity seeking effective performance”). However, this is all part of the discussion of what HCI is about.

 

Note that this ultimate ‘purpose’ of HCI is distinct from the goal of an individual piece of work, discussed at the end of the last section. The goal of an evaluation of a prototype within HCI research should always be to gather understanding, not to improve the prototype as an artefact. However, the larger purpose of obtaining that understanding may be to help others engaged in practical design to improve their own devices and systems, or maybe purely for the sake of the understanding itself.

Comment 17

The ‘understanding’ here is presumably scientific (that is, constituted of explanation and prediction), because it is acquired by HCI research. The means by which such knowledge is applied to the design of artefacts needs to be made explicit (see also Comments 1 and 12). If, on the contrary, HCI research is of (engineering) design, then evaluating a prototype would be a (validation) test of HCI knowledge (substantive or methodological), used in the development of the prototype (Long, 1997).

 

Although unavoidably based on the author’s personal prejudices about the ultimate purpose of HCI, this paper attempts, not to avoid this issue, but to address a slightly different one, namely, how HCI as a discipline goes about addressing its goals, concerns, purpose: that is how it goes about doing what it is about. Thus the breakdown in the next section is fundamentally about the community of HCI:

  • (i) roles – how the community is constituted,
  • (ii) methodology – how the members of the community act,
  • (iii) knowledge – how the members of the community communicate.

Note that the ‘knowledge’ here has two purposes, (iii.a) communication within the community and (iii.b) communication to (or with) those outside the community. As I am focusing on the academic discipline of HCI, ‘outside’ here will include HCI practitioners (when acting in that role; a practitioner may also be an academic and vice versa). It is the latter, (iii.b), that Long (1996) is referring to when he describes the ‘discipline’ of HCI to be the ”use of HCI [knowledge]3 to support HCI practices…”. In contrast, the focus of this paper is more on the internal communications (iii.a) that together build a coherent and reliable body of knowledge, but of course the availability and reliability of that body of knowledge is exactly what is needed for its exploitation by those ‘outside’.

Comment 18

Building a coherent and reliable body of knowledge for HCI requires the acquisition of such knowledge and its validation (as conceptualization; operationalisation; test; and generalisation – Long, 1996). ‘Internal communications’, then, need to address at least all these validation practices; but also the consensus, as concerns them, such that researchers can build on each others’ work.

 

It may be evident that human interactions, including conferences and discussions, are not included explicitly in my breakdown, although they are of course often the place where ‘knowledge’ is first presented and also often the place where it is conceived. In fact the breakdown is not precious and it may be extended to include social and community events explicitly in this picture. Indeed, in reporting on the methods used in DEPtH, a project focused on understanding physicality in design, we remarked on the central importance of events as a source of knowledge and data as well as a place for community building (Dix et al., 2009). That is, the events were important in DEPtH as part of its methodology.

Comment 19

Undoubtedly, events were important to the project, as claimed by Dix. However, is the methodology, of which they were a part, HCI knowledge? If so, it would need to be conceptualised; operationalised; tested; and generalised (Long, 1996). Such events, as outlined, appear not to evidence such potential.

So, the focus on externalised knowledge as the locus of community interaction rather than face-to-face meetings may seem odd, but effectively it acknowledges the different ways in which knowledge is passed on. On the one hand there is a form of diffusion or contagion, where knowledge passes from person to person. This is not to suggest that knowledge is in lump-like memes (Dawkins, 1976), like passing on a library book; the process is much richer, more one of knowledge being formed and informed in the relations between people. We may discuss concepts with colleagues, at conferences, and in online forums, building up an understanding within the academic community. We may also discuss with practitioners, both learning about their concerns and experience and also passing on the research community’s accumulated and distributed understanding.

Comment 20

Is this understanding part of HCI knowledge, for example, of a scientific HCI discipline (see Comment 2). If so, it would need to be validatable (see Comment 11).

 

This human process is of the utmost importance. It can be seen as a form of long-term establishment of common ground. While the theory of common ground (Clark and Brennan, 1991; Clark, 1996) is normally applied to conversations or similar discourse, the same arguments for growing mutual understanding apply to communities albeit with slightly different ramifications and mechanisms. This human process is also a crucial part of education.

Comment 21

The reference to ‘common ground’ is very interesting. It prompts the question of whether ‘common ground’ is sufficient to support the consensus, necessary for HCI researchers to build on each others’ work, as required by Long and Dowell (1989).

 

However, while the human process of diffusion and mutual understanding is important, it is not sufficient; one of the distinguishing features of an academic discipline is in its externalisation of knowledge. We communicate through papers, videos, and other artefacts such as software (although it rarely continues running for long). This is by nature de-contextualised and abstracted, inasmuch as any externalisation abstracts from its subject, but precisely because of this it is applicable and persistent. The published extant knowledge of the discipline is the defining boundary object4 of the community.

Now in fact this picture is itself idealised, and some might say naïve, in that the interpretation of externalised knowledge is itself governed by understanding gleaned usually through diffusive processes. My own alma mater discipline, mathematics, is an arch example of this, where there is a vast, largely unwritten, understanding of processes and interpretation that accompanies the formal mathematical theorems and proofs. However, while it is certainly naïve to ignore the rich community that surrounds academic externalised ‘knowledge’, both practical and theoretical concerns suggest this externalisation as an ideal, or at least a touchstone.

As noted, this paper is not primarily focused on the purpose of HCI, what it is about. Still, the dichotomy between inside and outside, academic and practitioner deserves a few words.

Long’s own work situates HCI as an academic discipline (or in his words HCI research) in relation to HCI design as seen in the quote above or his phrase ”fit-for-design-purpose” (Long, 1996). Similarly Diaper, whilst critiquing the common use of ‘usability’ on its own as a synonym for HCI, still gives goals that are focused on design, albeit interpreted more broadly than pure usability: ”safety, effectiveness, efficiency, and usability”. Coming from a different, more qualitative, position within HCI, Rogers (2004) also, in her excellent review and critique of (social and psychological) theory in HCI, effectively situates HCI theory in relation to its utility in HCI practice.

In teaching I usually distinguish between HCI as an academic discipline and HCI as a design discipline. The latter concerns using skills, knowledge and processes in the production of devices, software and other artefacts (or more generally interventions (Dix et al., 2004)) that in some way influence human interactions with computers (or more generally technology). The former, HCI as an academic discipline, is the study of situations involving people and technology (note, the series name of the British HCI conference), the design practices involved in such situations, and tools and techniques that are or can be used in either. In order to clarify this distinction, the remainder of this section will use ”HCI Practice” to denote the set of situations and design practices and ”HCI Research” to denote the academic discipline.

Comment 22

If HCI research, as an academic discipline, studies human-computer interaction and HCI practice designs human-computer interactions, the relation between research and practice needs to be made explicit (see also Comments 2 and 17). In passing, Dix’s position here seems to be similar, or even identical, to that of Carroll (2010).

 

This separation is relatively uncontentious and accords with all the positions described so far; in particular ”HCI Practice” here is close to or identical to Long’s ”HCI design”. What is critical is whether HCI Research is simply about HCI Practice, or whether HCI Research is for HCI Practice (and usually for its improvement). If the latter is the case, then HCI is purely a vocational discipline focused on its practical outcomes.5 The, now slightly dated, use of the term ‘usability’ for HCI or, more commonly now, ‘interaction design’, both orient us to regard HCI as just vocational. Now the word ‘just’ in no way minimises the importance of the vocational use of HCI, nor minimises its importance within HCI, but challenges whether it is the sole end of HCI.

Comment 23

It would be interesting to know, whether Dix considers electronic and other forms of engineering ‘just vocational’. If so, HCI research could be ‘for’ HCI practice, possibly in the absence of HCI research ‘about’ practice. The function of the latter would then need to be made explicit and justified (see also Comments 2, 17 and 22).

 

The confusion already alluded to between the methods in HCI is partly fuelled by this apparent identification of HCI Research and HCI Practice. It would be consistent to maintain the distinction between HCI Practice itself (the practice) and HCI Research about-but-for-the-purposes-of HCI Practice. However, this muddling of the two in methodology does seem symptomatic of a general lack of distinction.

Comment 24

The apparent confusion between HCI practice, not supported by HCI research, and HCI practice, supported by HCI research, can be clarified in at least two ways. First, the former can be considered as craft (with its own craft knowledge, acquired by experience, example and trial-and-error) and the latter, as engineering (with its own knowledge, acquired by research) (Long and Dowell, 1989). Second, the former is embodied in empirical derivation and validation relations between Client Requirements and Artefact and the latter is embodied, in addition, in formal derivation and validation relations between Specific Requirements Specification and Specific Artefact Specification. The relationships between the two practices are empirical derivation and validation (Salter, 2010, Figure 8).

 

The roots of HCI undoubtedly began in the practice of creating systems and situations where people could effectively (and all the other adjectives) interact with technology. However, as HCI has grown, it has encompassed a whole study of human endeavour and activity, which appears to go beyond purely vocational aims. Probability was born out of gambling, but does not restrict itself to studies funded by bookmakers.

Whether or not one accepts the arguments for pure ‘curiosity-driven’ research, and even if one takes a utilitarian approach, there are good reasons to believe that HCI Research for HCI Practice is often best served by focusing on HCI Research about HCI Practice.

Comment 25

It would be of great interest to know exactly what these ‘good reasons’ are. Dix alludes to the acquisition of ‘more general knowledge’; but fails to indicate and exemplify how HCI research ‘about’ practice best serves HCI research ‘for’ practice (and so practice itself).

 

An excessive utility focus tends to mean that research runs behind technology. Work on the newest thing is too late for it, and looking for the next big thing is almost bound to fail; the big win is in using the new, the old and ideas for the next as means of uncovering more general knowledge. It is this knowledge that will be of value for the next new big thing and the next and the next after that. For example, consider the explosion in Facebook research after it reached its first 100 million users; this has applied some existing theory from outside HCI, but appears to be largely a matter of the field catching up on a phenomenon after it has happened, rather than informing the development and design. This clearly happens in any discipline when there are major changes, but it should be the exception not the rule.

Perhaps most worrying is that we can start to accept a technology-driven approach as normal. At CSCW 2002, Brand (1994) gave a keynote taking the idea of timescales of structures in his book “How Buildings Learn” and generalising this to look at different timescales in other kinds of activity. He tentatively suggested that there was perhaps a similar set of timescales for research (longest timescale), development (faster) and production (now). However, one of the questioners suggested that in fact in CSCW (and read also HCI) research was if anything faster-moving than development, driven constantly by the latest technology; the audience seemed to agree, without noticing the unintended irony of the situation following a talk that emphasised the importance of long-term thinking.

This all said, the rest of the paper stands neutral on this (important) issue of what HCI is ultimately about, and instead focuses more on the way in which we as a community orient ourselves to create reliable knowledge no matter what form it takes.

Comment 26

Creating HCI knowledge, which reliably supports HCI (design) practice, cannot be other than intimately related to ‘what HCI is ultimately about’. To divorce the two matters in considering the challenges of methodology, knowledge and roles is a novel and interesting approach; but may prompt as many problems as it solves. See also following comments.

 

4. Three challenges

To recap, HCI research often has trouble distinguishing itself from HCI practice. Whether or not one believes that the purpose of HCI research is to serve HCI practice, clearly there is a difference between the two. This confusion seems to stem from a lack of solid methodological roots: HCI has grown away from its parent disciplines, but has not clearly established its own methodological heart.

Considering this, I propose three challenges, defined in the previous section, that we need to address in order to develop the academic discipline of HCI: challenges of methodology, of knowledge and of roles. As noted these are primarily challenges addressed to the conduct and process of the community in relation to building a systematic and reliable body of knowledge. I am not addressing the (more thorny) issues of what HCI is and what it is about. Even within this narrower remit, there are surely many other challenges, not least that of the inter-human relationships noted; however, three is enough to start with and is the classical number of points in any argument.

4.1. Methodology

New disciplinary roots require new methods. We have many methods in our HCI toolkit, so this does not seem to be a problem. However, these methods are often ‘borrowed’ from other disciplines. Within an established discipline one can use accepted methods without a great deal of thought as to whether they are appropriate because one is using them in the same way that others have before. But if we simply adopt these methods in a new context without considering the desiderata that made them appropriate in the original context, they may be misleading or lead us to conclusions that are downright wrong.

To adopt methods in a new discipline means we have to understand why the methods work; that is we have to think methodologically. Now here I am not using methodology in the way that has become common in computing and in HCI; that is simply to mean method! When I say we need to think methodologically I mean we need to think about our methods.

Comment 27

The Nielsen and Miller examples are both interesting and instructive. They also underline the problem of relating HCI research about HCI to practice. Neither Nielsen’s nor Miller’s practice prescriptions, even including all the conditions of correct application, have been validated (as conceptualised; operationalised; tested; and generalised – Long, 1996) with respect to design, as the solving of design problems. Thus, the practice prescriptions, including the conditions of application, cannot be considered ‘reliable’ HCI knowledge (of the kind desired by Dix).

 

The frequent confusion between usability evaluation and evaluation in research is just such a failing – adopting methods without understanding methodologically why they are appropriate and what they are for. But to be fair, this is because thinking methodologically is hard. Many established disciplines have been around hundreds of years and so have had time to develop or evolve appropriate methods, but if we look to newer disciplines or sub-disciplines we often see methodological problems.

As an example, and at the risk of alienating a community … some years ago I was evaluating some work that was on the boundaries between HCI and distributed artificial intelligence (DAI). The work seemed to be fundamentally flawed, as the researcher had performed single runs of a stochastic simulation with different conditions. In an HCI setting this might be like running an experiment with a single user in each condition. Now because of the nature of simulations there are times when a single run is sufficient; when certain conditions hold, a long enough run will exhibit all possible behaviours and so one long run is effectively like doing lots of short ones. Unfortunately in this case, the runs were too short due to memory problems. I was worried that I was going to have to send the researcher back to do repeat runs of everything! Happily there was a second reviewer who came from the DAI community and so I was able to ask him about this . . . what was the accepted practice in the community. Again happily the other reviewer was not just from the community, but also had a deep methodological understanding of the history of the area. Early in the development of DAI a key figure had shown that single runs were acceptable so long as the relevant conditions applied … unfortunately the ‘single run’ part got remembered, but the conditions had been forgotten. The research we were evaluating was simply following accepted practice in its discipline . . . it is just that the accepted practice was flawed.

Now before you judge DAI too harshly, think how often you have read papers (or written them yourself) that quote ”you only need to test with five users” (Nielsen, 2000) or Miller’s (1956) 7 ± 2 without really checking that they are valid in the context? In the case of the figure of five users, this was developed based on a combination of a mathematical model and empirical results (Nielsen and Landauer, 1993). The figure of five users was calculated:

  • (i) as the optimal cost/benefit point within an iterative development cycle, considerably more users are required for summative evaluation or where there is only the opportunity for a single formative evaluation stage;
  • (ii) as an average over a number of projects and to be applied properly needs to be assessed on a system by system basis; and
  • (iii) based on a number of assumptions, in particular, independence of faults, that are more reasonable for more fully developed systems than for early prototypes, where one fault may mask another.

Just as in the DAI example, Nielsen and Landauer’s original paper outlines many of these limitations, but the finding is typically used without similar care.
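The model behind the figure is worth seeing explicitly. Nielsen and Landauer (1993) model problem discovery as a simple saturation curve: the proportion of problems found by n users is 1 − (1 − λ)^n, where λ is the probability that a single user exposes a given problem (they report λ ≈ 0.31 on average, though, as point (ii) above notes, it varies system by system). A minimal sketch:

```python
# Discovery model from Nielsen and Landauer (1993):
# proportion of usability problems found by n test users.
def proportion_found(n, lam=0.31):
    # lam is the per-user detection probability; 0.31 is their reported
    # average, but it should be estimated per system.
    return 1 - (1 - lam) ** n

# With the average lam, five users find roughly 84-85% of problems;
# the remainder is the cost/benefit trade-off of point (i) above.
for n in (1, 3, 5, 10, 15):
    print(n, round(proportion_found(n), 2))
```

Running the loop makes the diminishing returns visible: most of the gain comes early, which is exactly why five is an *iterative* optimum rather than a universal sample size.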

Similarly, Miller’s (1956) 7 ± 2 is about working memory, yet is frequently applied incorrectly when, in fact, other cognitive or visual processes are important, for example, the length of menus. In fact a visual menu does not require much working memory so long as it is organised clearly, and in laboratory experiments Larson and Czerwinski (1998) found that far broader menus were optimal. One can easily work out simple back-of-the-envelope models of menus to calculate the optimal trade-off between breadth and depth based on time to search the menu visually and time for the page to refresh (Dix, 2003). For the web, where visual search is typically much faster than refresh time, the optimal figure is typically 60-plus items per menu. This said, Miller’s 7 ± 2 may be useful when applied to the depth of the menu, if users try to keep track in their head of where they have been. There is some evidence that older users, often with poorer working memory, find deep menu structures particularly difficult (Rouet et al., 2003). For more examples of misuse, Eisenberg (2004) produced an excellent critique of 7 ± 2 from a designer’s viewpoint, although he does not appear to have realised that the poor uses of it are actually misuses.
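Such a back-of-the-envelope model, of the kind Dix (2003) describes, can be sketched in a few lines. The timing constants below are illustrative assumptions (a 5 s page refresh against 50 ms per item scanned, roughly the slow-refresh web case), not figures from the paper; the point is the shape of the trade-off, not the exact numbers.

```python
import math

# Back-of-the-envelope menu model: reaching one of n_items through menus
# of breadth b takes log_b(n_items) levels, each costing one page refresh
# plus a linear visual scan of (on average) half the menu.
def selection_time(b, n_items=10_000, t_refresh=5.0, t_scan=0.05):
    depth = math.log(n_items) / math.log(b)       # levels needed (fractional is
    return depth * (t_refresh + t_scan * b / 2)   # fine for a rough model)

# When refresh dominates scanning, the optimum breadth is far above 7 +/- 2.
best = min(range(2, 200), key=selection_time)
print(best, round(selection_time(best), 1))
```

With these constants the optimum lands around 60 items per menu; making the refresh cheap relative to scanning pulls the optimum back toward narrow, deep menus, which is why no single ‘magic number’ applies across contexts.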

As well as misusing past results, it is easy to go fundamentally wrong in the application of empirical methods whilst deriving new results. A vignette I have used several times previously (Dix, 2004) illustrates the importance of methodological thinking and what can happen when this fails. It concerns a paper that was published at a major ACM sponsored conference in HCI a few years ago. To spare the authors, I will not name the paper. It may sound familiar; however, this is likely to be because it is typical of many papers that fail methodologically in the same way.

Fig. 2. Systems placed in a 2 x 2 matrix.

Note that the paper being described was not only in a conference with one of the most rigorous reviewing procedures, but also included among its authors a major figure in the field. The kind of methodological problems described below are not simply those of students or junior researchers, but common amongst the most senior figures in HCI.

The particular paper was in fact a solid empirical paper: experiment, design, and evaluation. It was considering collaborative support for a task; call it task X. The work began by considering three pieces of software; call them A, B and C:

  • A domain-specific software, synchronous group interaction,
  • B generic software, synchronous,
  • C generic software, asynchronous.

The three systems were placed in a 2 x 2 matrix: domain-specific vs. generic on one axis and synchronous vs. asynchronous on the other (Fig. 2). Incidentally, these 2 x 2 matrices are ubiquitous in many areas and are indeed extremely powerful analytic tools (Dix, 2002, 2008).

The paper then went on to describe the experiment comparing these conditions. There were a reasonable number of subjects in each condition (not just 5!) and sensible quality measures were used for assessing the outcomes of the task. Furthermore the experiment revealed statistically significant results … well certainly p < 0.05. There were two main effects:

  • (i) domain-specific software was better than generic software (Fig. 3a), and
  • (ii) asynchronous was better than synchronous (Fig. 3b).

The paper then concluded that what was clearly required was the missing gap: domain-specific asynchronous software, and then went on to describe the design and evaluation of an application in this area.

This all sounds exemplary, so what’s wrong with it?

First of all the paper was a little strong in its suggestion that (i) and (ii) meant that domain-specific asynchronous software would be best of all. Interaction effects are very common in HCI, and there was no argument as to why we should not expect an interaction in this case. However, that said, certainly the results would suggest that it is a good case to investigate further.

However, the big problem is harder to see, indeed if you blinked at the wrong moment when reading the paper it was easy to miss entirely. The paper started off with three systems and quite properly analysed them along dimensions. However, it then went on to conduct the experiment as if there were two independent variables being manipulated, when in fact there was precisely one piece of software for each condition. This is analogous to having an experiment with precisely one user in each condition – clearly problematic, and yet it is common to use a single piece of software, just as in this case, and never realise there is a problem!

In case it is not obvious why this is so bad: these were three completely different pieces of software that happened to have the relevant properties. Suppose application B just happened to have been very poorly designed. Application B would perform worse than application A, giving rise to the apparent effect that domain-specific was better than generic (Fig. 3a). Application B would also perform worse than application C, giving rise to the apparent effect that asynchronous was better than synchronous (Fig. 3b). That is, the effects may have been due to an entirely extraneous factor, and nothing to do with the actual properties being studied.
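The confound is easy to demonstrate with a toy simulation (all the numbers here are invented for illustration): give each application an intrinsic design quality, make B the poorly designed one, and both apparent ‘main effects’ emerge even though neither dimension has any real effect on performance.

```python
import random

random.seed(1)

# Intrinsic design quality of each application; B happens to be poorly
# designed.  Task scores depend ONLY on this, not on the two dimensions.
quality = {"A": 0.7, "B": 0.4, "C": 0.7}
conditions = {  # (specificity, timing) -> the single application used
    ("domain-specific", "synchronous"): "A",
    ("generic", "synchronous"): "B",
    ("generic", "asynchronous"): "C",
}

def run(app, n=30):
    # one task-quality score per participant: intrinsic quality plus noise
    return [quality[app] + random.gauss(0, 0.05) for _ in range(n)]

scores = {cond: run(app) for cond, app in conditions.items()}
mean = lambda xs: sum(xs) / len(xs)

# Both 'main effects' appear, entirely due to B's poor design:
print(mean(scores[("domain-specific", "synchronous")]) >
      mean(scores[("generic", "synchronous")]))    # domain-specific "better"
print(mean(scores[("generic", "asynchronous")]) >
      mean(scores[("generic", "synchronous")]))    # asynchronous "better"
```

Exactly as in the vignette, an analysis treating the cells as manipulations of two independent variables would report both effects as real, when the only true cause is the unmeasured quality of one specific piece of software.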

Fig. 3. Experimental results (schematic).

The problem here is that the paper (and many in HCI) has ‘borrowed’ controlled experimental methods from psychology, but these methods embody assumptions that often do not hold in HCI. In particular, controlled psychological experiments are designed so there is a single simple cause or manipulation between conditions. However, when used in HCI, as above, there are many uncontrolled causes. Often we want experiments that have some form of ecological validity, which makes this worse.

However, just imagine trying to run the above study as if it were really a heavily controlled psychological experiment: we take a single piece of generic synchronous software B, first tweak it so it becomes domain-specific (call it A) and then tweak it again to make it still generic but asynchronous (call it C). This sounds stronger, and sometimes can work. However, in this case (and many) you would need to take a piece of software that is well designed for a particular situation (task generic and synchronous) then change to ‘just’ make it domain-specific or ‘just’ make it asynchronous; and furthermore you would need to do this without changing anything else and have it equally ‘good’ in all other respects after the tweak . . . clearly not possible.

Comment 28

The possibility, however, would not be the same for all types of research. The purpose of the study, as ‘considering collaborative support for a task’, is insufficiently specified for us to judge. If the research is ‘about’ HCI (intended to ‘help reveal the mechanisms underlying the observed phenomena’), then, as in Psychology, controlled experimentation could be appropriate and possible. Alternatively, if the research is ‘for’ HCI (as in acquiring knowledge to diagnose and solve design problems), then controlled experimentation might not be appropriate or possible. Dix is right to underline the issue of methodology here; but such issues cannot be addressed in the absence of the nature of HCI knowledge, the HCI practice it supports and its validation. This requirement holds for different conceptions of the discipline of HCI, for example, craft, applied science and engineering (Long and Dowell, 1989).

 

Does this mean no experiments are possible in HCI? Far from it; by understanding the assumptions underlying controlled experiments and the way in which HCI experiments do not meet these assumptions, we are in a position to alter the practice of the experiment and the methods of analysis in order to make more reliable interpretations of the outcomes.

In this case, we could collect additional qualitative data (video, logs, audio) as is common in HCI, but then use these in order to help interpret the quantitative measures. Based on our knowledge of human interaction and the data that was collected, the researchers might have been able to come to some judgement as to whether the effect seen was to do with the difference between synchronous and asynchronous interaction, or due simply to specific features of application B or C.

Quantitative end-to-end measures are good at telling you whether there is an effect, and how strong it is, but it is the qualitative data that helps you to understand why you are seeing the phenomenon. Furthermore, richer data can help reveal the mechanisms underlying the observed phenomena. By ‘mechanism’ I mean the details of how a person engages in some form of activity or task including, where appropriate, observable phenomena, social and cognitive aspects. That is, a detailed account not just of what happens end-to-end, but the steps, actions and thoughts that are necessary in between. When you understand mechanism, it is easier to see whether an overall result is likely to generalise to a new situation, and to address empirical or observational data in an analytic manner (Dix, 2008). Furthermore, if you understand mechanism then you may be able to add new measures or interventions to study finer aspects of the overall interaction.

The importance, and sometimes difficulty, of establishing mechanism is important in other fields also. Nutritional advice for many years quoted a recommended daily protein intake far higher than today. The reasons for this lay many years before when protein deficiency was first recognised during a famine in Africa. Once the medical team studying this realised the disease they were seeing was due to protein deficiency, they began to administer protein supplements until the signs of disease went away and concluded that this was the required amount of protein. What they did not realise was that the body also burns protein for energy if other sources (fats and sugars) are not available. The children involved in the programme were starving and the team were first providing sufficient calories in the protein supplement before the children could start to use it properly as protein. Like HCI research, nutritional research involves many complex interconnected factors and it is often hard to alter one without affecting others. This makes it even more important to understand the underlying physiological mechanisms, however difficult this may be, in order to prevent major mistakes.

While this section has focused particularly on evaluation and in particular empirical evaluation, it is here to serve as an example of the wider issue that we need to explicitly think about methodology in all its manifestations.

4.2. Knowledge

To be a strong discipline, we need ways of gathering sound knowledge, ways of knowing what is true, and ways of establishing validity. As part of The Future of HCI discussions in the UK in 2007, Blandford (2007) emphasises that ”HCI research delivers new insights … that are valid”, and validation is also critical in Long’s (1996) discussion of the relation between HCI research and design.

As previously noted, evaluation is central within HCI, to the extent that if one wanted to point to a touchstone for what is accepted by the community to be the sign of valid research, surely evaluation would be it. The major exception is pure studies of existing work-practices in domains where technology is already present or expected to be introduced. Otherwise whatever kind of ‘thing’ you have produced as an outcome of your research, be it a concept, a method, a toolkit, or an application, what the reviewers of your paper want to see is some level of evaluation and typically evaluation with real users.

Comment 29

‘Evaluation’ here, by implication, is close to Long’s view of validation, as conceptualized; operationalised; tested; and generalized (1997). ‘Evaluation’ cannot be simply limited to the notion of ‘test’.

 

Of course seeking some form of validation of your work is critical – after all I have said that we are after truth not mere opinion. However, it is wrong to assume that evaluation is the only means to verify validity. In mathematics, you do not evaluate a theorem to see if it is true, you prove it, that is provide a justification of why it should be true. Mathematics is unusual in being able to put all of its trust in justification; the particular closed nature of mathematical argument makes this possible. In general, academic disciplines vary as to the relative importance of evaluation (Fig. 4); in particular, evaluation is more important where the phenomenon being studied is complex or hard to predict, or ability to reason may be limited. For example, in medicine one might establish, based on theory or prior art, that a particular family of compounds is likely to be effective in treating a condition (justification), but the complexity of the human body and pharmaceutical chemistry means you need laboratory studies and eventual clinical trials to find out which actually works (evaluation).

Fig. 4. Forms of validation.

Arguably if our work is only validated through evaluation it is pure invention, not academic research at all – after all we should have some reason for what we do, not just randomly trying what occurred to us in the bath one morning.

Comment 30

Salter’s distinction between empirical and formal derivation and validation at the levels of: Client Requirements/Artefact; Specific Requirements/Specific Artefact Specification; and General Requirements Specification/General Artefact Specification would be of use here to clarify the difference between design practice and design research (2010, Figure 8).

 

Evaluation is especially problematic for generative artefacts – that is things that in some way make other things or can be instantiated in different ways (Dix, 2008; Ellis and Dix, 2006). This includes theories, methods, guidelines, tools, architectures . . . indeed just about anything we produce as research outputs in HCI! The problem is that evaluation cannot exhaust all possible uses or instantiations of a generative artefact, so can never validate it fully. Indeed, as an easy to remember catch phrase:

the evaluation of generative artefacts is methodologically unsound (Ellis and Dix, 2006)

Even a single piece of software is a generative artefact as it is only in the specific moments of use that it becomes grounded. We cope with this in usability testing by trying to have sufficient users working on a sufficient range of tasks in order to sample the space of potential use. However, once we get to design notations or guidelines, sampling becomes all but impossible. To say we have reasonably covered the space we would need to get many different designers with many different briefs and then usability test each outcome with many different users … and that is just to answer the simple question ”does it work?”, let alone ”why?” and ”how can we improve it?”.

Note that this is not to suggest that empirical evaluation plays no part in validation of these complex generative artefacts such as methods; it is just that any empirical evaluation needs to be part of a theoretical argument or some other form of justification. As an example of this, Furniss et al.’s (2007) recent work studying usability evaluation methods adopts exactly this approach, using a combination of observations of practitioners using different methods, set within a theoretical framework including distributed cognition and resilience engineering theory. Related work has shown how various usability evaluation methods differ in the aspects of the design to which they are best suited (Blandford et al., 2008); that is, each forming part of a larger argument or process.

Within HCI there is a gamut of techniques available for both justification and evaluation, including for justification:

  • existing published results of experiments and analysis,
  • one’s own empirical data from previous experiments, studies, etc.,
  • arguments based on the above;
  • expert opinion (published or otherwise) and common sense,

. . . and for evaluation:

  • fresh empirical evaluation, user studies, timing data, etc.,
  • peer reviews of one’s work (do other people agree it is a good idea),
  • comparison with previous work (do the parts that should behave the same actually do so).

In any field, the powerful thing is how these work together to establish validity. Even in mathematics, the domain of pure justification, it is common to try out a potential theorem against example data either to look for counter examples (Popperian falsification (Popper, 1959) is evaluation based), or to suggest how a proof might proceed: anyone who has done geometry in school will have experienced this using sketched triangles and circles at the beginning of a proof. Here the evaluation is guiding the process of justification. This can be the case in HCI: as you notice patterns in empirical data you think ”of course, that must be because . . .”.

Equally important is that when one builds the justification of why something should work, the argument will not be watertight in the way that a mathematical argument can be. The data on which we build our justification has been obtained under particular circumstances that may be different from our own, we may be bringing things together in new ways and making uncertain extrapolations or deductions. Some parts of our argument may be strong and we would be very surprised if actual use showed otherwise, but some parts of the argument may involve more uncertain data, a greater degree of extrapolation or even pure guesswork. These weaker parts of the argument are the ideal candidates for focusing our efforts in evaluation. Why waste effort on the things we know anyway; instead use those precious empirical resources (our own time and that of our participants) to examine the things we understand least well.

This was precisely the approach taken by the designers of the Xerox Star. There were many design decisions, too many to test individually, let alone in combinations. Only when aspects of the design were problematic, or unclear, did they perform targeted user studies. One example of this was the direction of scroll buttons (see Fig. 5): should pressing the ‘up’ button make the text go up (moving the page), or the text go down (moving the view)? If there were only one interpretation it would not be a problem, but because there was not a clear justification this was one of the places where the Star team did empirical evaluation . . . it is a pity that the wrong answer was used in the subsequent Lisa design and carried forward to this day, but that is a different story! (Johnson et al., 1989; Dix, 1998)

Fig. 5. Xerox Star and modern (Mac OS X) scrollbars.

So ideally, for good science, we would focus our evaluation where our justification is weakest, thus obtaining maximum information from our work and pushing forward the field. Of course, we should certainly be aware, while we probe these areas of greatest uncertainty, that our assumptions may be wrong, that the obvious may in fact turn out to be false; but we should not primarily make the obvious our focus.

There is of course a place, albeit largely absent in HCI research, for reproducing previous studies as a basis for further work, especially when the earlier work was promising but inconclusive. In mathematics you will go back and recheck the proofs of earlier theorems on which your work depends. If you do not do this and subsequently a flaw is found in the older proof then your own work fails with it. If such a flaw were found in, say, Nielsen and Landauer (1993), would we have the means as a discipline to ‘fail’ all the succeeding work that relied on it?

There is a difference between reproducing previous studies for the purposes of verification and doing the obvious for the purposes of getting an ‘easy hit’. The overarching aim should always be systematically to increase the knowledge of the field.

Sadly when advising students I have to tell them that there is a conflict between this recommendation for good science, and what is best to get published. The easiest way to get a publication is to choose something that you have a pretty strong argument for and then run some sort of experiment on it. With such an experiment you know what to expect and so you can frame a clear experimental hypothesis, and are very likely to get a result that will be statistically significant. However, this gives least new knowledge. In contrast, experiments focused on the weak points in the justification will have unknown answers, may yield inconclusive results and are least likely to have statistical power – while they have the potential to add knowledge to the discipline they are risky for the individual.

Note that this risk is the opposite of ”nobody has done this, let us try it” experimentation. Instead it is the systematic exploration of gaps in knowledge set within a context that highlights valid possibilities.

As a discipline we should not find ourselves in a position where good science and publication are at odds. This is bad for new researchers entering the discipline and it is bad for the discipline itself. So when reviewing work we should seek

  • (a) reasons why the issue/feature is as it is – that is, rational not just random ideas, systematic growth of the field; and equally important,
  • (b) reasons why the issue/feature need not be as it is – that is, not obvious, adding information to the field.

Note that these two together ensure that collectively we systematically explore gaps in knowledge.

4.3. Roles

The discussion has moved to criteria for good work, but within HCI there are many genres of work, so we need different criteria of judgement depending on the genre. Again this can be a real problem during reviewing of papers. A couple of years ago as a meta-reviewer I had to explicitly say that I was entirely discounting one of the reviews, because the review was effectively criticising the genre of work within HCI, not assessing it with respect to criteria within the genre. Often it is not so clear and it is easy to let one’s own general opinions about the most appropriate approach (experimental, ethnographic, formal) colour the judgement of a particular piece of work. Blandford (2007) warns reviewers to judge research on its own merits, not ”would I have done it the same way”.

Now this is not to say that there is no place to critique and debate the validity of particular genres or approaches to work and assess their strengths and weaknesses when applied to particular problems. It is just that we should debate the validity of the genre as a whole within the discipline; and the validity of a given piece of work within its genre, so long as that genre is accepted as valid within the discipline and is applied within its understood bounds.

In the UK, the HCI community has noticed this is particularly problematic when it comes to reviews for projects and grants within HCI. It is very hard to get across-the-board support from reviewers to say a piece of work is of the best quality; someone will have something negative to say. If an HCI project is then viewed amongst those from different areas where the reviewing is more consistent (whether positive or negative), the best HCI projects will lose out compared to the best from those other areas. Now this is partly because the quality criteria within HCI are soft and less clear than in some areas, but partly because at least one of the reviewers may not like the general approach/genre.

There is certainly a need for discussion of the value of particular approaches and establishing new ones, but that should be a separate discussion. Furthermore, we need to think explicitly about these different approaches, techniques or genres and the criteria appropriate for each. This makes it easier to assign appropriate people to review work – and the recent CHI Conference subcommittees are an excellent move in this direction. Furthermore, if we are aware of these different criteria we can more easily say ”personally I don’t like this style of work, but within its genre it is strong”.

Just as there are different genres of work, there are different roles that we may take within HCI research.

Imagine a physics paper that started off with some experiments at CERN, then performed group-theoretic analysis of superstring theory, and finally applied the results to the design of a vacuum cleaner. This is clearly risible. But HCI papers are often like this, and furthermore expected to be: a little bit of theory, build a toy system, run some experiments, analyse the results, give implications for design. Now this can sometimes be done well, so it is not that we should never have work like this, but surely it should be more common to have different aspects of this work performed by those that do them best, rather than expecting every paper to have a bit of everything?

HCI as an academic discipline (and maybe science) will develop most strongly if we can understand how different parts fit together and allow people and teams to focus where their core strengths lie.

I can think of three broad roles (although I am sure there are more):

  • generating ideas and theories,
  • developing systems and designs,
  • performing empirical studies.

Table 1 and Fig. 6 list some of the different criteria for each role; although in the process of delineating these criteria, empirical studies divide in two because the actual gathering of data may need different expertise from its analysis. This was certainly part of the origins of ethnography – report with as little interpretation as possible in order that someone else can interpret later.

  • Ideas & theories: clarity & parsimony; adequacy of explanation; ability to feed into experiment, design, more theory.
  • Systems and designs: rationale; novelty (useful); critical appraisal (of novelty); availability for future research.
  • Empirical studies, data gathering (experiment, study, ethnography): clarity of situation; provenance; availability of data for further analysis.
  • Empirical studies, data analysis (theoretical, inductive, statistical): soundness; lack of bias; suitability for meta-analysis.

Table 1 Roles and criteria.

Fig. 6. Roles in HCI research.

Comment 31

Readers might like to consider some relationship between Dix’s concept of roles and Long’s concept of validation (1996). For example, between ‘ideas and theories’ (conceptualisation); ‘system and design’ (operationalisation); ‘empirical studies’ (test); and ‘ideas and theories’ (generalization).

 

 

Also, the parlous state of statistics noted by Cairns (2007) is no doubt in part due to this ‘do it all’ approach. In medicine there are special medical statisticians who are not medics themselves but do the statistics, because the medics themselves do not expect to be able to do this.

Within each role there are criteria that are more to do with the internal coherence of the work, but, at least as important, there need to be criteria about making sure the work contributes to the bigger picture of the discipline as a whole.

  • If I have a new theory or framework; is it expressed clearly enough so that someone else can apply it to their new design, or analysis of experimental results?
  • If I have constructed a new system that embodies some idea; is it available so that other researchers can deploy it for long-term study, or use it in an experiment?
  • If I have gathered some ethnographic data; have I obtained sufficient consents and described my data gathering techniques well enough so that the raw data can be made available for others to study in different ways?
  • If I have performed some statistical analysis on an experiment; have I presented the results in a way that others can interpret and possibly perform meta-analysis?

The web was developed so that physicists could share data. We need to develop HCI so that we share data, systems and results equally easily, so that we can properly use each individual’s expertise and skills to build a coherent discipline that is greater than any of us.

Long (1991) emphasises the importance of a discipline having an accepted ontology, and Long (1996) effectively develops a framework providing just such a high-level ontology. This is essentially about a common language in order to communicate clearly. Sutcliffe’s (2000) proposals for reuse of HCI knowledge, and the work on interaction design patterns (Tidwell, 2009, 2005; van Welie, 2009), instead seek to create common formats for sharing knowledge. In fact it is not essential that we all share a single language for all our working and reporting, nor that we understand fully one another’s methods. Within our sub-areas of HCI we can use our own esoteric languages, but the core outputs of our work need to be communicated clearly to enable others to build on them.

Comment 32

It is difficult for HCI researchers to build on each others’ work, if there is no agreement about what is an HCI design problem and so what is an HCI design solution. The claims of (reliable) HCI knowledge, for example, as models and methods, cannot be tested for their effectiveness and so their validity.

 

It is not acceptable for an ethnographer to tell a technologist to go read Garfinkel, nor for me, with a PhD thesis originally about ”Formal Methods and Interactive Systems”, to tell an ethnographer to go read Gauss. Given the very broad nature of HCI, there may even be special roles for those who present the outcomes of one sub-area of HCI to others, a form of internal education within the discipline. Possibly we need suitable reward mechanisms, such as special high-profile venues for such communication works, in the way ACM Computing Surveys has served computing as a whole.

5. The changing face of HCI

Of course HCI is changing as computer technology changes and these changes will require yet deeper considerations of the way we interact together as an academic community and discipline. These changes also demand that we question more profoundly accepted methodological practice. Most readers will be aware of the rapid rate of change in recent years, but this section briefly reviews these changes, before looking at a specific case study of the way some of these changes forced methodological reflection.

The birth of HCI as a discipline was around the same time as the introduction of the desktop computer, and it became hard to consider interfaces that were not WIMP-based GUIs. This was to some extent a breaking free from the computer in the machine room, but it only got as far as the desktop, and there it stayed for nearly 20 years. Indeed, Buxton (2001) wrote:

”In the early 1980s Xerox launched Star, the first commercial system with a Graphical User Interface (GUI) and the first to use the ”desktop” metaphor to organise a user’s interactions with the computer. Despite the perception of huge progress, from the perspective of design and usage models, there has been precious little progress in the intervening years. In the tradition of Rip van Winkle, a Macintosh user from 1984 who just awoke from a 17-year sleep would have no more trouble operating a ”modern” PC than operating a modern car” (Buxton, 2001).

However, over the last 5–10 years (with plenty of preliminary research work before), we have seen the role of the computer change dramatically in society.

With mobile and ubiquitous computing and tangible interfaces, the computer physically escapes the desktop into the outside world. This is not just the subject of research for the future, but day-to-day reality for all of us. I often ask people ”how many computers in your house” and most still say two or three or maybe (if they are very ‘nerdy’) more, but rarely do people remember their microwaves and HiFi, central heating and washing machines. Even our body load of computation is substantial. A few years ago I emptied my own pockets during a masters seminar and found four clear computers without anything particularly nerdy: (1) a mobile phone; (2) car keys, which include some form of coding processing for the remote locking; (3) a film camera, but with an LCD screen; and (4) the now ubiquitous chip in a credit card.

The Internet has also seen the computer escape the desktop virtually (Fig. 7). While the growth of corporate networking led to the development of CSCW as an area, the Internet has been crucial in establishing collaboration that cuts across organisational boundaries and enters the home. It is also hard to believe that 10 years ago there was no Google. Equally it is easy to forget how un-ubiquitous universal Internet access is. In the 1998 business plan for aQtive, one of the dotcom companies I was involved with, we talked about designing products ready for the coming PopuNet (Dix, 1998) – the network for everyone, everywhere and everywhen. At the time this was just beginning to be a reality at the office desk, but in the home a slow and expensive dial-up connection was the best one could hope for. The ‘everywhen’ was particularly critical – continuous, not just continual; not just available (anytime), but always there; what is now termed ”always on”. For those with an iPhone in western cities this now appears to be the reality, but even within the UK, Europe or the US, just move a little into rural areas, sail out to the islands, or climb amongst the mountains, and connectivity becomes more broken. Now move further afield to the developing world and we can see how easy it is to overestimate the universality of the Internet and correspondingly, albeit unwittingly, design to divide.

Amongst many things that have changed with the growth of the web is that much software that was a product has become a service. Although I still use an email client installed on my computer (because, whilst travelling a lot, I am not ‘always on’), many only use web-based email services. Now we also have online word-processing, spreadsheets and more. If you buy an expensive hair styling kit, then you will continue to use it even if you find flaws, but if you visit a hairdresser and do not like the service or style, then the next time you go to a different one. Shrink-wrapped products allow you one choice point maybe every few years, but services allow near continuous choice. From a business perspective for shrink-wrapped software you can ‘get away’ with bad usability and poor user experience, so long as you have good marketing; the customers have already paid their money by the time they find out it is rubbish! In a service-based world, usability and user-experience become key to success.

As we can see, these technological changes lead to changes in the environment within which HCI works, not just because we have different hardware to play with, but because recent technological change has had a major impact at a commercial and social level. Sometimes technology is servant to social change, perhaps ignored or resisted. However, at various points technological changes have made a radical difference to social order. For example, the invention and adoption of the stirrup not only revolutionised mounted warfare, but was also a driver for the whole feudal system (White, 1968). While wanting to avoid simplistic technological determinism, it is also clear that there are major societal, cultural and even cognitive changes that we need to recognise for their impact on research and practice in HCI as well as for their broader political and ethical import.

One of the aspects that is obvious is the increasing focus within HCI on user experience. The physical movement of computers out of the office into the home and into our hands, as well as the domestication of the web, means that the old utilitarian ‘efficiency and effectiveness’ now have to play second fiddle to ‘satisfaction’. One of the amazing things about the numerous ethnographies of the home and of leisure (which are both methodologically harder than work ethnographies) is just how complex day-to-day life is (De Certeau, 1984). The industrial revolution, Taylorism, and the continuing need to deal with staff turnover have led to a largely controlled and ‘normalised’ workplace with personal differences minimised. Of course workplace studies constantly show how much fluid working depends on various adaptations, but always set against and located within a framework of order. In contrast the home has never needed the same levels of uniformity except those established externally by work and society.

The domestication of technology is nowhere more apparent than in Web 2.0 with its focus on user-contributed content and social networking (O’Reilly, 2005). Again it is interesting to look back to the dotcom days, less than a decade ago. In 1999, when working on a new product/service, vfridge, we articulated the idea of the web sharer (Dix, 1999). At the time many were saying that there would be a shake-up in the web world, with DIY home pages withering and all the traffic going to a tiny handful of sites (Yahoo!, AOL, Amazon) principally operating in a publication or broadcast manner, like TV except with easier ways to buy things. Now this seems laughable, similar to the (misquoted) early predictions that five computers would be enough for the whole world, but at the time it was becoming accepted wisdom. In contrast we sought to design products for the ‘web sharer’: ordinary people sharing with one another.

Everyone may be a web sharer—not a publisher of formal public ‘content’, but personal or semi-private sharing of informal ‘bits and pieces’ with family, friends, local community and virtual communities such as fan clubs.

This is not just a future for the cognoscenti, but for anyone who chats in the pub or wants to show granny in Scunthorpe the baby’s first photos.

Fig. 7. After 25 years chained to the desktop, the computer breaks free (images courtesy Matt Oppenheim).

— the web sharer vision (Dix, 1999)

There were probably others voicing similar ideas, but, like continental drift in the 1960s, this was against the prevailing wisdom of the time and so was difficult to publish then and is hard to trace now. The crucial thing is that in 10 years what was completely counter-cultural has become passé. Given these rapid and substantial changes in the technical and social context of HCI, there is even greater need to re-examine methods and, if necessary, modify them or develop new methodology.

6. Case study: a single person study

Within this setting of changes in HCI we will focus on the PhD research of Fariza Razak on the use of single subjects in research and design. The purpose of this section is not to present the work in full, but rather to use it as a case study to illustrate the need for methodological thinking to address new kinds of issues. Because of this, only sufficient detail is presented for the purposes of illustrating the general issues; for more on Razak’s (2008) work, see her thesis ”Single Person Study: Methodological Issues”. Interestingly, while studying just one user initially seems to stand against all accepted HCI practice, we shall in fact see that single-user studies of various forms are common in many disciplines.

6.1. Background

Razak started out interested in mobile user experience, and in particular looking towards mobile learning. As a preliminary exercise she conducted a small study asking a handful of people about their use of mobile technology. As anyone who has done this sort of study knows, it is very hard to get beyond the banal, learning only what you knew before you started. You ask questions and people tell you the answers you could have predicted; the difficulty is finding the less obvious questions that will lead to new knowledge and really grow the field.

In the initial study one of the participants stood out as unusual. She said she rarely used her mobile phone and yet other answers seemed to suggest the opposite, for example referring to use of time organisation features that others hardly mentioned. Clearly, her initial answer was about the ‘normal’ use of the phone as voice communication, but it was more to her than that.

Because this subject was unusual and different, I suggested Razak spend some time investigating her in more detail. Little did either of us know at the time that this would become the key focus of Razak’s PhD work.

Note that the subject was chosen because she was in one respect an extreme, an outlier, outside the average. When analysing experimental results, outliers are often removed for statistical purposes, ignored as anomalous or extreme. Here, instead, the outlier was chosen precisely because she was one. As an academic I always find extremes valuable: partly because the abnormal, or extra-ordinary (strange how the words have different connotations), is just more interesting in itself, but also because it casts light back onto the ordinary, showing us things that are often tacit and unnoticed. Djajadiningrat et al. (2000) also describe how extreme characters helped expose aspects of use, especially ‘undesirable’ emotions and character traits, that more ‘normal’ personas and scenarios may hide.

Estrangement, the ability to see the world from a different perspective, is of great value in uncovering the hidden dimensions of the quotidian, and I challenge students to have deliberate bad ideas (Dix et al., 2006). Similarly, Merleau-Ponty (1945) writes ”in order to see the world and grasp it as paradoxical, we must break with our familiar acceptance of it”, and schraefel and Dix (2009) ask chemists to make cups of tea. Comedians are particularly good at this, seeing the oddness in the everyday, highlighting things that we recognise in ourselves that are slightly embarrassing or just strange when we look at them. Indeed I have often suggested that students look up humorous books about their domain of study, as they may learn more from a comic’s eye than from many an academic study. The best ethnographers also seem to have this ability to see the normally overlooked details of a situation. Garfinkel (1967) used ‘breaching experiments’ with his students, getting them to break normal social conventions in order to see that they exist; like a car engine, one is unaware of the parts until they fail.

This simple decision to study a single unusual subject started Razak down the route of pursuing this single person study as the central focus of her work and her thesis became not one about mobile experience or mobile learning, but a methodological account of the issues surrounding the use of a single person for research and design.

6.2. The first text

One of the early steps along this path was a diary study of Razak’s subject. When we discussed the results of the study, the words of the very first entry leapt off the page.

This first text read

Dear God Don’t need lots of frens! As long as real ones stay with me, so bless them all, especially the sweetest one reading this.

and the subject’s comment (her emphasis):

this SMS MADE MY DAY!

Research on SMS behaviour often discusses its use for intimate communications ”thinking of you”, ”love you” (Gamberini et al., 2004; Spagnolli and Gamberini, 2007). However, this message was something slightly, but significantly, different. The message here was in a way less personal; it was representative of a particular type of message: often small quotes of a devotional or otherwise encouraging nature, from friends, but not necessarily from her husband, or close family. John Rooksby (personal communication) described them as messages that need no reply. They are sent to encourage but not to establish communication in any interactive sense and certainly not to ‘communicate’ in an information sense – they are more gifts of thought.

Perhaps the closest thing in the physical world are the little cards or bookmarks that have poems, sayings or prayers written on them, often surrounded by flowers . . . a world away from the design studio with its austere black-robed occupants.

This single text message and the reaction it caused fundamentally changed our view of the use of the mobile phone.

But can studying a single user in this way contribute to theoretical HCI research or practical interaction design?

6.3. Research from single-user studies

At first the idea of studying a single user runs counter to common academic sense. Surely we need to study many users to stand any chance of generalising results? However, it turns out that this is not so uncommon in other disciplines, such as special education or studies of neurological deficit. For example, Battro (2001) studied the development of a child who had only a single brain hemisphere due to a congenital brain defect. Similarly, Damasio (1994), in building up his understanding of the role of emotion in human reason, draws heavily on documentary evidence of Phineas Gage, who in 1848 suffered a traumatic brain injury leaving him with full intellectual capabilities but a severe emotional deficit. In psychology there have been well-respected uses of single-subject studies, and even in HCI, ethnographies are typically of a single situation (even if it includes several people) and, as we have seen, experiments often use a single application or piece of software (even if they have many subjects).

Furthermore, the study of a single user brings particular benefits. As often found in ethnographic studies, rich empirical data reveals new issues . . . in this case the very first text! Furthermore, studying a single user in depth allows the researcher to build up a deep personal rapport with the subject and hence make sense of what would otherwise be irrelevant or meaningless aspects of the data.

In fact discovering novelty only needs one example, like a botanist discovering a new flower; a single specimen shows that the new species exists. Of course a different person at a different time in a different place would find a different flower. Studying a single person is not the way to find all the important issues, or establish how common a particular issue is, but the depth may be a good way to find new usage phenomena.

However, this still leaves us with the question: having found a new phenomenon, how common and critical is it? In other words, how do we generalise? In an empirical study, if the sample of users is wide enough (not just psychology or CS students!), then we assume that if an issue is common in the sample it is common in the population. Of course, we can use new insights from any method, including those from studying a single user, to drive empirical work of this kind. However, in the case of the initial text message, extensive empirical studies were unnecessary for us to recognise that this was something we would expect to see elsewhere; not for everyone (and maybe least for UK CS undergraduates, whom we might have studied in a larger-scale survey), but at least for particular kinds of people and communities. That is, we were able to generalise by reasoning, based on the data we had seen, knowledge of other research work, our own personal experience and, not least (albeit much undervalued in academia), common sense!

Generalisation through reasoning is again common in other areas, for example semiotics and mathematics, and is typically based in deduction or abduction rather than induction as used in reasoning from voluminous empirical data. However, possibly drawing on my mathematical roots, I would like to make a stronger claim:

generalisation never comes (solely) from data.

Instead, generalisation always comes through understanding. Even when we have copious data, the knowledge that we have chosen representative groups, the level of extrapolation we choose to make from the experimental tasks, or the belief in the methods are all matters of judgement. We generalise with our heads not our senses.

6.4. Designing for a single user

Akin to the research question of how we obtain knowledge from a single user is the practical one of whether we can use a single user in design. We all know that ‘five users is enough’. . . but one!

So as an experiment Razak attempted to design an application especially for that single user. Having got to know this individual intimately, what would be perfect for that single person? With a single user it is possible to spend sufficient time to collaboratively co-design in a way that is tuned to the specific lifestyle, abilities and personality of the user. Having done this, one can then ask whether the application would work for others and maybe do more traditional user testing. This hyper-tuned application may then form the start of a slightly more generalised application with wider appeal.

We do not expect such a perfectly tuned application to be liked by everyone, indeed often the opposite. In the case of Razak’s subject the application periodically texted uplifting messages, and one test user clearly found some of the messages simply annoying. However, a surprising number of other users did find it engaging.

Again, like single-user studies in other disciplines, one does not have to go far to find areas where taking 100, 20 or even 5 users would seem like overkill . . . indeed many designers find no users sufficient (although sometimes this is apparent).

In fact Nielsen and Landauer (1993) calculated the ”five users” figure based on a cost-benefit trade-off between the number of faults found with N users, the costs of performing the test with them, and the costs of a prototyping cycle. The calculation also took into account the fact that the number of new usability problems found with each additional user drops, due to overlapping faults between users. If the costs of prototyping are high compared with the costs of usability tests, then it is worth doing more usability tests in each iteration cycle; if prototyping is cheaper or usability testing more expensive, then tighter cycles are optimal, with fewer users tested per cycle. Nielsen and Landauer measured actual prototyping costs in a number of projects, compared these with actual error rates and usability test costs, and it was these empirical figures, derived using 1993 technology and applications, which gave rise to the now ubiquitous ”five is enough”.
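The diminishing-returns reasoning above can be sketched in a few lines. The detection rate of roughly 0.31 per user is the average figure Nielsen and Landauer reported, but the value and cost numbers below are invented purely for illustration:

```python
# Sketch of the cost-benefit reasoning behind "five users is enough"
# (Nielsen & Landauer, 1993). L ~ 0.31 is the average problem-detection
# rate they reported; the value/cost figures are illustrative assumptions.

def problems_found(n, L=0.31):
    """Expected fraction of usability problems found by n test users,
    assuming each user independently reveals a fraction L of them."""
    return 1 - (1 - L) ** n

# Diminishing returns: each extra user finds ~31% of what remains,
# so the yield per additional user shrinks geometrically.
marginal = [problems_found(n) - problems_found(n - 1) for n in range(1, 9)]

def net_benefit(n, value_per_problem=100, cost_per_user=5, cost_per_cycle=50):
    """Value of the problems found in one test cycle, minus the testing
    and prototyping costs of that cycle (all in made-up units)."""
    return (problems_found(n) * value_per_problem
            - n * cost_per_user - cost_per_cycle)

best_n = max(range(1, 16), key=net_benefit)  # optimal users per cycle
```

The point is not the specific numbers but the shape of the curve: beyond a handful of users the marginal benefit of another test no longer covers its marginal cost, which is exactly the trade-off the empirical 1993 figures quantified.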

Of course prototyping costs are now substantially smaller than they were in 1993, and if the costs of prototyping are low enough, then the optimal point may not be five but even a single user per cycle. This is precisely the approach taken in Marty and Twidale’s (2004, 2005a,b) ”extreme evaluation”, where short usability tests are carried out with a single person.

While extreme evaluation only evaluates with a single person per prototyping iteration, successive iterations will typically involve different people. Furthermore, the software being designed was not designed ‘for’ a single individual, just tested with one user. In contrast, Razak developed an application specifically optimised for just one user, although our expectation was that this would in fact lead to a concept with wider appeal. This form of single-user designing would not be good for all kinds of application or product, but is particularly useful when designing for peak experience.

6.4.1. Designing for peak experience

Imagine you have a group of children and want to give them lunch. In the UK you might well choose baked beans. Not the most exciting choice, but few children actively dislike baked beans; they are acceptable to everyone. However, give each of those children a euro (or maybe two) in a sweet shop . . . they will all come away with a different chocolate bar, the chocolate bar that is ‘OK’ for everyone gets chosen by none. Or imagine choosing a menu for a wedding dinner … maybe chicken with a bland sauce … something nearly everyone will eat, but few people would choose for themselves in a restaurant.

Much of traditional HCI design is like baked beans – a word processor installed for the whole company, a mail program used by every student, good enough for everyone. However, increasing personal choice, especially for web-based services, makes design more like the chocolate bar; different people make different choices, but what matters is that the product chosen is not ‘good enough’ for all of them, but best for some.

This designing to be best for some, the chocolate bar rather than the baked beans, results in a product for peak experience. Fig. 8 shows schematic profiles of three imaginary products. The horizontal axis represents different people/users and the vertical axis represents their level of satisfaction or experience. There is one ‘good enough’ product, which offers a consistent but mediocre experience. There are also two ‘peak’ products that offer high satisfaction for a few users and low satisfaction for others. Note that with sufficient peak products the good-enough product will never be chosen.
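The claim that the good-enough product is never chosen can be made concrete with a toy model (the satisfaction numbers and the three-way split of tastes are invented purely for illustration):

```python
# Toy model of Fig. 8: one 'good enough' product gives everyone mediocre
# satisfaction, while each 'peak' product delights a subset of users and
# leaves the rest cold. All numbers are made up for illustration.

def satisfaction(user, product):
    if product == "good_enough":
        return 0.6                                  # acceptable to everyone
    return 0.9 if user % 3 == product else 0.2     # best for some, poor for most

products = ["good_enough", 0, 1, 2]  # one bland product + three peak products
choices = [max(products, key=lambda p: satisfaction(u, p))
           for u in range(100)]

# Every user has some peak product that beats 0.6, so the
# consistent-but-mediocre option is never anyone's first choice.
assert "good_enough" not in choices
```

As soon as every user has at least one peak product that beats the mediocre one, the ‘baked beans’ option drops out entirely, even though its average satisfaction across all users is perfectly respectable.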

Traditional user-centred interface design may use user profiles, or personae chosen to be representative of a group as a whole, with a focus on the typical or the average – good for all. We typically move from identified user needs to interaction solutions with an emphasis on method and processes that ensure usability.

In contrast, designing for peak experience may need a stronger focus on the individual user, possibly extreme personae, focusing on the specific and eclectic – best for some. Often the move is from concept to use with an emphasis on novel ideas and inspiration.

Some years ago I was on a panel at ECCE with Jon Sykes (2004) from the group at Glasgow Caledonian studying games and emotion. He was asked about the processes used by video game designers, and many in the audience were shocked at the apparently ad hoc and non-user-centred way in which the designers have ideas, discuss them amongst themselves and only very late in the process submit them to user testing. But this is exactly what one would expect in designing for peak experience; a good enough video game will be bought by no-one.

Similarly, many of the most successful Web 2.0 sites such as Facebook and del.icio.us started out being for the designers and their friends; tuned to a small group or even a single individual. We would normally castigate designers who design for themselves, but somehow, in spite of that, or maybe because of that, they are successful.

Interestingly, even the computer language of choice of many of these sites, PHP, was originally developed by one person for his own home page (PHP, 2009).

Now this is not to say that there is no role for traditional HCI practice. There are many products that do need to be used by everyone (e.g. bank web sites), and even in Web 2.0 sites such as YouTube there are some aspects where traditional usability breaks down and experience dominates, but others, such as the uploading of videos, where standard usability remains crucial (Silva and Dix, 2007).

However, where individual choice and user experience dominate, we need to look increasingly at peak experience. Mash-ups, widgets and open-data allow large numbers of applications designed for smaller groups of intended users; indeed one of the defining features of Web 2.0 (O’Reilly, 2005) is this focus on the long tail of large numbers of web sites and web applications used by few people, as opposed to more traditional web applications aimed at mass use. We cannot expect that the vast army of mash-up builders will each employ a usability consultant; HCI for the long tail may need to consider how to build necessary aspects of usability into platforms or maybe even popularise HCI – the equivalent of ‘house makeover’ programmes on daytime TV.

7. Bringing it together

The single person study was introduced as a case study to show the importance of clear methodological reflection. At first it appears to flout the community conventions for effective HCI. However, by understanding methodology we were able to recognise similar methods in other disciplines and within HCI, and also to see how it could be used effectively as part of research and design. Note that this is not to promote single-user studies above other techniques; but if this and other methods can be understood methodologically, they can be applied where they are appropriate and give value, depending on the context.

Note too that the adoption of the single user study was because we were addressing an issue at the changing boundaries of HCI. As a textbook writer I am always interested in what changes and what does not change between revisions. The things that have hardly changed in 15 years are likely to still be of value in another 15 years, but the things that changed in the last revision are likely to change again. We need to look particularly carefully at the new or changing things, so that we see the things of lasting value and are not simply swimming with fashion.

Fig. 8. Profiles of a ‘baked beans’ good-enough-for-all product vs. two products offering peak experience for some.

Just as taking the extreme user helped us to understand ‘normal’ use, so also as we look at new areas of technology, they help us understand afresh the old. Often the lens of unfamiliarity helps us explore the heart of things.

Others exploring the extremes of HCI have also needed to think seriously about methodology and how to adapt it to the circumstances of their work, for example, Button and Dourish (1996), with technomethodology; McCarthy and Wright (2004), in designing for experience and enchantment; and Gaver (2007), in ‘polyphonic assessment’ of designs for everyday life.

The danger of establishing new methodology and potentially new vocabulary and theory is that we further fragment the genres and roles within HCI. How do we avoid a community of interest becoming a cabal?

This takes us back to the three challenges at the heart of this paper. We do not need to establish a common ontology or model for all of HCI, a single method that we all use; we do not even need to understand the details of the language, methods and theories within all of our sub-communities. However, we do need to ensure that we understand the genres of work and the roles they play within a coherent discipline, we do need to ensure that the methods used within each role and genre are coherent, and above all we need to ensure that the results of each genre (not necessarily the full arguments and methods) are communicated in a way that is accessible to others in the wider community.

To be an academic discipline is about community, but not just any community, a community that establishes clear knowledge and together learns.

Acknowledgements

Many thanks to Aaron Quigley, Gavin Doherty and Liam Bannon for inviting me to share in the inaugural celebration of SIGCHI Ireland on which this paper was based; and to everyone who attended the presentation and chatted after and before that talk.

Special thanks also to Fariza Razak whose work I have referred to extensively, to Matt Oppenheim (alias hardware monkey) for his wonderful cartoons of the computer breaking free, and to Fiona Dix for proofreading. Thanks also to the critical insights of the anonymous reviewers of this paper and to the special issue editors.

References

Battro, A., 2001. Half a Brain is Enough: The Story of Nico. Cambridge Studies in Cognitive and Perceptual Development, vol. 5. Cambridge University Press.

Blandford, A., 2007. HCI research and quality: discussion document. In: UCLIC/Equator Two-day Workshop on The Future of HCI in the UK: Research and Careers, 14–15th June, 2007. Loughborough University. <http://www.uclic.ucl.ac.uk/projects/future-uk-hci/>.

Blandford, A., Hyde, J., Green, T., Connell, I., 2008. Scoping analytical usability evaluation methods: a case study. Human–Computer Interaction 23, 278–327. doi:10.1080/07370020802278254.

Brand, S., 1994. How Buildings Learn: What Happens After They’re Built. Viking Press.

Button, G., Dourish, P., 1996. Technomethodology: paradoxes and possibilities. In: Tauber, M. (Ed.), Proceedings of CHI ’96, The SIGCHI Conference on Human Factors in Computing Systems: Common Ground (Vancouver, British Columbia, Canada, April 13–18, 1996). ACM, New York, NY, pp. 19–26. <http://doi.acm.org/10.1145/238386.238394>.

Buxton, W., 2001. Less is more (more or less). In: Denning, P. (Ed.), The Invisible Future: The seamless integration of technology in everyday life. McGraw Hill, New York, pp. 145–179.

 

Cairns, P., 2007. HCI. . . not as it should be: inferential statistics in HCI research. In: Ball, L., Sasse, M., Sas, C., et al. (Eds.), Proc. of HCI 2007, vol. 1. BCS, pp. 195–201.

Clark, H., 1996. Using Language. Cambridge University Press, Cambridge.

Clark, H., Brennan, S., 1991. Grounding in communication. In: Resnick, L., Levine, J., Teasley, S. (Eds.), Perspectives on Socially Shared Cognition. American Psychological Association, Washington, pp. 127–149.

Damasio, A., 1994. Descartes’ Error: Emotion, Reason and the Human Brain. Putnam Publishing (paperback: Penguin, 2005).

Dawkins, R., 1976. Memes: the new replicators. In: The Selfish Gene. Oxford University Press, London (Chapter 11).

De Certeau, M., 1984. The Practice of Everyday Life (trans. S. Rendall, University of California Press, Berkeley, 1984). L’invention du Quotidien, vol. 1, Arts de Faire, 1980 (in French).

Diaper, D., 1989. The discipline of human–computer interaction. Interacting with
Computers 1 (1), 3–5.

Diaper, D., Sanger, C., 2006. Tasks for and tasks in human–computer interaction.
Interacting with Computers 18 (1), 117–138. doi:10.1016/j.intcom.2005.06.004.

Dix, A., 1998. Hands across the screen – why scrollbars are on the right and other stories. Interfaces 37 (Spring), 19–22. <http://www.hcibook.com/alan/papers/scrollbar/>.

Dix, A., 1998. Sinister scrollbar in the Xerox Star xplained. Interfaces 38 (Summer), 11 (short update to the above article). <http://www.hcibook.com/alan/papers/scrollbar/scrollbar2.html>.

Dix, A., 1998. PopuNET: Pervasive, Permanent Access to the Internet. eBulletin,
aQtive Ltd. <http://www.hiraeth.com/alan/ebulletin/PopuNET/PopuNET.html>.

Dix, A., 1999. The Web Sharer Vision. eBulletin, aQtive Ltd., November 1999.
<http://www.hiraeth.com/alan/ebulletin/websharer/>.

Dix, A., 2002. Teaching innovation. Excellence in Education and Training Convention, 17th May 2002. Singapore Polytechnic. <http://www.hcibook.com/alan/talks/singapore2002/>.

Dix, A., 2003. Upside down As and algorithms – computational formalisms and theory. In: Carroll, J. (Ed.), HCI Models, Theories and Frameworks: Toward a Multidisciplinary Science. Morgan Kaufmann, San Francisco, pp. 381–429 (Chapter 14). <http://www.hcibook.com/alan/papers/theory-formal-2003>.

Dix, A., 2004. Controversy and provocation. In: Proceedings of HCIE2004, The 7th Educators Workshop: Effective Teaching and Training in HCI, 1st and 2nd April 2004, Preston, UK. ISBN 0-9541927-5-3. <http://www.hcibook.com/alan/papers/HCIE2004/>.

Dix, A., 2004. European HCI theory – a uniquely disparate perspective. In: European HCI Research Special Area, CHI 2004, Vienna, Austria, 24–29 April 2004. <http://www.hcibook.com/alan/papers/chi2004-euro-theory/>.

Dix, A., 2008. Human–Computer Interaction in the Early 21st Century: a Stable Discipline, a Nascent Science, and the Growth of the Long Tail. SIGCHI Ireland Inaugural Lecture, 2nd December 2008. Trinity College, Dublin. <http://www.hcibook.com/alan/talks/Dublin-2008/>.

Dix, A., 2008. Theoretical analysis and theory creation. In: Cairns, P., Cox, A. (Eds.), Research Methods for Human–Computer Interaction. Cambridge University Press (Chapter 9).

Dix, A., Finlay, J., Abowd, G., Beale, R., 2004. Interaction design basics. In: Human– Computer Interaction, third ed. Prentice-Hall (Chapter 5).

Dix, A., Ormerod, T., Twidale, M., Sas, C., Gomes da Silva, P., McKnight, L., 2006. Why bad ideas are a good idea. In: Proceedings of HCIEd.2006-1 Inventivity, Ballina/Killaloe, Ireland, 23–24 March 2006. <http://www.hcibook.com/alan/papers/HCIed2006-badideas/>.

Dix, A., Gill, S., Ramduny-Ellis, D., Hare, J., 2009. Design and physicality – towards an understanding of physicality in design and use. In: Designing for the 21st Century: Interdisciplinary Methods & Findings. Gower.

Djajadiningrat, J., Gaver, W., Frens, J., 2000. Interaction relabelling and extreme characters: methods for exploring aesthetic interactions. In: Boyarski, D., Kellogg, W. (Eds.), Proceedings of DIS2000 Designing Interactive Systems: Processes, Practices Methods and Techniques (New York 17–19 August 2000). ACM Press, New York, pp. 66–71.

Dourish, P., 2006. Implications for design. In: Grinter, R., Rodden, T., Aoki, P., Cutrell, E., Jeffries, R., Olson, G. (Eds.), Proceedings of CHI ’06, The SIGCHI Conference on Human Factors in Computing Systems (Montreal, Quebec, Canada, April 22–27, 2006). ACM Press, New York, pp. 541–550. doi:10.1145/1124772.1124855.

Eisenberg, B., 2004. Debunking Miller’s Magic 7. ClickZ, October 29, 2004. <http://www.clickz.com/3427631>.

Ellis, G., Dix, A., 2006. An explorative analysis of user evaluation studies in information visualisation. In: Proceedings of the 2006 Conference on Beyond Time and Errors: Novel Evaluation Methods For information Visualization (Venice, Italy, May 23–28, 2006). BELIV ’06. ACM Press, New York, pp. 1–7. <http://www.hcibook.com/alan/papers/beliv06-evaluation/>.

Furniss, D., Blandford, A., Curzon, P., 2007. Usability evaluation methods in practice: understanding the context in which they are embedded. In: Proceedings of the 14th European Conference on Cognitive Ergonomics: Invent! Explore! (London, August 28–31, 2007), ECCE ’07, vol. 250. ACM Press, New York, pp. 253–256. <http://doi.acm.org/10.1145/1362550.1362602>.

Gamberini, L., Spagnolli, A., Pretto, P., 2004. Temporal structure of SMS-mediated conversation. In: Time Design Workshop, CHI 2004, Vienna, April 25, 2004.

Garfinkel, H., 1967. Studies in Ethnomethodology. Prentice-Hall, Englewood Cliffs.

Gaver, W., 2007. Cultural commentators: non-native interpretations as resources for polyphonic assessment. International Journal of Human–Computer Studies 65, 292–305.

Harper, R., Rodden, T., Rogers, Y., Sellen, A., 2008. Being Human: Human–Computer Interaction in the Year 2020. Microsoft Research, Cambridge. <http://research.microsoft.com/en-us/um/cambridge/projects/hci2020/>.

Johnson, J., Roberts, T., Verplank, W., Smith, D., Irby, C., Beard, M., Mackey, K., 1989. The Xerox Star: a retrospective. Computer 22 (9), 11–26, 28–29. <http://dx.doi.org/10.1109/2.35211>.

Larson, K., Czerwinski, M., 1998. Web page design: implications of memory, structure and scent for information retrieval. In: Proceedings of CHI ’98 Human Factors in Computing Systems. ACM Press, pp. 25–32.

Long, J., 1991. Theory in human–computer interaction? IEE Colloquium on Theory in Human–Computer Interaction (HCI) (Digest No. 192), 17 Dec 1991. IEE, London, pp. 2/1–2/6. <http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=241136&isnumber=6182>.

Long, J., 1996. Specifying relations between research and the design of human– computer interactions. International Journal of Human–Computer Studies 44, 875–920.

Long, J., Dowell, J., 1989. Conceptions of the discipline of HCI: craft, applied science, and engineering. In: Sutcliffe, A., Macaulay, L. (Eds.), Proceedings of the Fifth Conference of the British Computer Society Human–Computer Interaction Specialist Group on People and Computers V (Univ. of Nottingham). Cambridge University Press, New York, pp. 9–32.

Marty, P., Twidale, M., 2004. Lost in gallery space: a conceptual framework for analyzing the usability flaws of museum Web sites. First Monday 9 (9).

Marty, P., Twidale, M., 2005a. Extreme Discount Usability Engineering. Technical Report ISRN UIUCLIS–2005/1+CSCW.

Marty, P., Twidale, M., 2005b. Usability@90 mph: Presenting and Evaluating a New, High-Speed Method for Demonstrating User Testing in Front of an Audience. First Monday 10 (7).

McCarthy, J., Wright, P., 2004. Technology as Experience. MIT Press, Cambridge.

Merleau-Ponty, M., 1945. Phénoménologie de la Perception. Gallimard, Paris (quote in text from: M. Merleau-Ponty, Phenomenology of Perception, K. Paul (trans.), Routledge, 1962, 2002).

Miller, G., 1956. The magical number seven, plus or minus two: some limits on our capacity for processing information. The Psychological Review 63, 81–97. <http://www.musanim.com/miller1956/>.

Nielsen, J., 2000. Why You Only Need to Test With 5 Users, Alertbox, March 19,
2000. <http://www.useit.com/alertbox/20000319.html>.

Nielsen, J., Landauer, T., 1993. A mathematical model of the finding of usability problems. In: Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems (Amsterdam, The Netherlands, April 24–29, 1993). ACM Press, New York, pp. 206–213. doi:10.1145/169059.169166.

O’Reilly, T., 2005. What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. O’Reilly Media, 30th September 2005. <http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web20.html> (accessed 23.06.07).

PHP Manual, 2009. A History of PHP. <http://gtk.php.net/manuall/en/html/intro.whatis.php.history.html> (accessed 01.05.09).

Popper, K., 1959. The Logic of Scientific Discovery. Basic Books, New York.

Razak, F., 2008. Single Person Study: Methodological Issues. PhD Thesis, Computing Department, Lancaster University, UK. <http://www.hcibook.net/people/Fariza/>.

Rogers, Y., 2004. New theoretical approaches for HCI. Annual Review of Information
Science and Technology 38.

Rouet, J.-F., Ros, C., Jégou, G., Metta, S., 2003. Locating relevant categories in web menus: effects of menu structure, aging and task complexity. In: Harris, D., Duffy, V., Smith, M., Stephanidis, C. (Eds.), Human-centred Computing: Cognitive, Social and Ergonomic Aspects, vol. 3 of Proc. of HCI International. Lawrence Erlbaum, New Jersey, pp. 547–551.
schraefel, m.c., Dix, A., 2009. Within bounds and between domains: reflecting on making tea within the context of design elicitation methods. International Journal of Human–Computer Studies 67 (4), 313–323.

Shackel, B., 1959. Ergonomics for a computer. Design 120, 36–39.

Silva, P., Dix, A., 2007. Usability – not as we know it! In: Proceedings of BCS HCI 2007, People and Computers XXI, BCS eWiC. <http://www.hcibook.com/alan/papers/HCI2007-YouTube/>.

Spagnolli, A., Gamberini, L., 2007. Interacting via SMS: practises of social closeness and reciprocation. British Journal of Social Psychology 22, 343–364.

Star, S., 1989. The structure of ill-structured solutions: boundary objects and heterogeneous distributed problem solving. In: Gasser, L., Huhns, M. (Eds.), Distributed Artificial Intelligence, vol. II. Morgan Kaufmann, San Mateo, pp. 37–54.

Sutcliffe, A., 2000. On the effective use and reuse of HCI knowledge. ACM Transactions on Computer–Human Interaction 7 (2), 197–221. <http://doi.acm.org/10.1145/353485.353488>.

Sykes, J., 2004. Presentation at panel on “Funology: A Science of Enjoyable Technology?”, M. Blyth (chair), ECCE-12 (St. Williams College, York, 12–15 September 2004).

Tidwell, J., 2005. Designing Interfaces: Patterns for Effective Interaction Design. O’Reilly.

Tidwell, J., 2009. Common Ground: A Pattern Language for Human–Computer Interface Design. <http://www.mit.edu/~jtidwell/interaction_patterns.html> (accessed May 2009).

van Welie, M., 2009. Welie.com: Patterns in Interaction Design, dated 2008. <http://www.welie.com/> (accessed May 2009).

White, L., 1968. Medieval Technology and Social Change. Oxford University Press.

 

Conceptualizing a possible discipline of human–computer interaction

 

John M. Carroll
Center for Human–Computer Interaction and College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA

John Long's Comment 1 on this Paper

When the editors first told me that Jack Carroll had contributed to the John Long Festschrift, I was delighted. First, for personal reasons. I have always liked Jack and taken pleasure in his company at numerous conferences and workshops. Second, for professional reasons. He is never less than serious about HCI and is always good value in discussions. His reputation in the field of HCI is second to none.

Not surprising, then, that in the early days of HCI, I invited him to join the Editorial Board of my HCI Series with Cambridge University Press. Nor that I strongly supported the publication of his book ‘Designing Interaction’ (1991).

I am still delighted, even having now read Jack’s Festschrift contribution, in spite of its characterization by one reviewer as a ‘damning critique……with venom in the pen’ of my 1989 HCI Discipline paper with John Dowell. Although not very celebratory in the usual sense, such may be the price of professional seriousness.

In contrast, Jack is generous (and celebratory) in his acknowledgment of our ‘stimulating debates’ over the years and their benefit to his motivation and thinking. He claims that I ‘always seemed ready for another round’. True then and now. Here goes……

Abstract

This essay is a personal reflection on John Long’s keynote address at the BCS People and Computers meeting in Nottingham in the summer of 1989. I try to locate the paper’s purpose and significance within the history of human-computer interaction (HCI), both prior to 1989 and subsequently, and particularly with respect to the abiding questions of what sort of enterprise HCI is, and of what sorts of knowledge it uses and produces.

Comment 2

Carroll is, of course, free to focus on a single paper (Long and Dowell, 1989) from Long’s 20 years of research in HCI (Dowell and Long (1989) is the only other paper referenced). The reader’s view of the Long and Dowell paper may be sharpened, as a result. However, there is also a risk that issues, raised by Carroll, have been addressed elsewhere by Long’s research. Such issues are addressed throughout the commentary, hopefully so contributing the better to Carroll’s consideration of ‘what sort of enterprise HCI is’ and ‘of what sorts of knowledge it uses and produces’.

 

1. Introduction: HCI’s first three decades

Comment 3

Section 1 is intended to characterize HCI prior to the publication of Long and Dowell (1989). As such, it provides the historical context for the consideration of their paper and developments of HCI since 1989. It is upbeat and largely uncritical, even appropriately celebratory, one might say (see ‘About this article’ – above). However, in general, the underlying argumentation and evidence for the assertions and judgments made are undeclared. Comments on this section, then, are limited to illustrating the latter point. This is not to say that the historical context fails to serve the purpose, claimed by Carroll, only that in-depth comment on each assertion would be misguided.

The general position, here, is that, in agreement with Carroll, the HCI community has grown; but, in disagreement with him, the HCI discipline has not made desired progress.

Human–computer interaction (HCI) is an area of research and practice that emerged in the early 1980s, initially as a specialty area in computer science. HCI has expanded rapidly and steadily for three decades, attracting professionals from many other disciplines and incorporating diverse concepts and approaches. To a considerable extent, HCI now aggregates a collection of semi-distinct fields of research and practice in human-centered informatics. However, the continuing synthesis of disparate conceptions and approaches to science and practice in HCI has produced a dramatic example of how different epistemologies and paradigms can be reconciled and integrated.

Comment 4

It is fair to say that different epistemologies and paradigms co-exist within HCI, as evidenced by a whole succession of conferences, whose platforms are shared by researchers having different approaches to HCI, for example, engineers, ethnomethodologists, psychologists and designers. However, the argumentation and the evidence that these different epistemologies and paradigms have been reconciled and integrated is undeclared; for example, respectively Card, Moran and Newell (1983) and Heath and Luff (2000), not to mention the three faces of interaction (Grudin, 2005).

Reconciliation and integration, here, is taken to include: (1) between knowledges; (2) between practices; and (3) between knowledges supporting practices (that is, exemplars – see Kuhn (1970) and Salter (2010)). Reconciliation and integration also assumes some criteria for their validation, for example, completeness, coherence and fitness-for-(design)-purpose (Long, 1997 and 2000). Epistemological and paradigmatic reconciliation and integration of different approaches to HCI can, of course, take different forms.

Denley and Long (2001 and 2010) distinguish four such forms: (1) ‘by concept’, for example, different HCI approaches having a common framework or theory; (2) ‘by product’, for example, different HCI approaches contributing different products, for example, ‘user requirements’ and ‘evaluation’, to the same system development process; (3) ‘by process’, for example, different approaches to HCI using each other’s methods, such as ‘task analysis’ and ‘grounded theory’; and (4) ‘by practitioner’, for example, practitioners having different approaches to HCI collaborating informally on the same system development. A (weak) form of reconciliation and integration of epistemologies and paradigms in HCI certainly occurs ‘by practitioner’, ‘by process’ and ‘by product’; but not by the (strong) form of ‘by concept’.

Carroll’s claim is unclear in this respect. His concern with ‘what sort of enterprise HCI is’ suggests forms (2) and (3). His concern with ‘what sort of knowledge it uses and produces’ suggests (1), that is, ‘by concept’, the one form of reconciliation and integration, which would appear not to have been instantiated.

 

Until the late 1970s, the only humans who interacted with computers were information technology professionals and dedicated hobbyists. This changed disruptively with the emergence of personal computing around 1980. Personal computing, including both personal software (productivity applications, such as text editors and spreadsheets, and interactive computer games) and personal computer platforms (operating systems, programming languages, and hardware), made everyone in the developed world a potential computer user, and vividly highlighted the deficiencies of computers with respect to usability for those who wanted to use computers as tools.

The challenge of personal computing became manifest at an opportune time. The broad project of cognitive science, which incorporated cognitive psychology, artificial intelligence, linguistics, cognitive anthropology, and the philosophy of mind, had formed at the end of the 1970s. Part of the programme of cognitive science was to articulate systematic and scientifically-informed applications to be known as “cognitive engineering”. Thus, at just the point when personal computing presented the practical need for HCI, cognitive science presented people, concepts, skills, and a vision for addressing such needs. HCI was one of the first examples of cognitive engineering.

Other historically fortuitous developments contributed to the establishment of HCI. Software engineering, mired in unmanageable software complexity in the 1970s, was starting to focus on nonfunctional requirements, including usability and maintainability, and on non-linear software development processes that relied heavily on testing. Computer graphics and information retrieval had emerged in the 1970s, and rapidly came to recognize that interactive systems were the key to progressing beyond early achievements. All these threads of development in computer science pointed to the same conclusion: The way forward for computing entailed understanding and better empowering users.

Finally human factors engineering, which had developed many techniques for empirical analysis of human–system interactions in so-called control domains such as aviation and manufacturing, came to see HCI as a valuable and challenging domain in which human operators regularly exerted greater problem-solving discretion. These forces of need and opportunity converged around 1980, focusing a huge burst of human energy, and creating a highly visible interdisciplinary project.

One of the most significant achievements of HCI is its evolving model of the integration of science and practice. Initially this model was articulated as a reciprocal relation between cognitive science and cognitive engineering. Later, it ambitiously incorporated a diverse science foundation, notably Activity Theory, distributed cognition, and ethnomethodology, and a culturally embedded conception of human activity, including the activities of design and technology development. Currently, the model is incorporating design practices and research across a broad spectrum. In these developments, HCI provides a blueprint for a mutual relation between science and practice that is unprecedented.

Early HCI sought to develop synergies between cognitive science and cognitive engineering. During the 1980s a rich reciprocal relationship developed. In areas like user modeling, HCI directly applied key cognitive science theories to the design of command languages and information visualizations. In other cases, HCI provided guidance to cognitive science through embodied concepts like direct manipulation and user interface metaphor. Mutual reciprocity between underlying science and application is rare, but not unprecedented (the discovery of the transistor effect in physics emerged from applied research). For HCI, this starting point was an intellectually sophisticated and ambitious foundation for more radical possibilities.

HCI research and application provided a strong force for theoretical integration within cognitive science. The very first HCI theories were far more ambitious integrations than had been attempted in the basic science. For example, the model human processor (Card et al., 1983) integrated aspects of perception, attention, short-term memory operations, planning, and motor behavior in a single model, at a time when most cognitive science models addressed only isolated laboratory phenomena. Ironically, early models were criticized within HCI as too limited with respect to understanding and creating applications. This self-criticism promoted increasingly comprehensive modeling that has jointly driven the basic science and its applications. But more importantly, these early successes, and their deconstruction, further fueled paradigmatic aspirations in HCI.

In the latter 1980s and early 1990s, HCI assimilated ideas from Activity Theory, distributed cognition, and ethnomethodology. This comprised a fundamental epistemological realignment. For example, the representational theory of mind, a cornerstone of cognitive science, is no longer axiomatic for HCI science. Information processing psychology and laboratory user studies, once the kernel of HCI research, became important, but niche areas. Field studies became typical, and eventually dominant as an empirical paradigm. Collaborative interactions, that is, groups of people working together through and around computer systems (in contrast to the early 1980s user-at-PC situation) have become the default unit of analysis. The contemporary theory-base of HCI draws broadly upon social, cognitive and computation science, and strongly emphasizes design research, pragmatics, and aesthetics. It is remarkable that such fundamental realignments were so easily assimilated by the HCI community.

Comment 5

No-one doubts that ‘fundamental epistemological realignments’, of the kind cited by Carroll, have taken place within HCI, during the period 1980 – 2000. Nor that such realignments were assimilated, in some way, by the HCI community, as part of its development as an ‘enterprise’ (see Carroll’s Abstract earlier). However, Carroll leaves the exact nature of that assimilation undeclared. The reader might assume that the assimilation relates to the incrementation of the ‘sorts of knowledge it uses and produces’.

However, such assimilation is put in doubt by evidence, reported by Newman (1994). His analysis of the CHI and INTERCHI 1989-1993 proceedings showed that only 30 per cent of papers fell into the categories of: improved modeling techniques; solutions; and tools (foundational categories for the development of any discipline). The remaining papers described ‘radical solutions’ (that is, not derived from incremental improvements to solutions to the same problem) and experience and/or heuristics, gained mostly from studies of radical solutions.

Whatever the epistemological re-alignments, cited by Carroll, they do not appear to have produced an assimilation, which resulted in the incrementation of HCI knowledge. Such incrementation is, of course, central to the development of a discipline – engineering or scientific.

Although HCI was always conceived of as a design science, this was construed at first as a boundary, with HCI providing guidance to system design and development. Throughout the 1990s, however, HCI directly assimilated, and eventually itself spawned, a series of design communities. This engagement with design communities coincided with substantial advances in user interface technologies that shifted much of the potential proprietary value of user interfaces into graphical design. Somewhat ironically, designers were welcomed into the HCI community just in time to help remake it as a design discipline. A large part of this transformation was the creation of design disciplines that did not exist before. For example, user experience design and interaction design were not imported into HCI, but rather were among the first exports from HCI to the design world. Design is currently the facet of HCI in most rapid flux. It seems likely that more new design proto-disciplines will emerge from HCI during the next decade.

Conceptions of how underlying science informs and is informed by the worlds of practice and activity have evolved continually in HCI since its inception. Throughout the history of HCI, paradigm-changing scientific and epistemological revisions were deliberately embraced by a field that was, by any measure, succeeding intellectually and practically. The result has been an increasingly fragmented and complex field that has continued to succeed even more. This example contradicts the Kuhnian view of how intellectual projects develop through paradigms that are eventually overthrown.

Comment 6

Other researchers do not share this view of Kuhn with respect to HCI. For example, Dowell and Long (1998) list the necessary elements of a ‘discipline matrix’ by which an HCI discipline might emerge and evolve. First, a ‘shared commitment to models’, which enables a discipline to recognize its scope or ontology. Second, values, which guide the solution to (discipline) problems. Third, ‘symbolic generalisations’, which function as laws (principles) for solving (discipline) problems. Fourth, ‘exemplars’, which are instances of problems and their solutions. Exemplars work by demonstrating the use of models, values and symbolic generalisations to solve discipline problems.

Elsewhere, Salter (2010), following Kuhn (1970), suggests that HCI might evolve in two stages. During the first, termed the ‘crisis’ stage, the shared commitment to models, values and symbolic generalisations are in question, that is, not shared by the HCI community as a whole. During the second, the ‘normal’ stage, the HCI community holds a consensus view, concerning the elements of the discipline matrix and uses them to solve HCI discipline problems.

Carroll’s characterisation of the first three decades of HCI accords well with Kuhn’s Stage 1, that is, the pre-paradigmatic period, often characterised as the period of the ‘warring schools’. This conclusion is consistent with the earlier claim, made in Comment 4, that epistemological and paradigmatic reconciliation and integration currently occur only ‘by product’, ‘by process’ and ‘by practitioner’. None requires a consensus view, concerning the elements of Kuhn’s disciplinary matrix.

However, there is no reconciliation and integration ‘by concept’, which does require a consensus view.  The conclusion of HCI being in a pre-paradigmatic stage is also consistent with Newman’s finding, that 70 per cent of CHI papers, either described radical solutions or experience and/or heuristics, associated with radical solutions. Neither constitutes an exemplar, derived from a consensus HCI disciplinary matrix, as required by Kuhn’s paradigmatic stage (see also Comments 4 and 5).

Note that the Dowell and Long (1989) Conception, expressing the general design (for effectiveness) problem for HCI, was intended to support a consensus among HCI researchers – a pre-requisite for HCI to pass from its current pre-paradigmatic stage to a paradigmatic one.

 

The continuing success of the HCI community in moving its meta-project forward thus has profound implications, not only for human-centered informatics, but for epistemology. (Other sketches of the history of HCI are Carroll, 1997; Myers, 1998; Grudin, 2005.) In this paper, I will elaborate the foregoing interpretation of the emergence of HCI, particularly with respect to conceptions of HCI as a discipline, and of what sorts of theory are possible and appropriate in HCI. My touchstone for this is the contribution of John Long to the development of HCI, especially during the latter 1980s, an extremely formative period. I will place these contributions into a context, reconstructed through the benefits of hindsight, and the latitudes of an essay format. In my view, there is not one true narrative for HCI (or for anything else of even reasonable complexity). Nevertheless, I think my view is grounded and valid, and more importantly, it leads to prospective interpretations of HCI, and of Long’s contributions to it. We do not want to, and, in any case, cannot relive the history of HCI, but we surely ought to try to learn what we can from it.

2. Long’s conception of an engineering discipline of HCI

I met Long in 1989. We met at a conference in Nottingham, at which we were both invited speakers. We were both wrestling, in our separate ways, with what some have called the mid-1980s “theory crisis” in HCI. As briefly sketched above, HCI had been born at the beginning of the 1980s as a paradigm case of cognitive engineering. But what exactly was cognitive engineering? Most early discussions were vague. I first encountered the programmatic notion of cognitive engineering in discussions at the inaugural meeting of the Cognitive Science Society in La Jolla, California, and subsequently in a talk by Norman (1982) at the early HCI conference held at the US Bureau of Standards in Gaithersburg, Maryland. The kernel of the idea was that domains strongly shape cognition, and that studying and supporting cognition in real and complex domains is salutary, if not essential, for developing a science of cognition and, of course, for applying it to real problems.

Comment 7

According to Carroll, the domain for Cognitive Science constitutes the scope of Cognition. In contrast, according to Dowell and Long (1998), for Cognitive Engineering (that is, HCI) the domain constitutes the scope of the interactive worksystem. For Dowell and Long, the domain is the means by which performance of the worksystem is expressed, that is, what work (object/attribute/state changes) it effects and how well that work is carried out (‘Task Quality’). In the latter case, both the domain and the computer (technology) together constitute the scope of user cognition.

 

There is no doubt that this conception was transformative with respect to a range of cognitive science activities in the 1980s and subsequently, among them HCI. But likewise there is no doubt that this conception leaves out many critical details. For example, what does it mean to apply cognitive science? How exactly would that work? It would necessarily involve generalizing from principles and results that originated in narrow and contrived laboratory tasks. There would have to be some perilous inductive leaps, at the least. And even if some of the leaps worked out, would applying cognitive science in some number of cases be enough to warrant the claim that cognitive engineering had arrived?

Comment 8

These are excellent, but still unanswered questions. The conception of HCI as Engineering (Dowell and Long, 1989 and 1998), coupled with the concept of validation – as conceptualization, operationalisation, test and generalization (Long, 1997 and 2000) – together is an attempt to answer such questions.

HCI in the 1980s was strongly invested in the cognitive engineering vision, though for the most part the larger framework remained unarticulated. Card et al. (1983), and more pointedly Newell and Card (1985), made the most comprehensive early effort toward articulating a framework for science and engineering in HCI. Their work embraced a simplistic view of the relation between science and engineering, emphasizing approximation but not boundary conditions on or scope of applicability, and it did not produce the paradigmatic consensus they had hoped for, indeed it evoked sharp criticism (Carroll, 2006; Carroll and Campbell, 1986).

Comment 9

Newell and Card (1985) recognised the need for paradigmatic consensus, even if, as argued by Carroll, they failed to bring it about. Their position is consistent with the position taken in Comment 4.

 

But the Card, Moran and Newell work did provide a touchstone and focus for other theory-based work during the 1980s; it clearly raised the possibility of a theory-based paradigm, and it attracted many other voices to the debate.
When I met Long, we were both engaged in trying to work out what kind of a discipline or project HCI was, or could be seen as, and what kinds of knowledge or theory it used or could use. We were trying to describe how cognitive science knowledge might emerge from and be applied to HCI design work.

Comment 10

Carroll’s claim here is not incorrect. However, by 1989 (Long and Dowell, 1989 and Dowell and Long, 1998), my main goal was to develop HCI Engineering to make good the deficiencies, which characterized Cognitive (Psychology) Science’s application to design.

We were pursuing different answers, the nature of which I will address presently.

2.1. Defining the HCI discipline

Long’s keynote address at the Nottingham conference, a paper written with John Dowell, contrasted three conceptions of HCI as a discipline – craft, applied science, and engineering (Long and Dowell, 1989). Long and Dowell define a discipline as a particular body of knowledge supporting specific practices directed at solving a general problem. I very much like the approach of trying to be as explicit as possible about definitions. This work bluntly proposes a boldly minimalist schematization of discipline.

Comment 11

Carroll confuses ‘minimalist’ conceptualization with ‘high level’. Dowell and Long’s (1989 and 1998) attempt to instantiate a discipline of HCI (Cognitive Engineering) is far from minimalist. See also the application of the conceptions by others (Hill, 2010; Salter, 2010; and Wild, 2010).

Long and Dowell define the general HCI problem as ”the design of humans and computers interacting to perform work effectively” (p. 13). They characterize and analyze three disciplinary conceptions of HCI. First, they suggest that HCI can be pursued as a craft practice, relying on implicit, informal, and experiential knowledge. They discuss the example projects of Prestel videotex (Buckley, 1989) and of the Ded display editor (Bornat and Thimbleby, 1989) as paradigmatic. They argue that a craft practice of HCI cannot be effective: Because its knowledge is implicit, informal, and experiential, its knowledge cannot be operationalized: ”it cannot be directly applied by those who are not associated with the generation of the heuristics or exposed to their use” (p. 18). Moreover, because craft knowledge is heuristic, ”there is no guarantee that practice applying HCI craft knowledge will have the consequences intended” (p. 19). In other words, craft practice is ineffable and unreliable.

Comment 12

Again, Carroll’s assertion here is not incorrect. However, it omits the positive aspects of Craft HCI, identified by Long and Dowell (1989). It is worth quoting their conclusion in full: ‘In summary, although the costs of acquiring its (Craft’s) knowledge would appear acceptable and although its knowledge, when applied by practice sometimes successfully solves the general problem of designing humans and computers interacting to perform work effectively, the craft discipline of HCI is ineffective, because it is generally unable to solve the general problem. It is ineffective, because its knowledge is neither operational (except in practice itself), nor generalisable, nor guaranteed to achieve its intended effect – except as the continued success of its practice and its continued use by successful craftspeople.’ ‘Unreliable’ – yes; but ‘ineffable’ – no.

Second, they consider a disciplinary view of HCI as applied science, relying on knowledge in the form of theories, models, and principles used to formulate and investigate hypotheses, predictions, and explanations. They concede that sciences, like psychology, can be applied to HCI, giving the example of the role of confirmatory feedback in guiding sequences of behavior. However, they argue, such general scientific principles cannot be directly and deductively applied in a specific design because they do not ”prescribe the feedback required . . . to achieve effective performance of work” (p. 20). In other words, the contexts in which the scientific principles were formulated and developed are necessarily different from those that arise for a particular design application, and specifically so with respect to supporting effective work outcomes in the context of the design.

Comment 13

The ‘context’ also includes the differences between the knowledge, practices and general problem of Science and Engineering.

Long and Dowell also consider the related strategy of constructing guidelines/principles which are themselves grounded in scientific theory. They discuss Hammond and Allinson’s (1988) computer assisted informal learning system with respect to the theory-based design principle ”provide distinctive multiple forms of representation” as an example. Long and Dowell argue that such a principle cannot directly guide design, since neither it nor the theories that underwrite it are defined, operationalized, or generalized with respect to effective performance of work activity.

More generally, Long and Dowell argue that although HCI as an applied science describes knowledge more explicitly and more generally, and supports derivation of theory-based guidelines, it ultimately fails in the same way craft practice fails as a disciplinary model for HCI: applied science does not describe how to support particular work activity effectively. As a consequence, the use of science in design must always be empirically mediated by implementation, evaluation, and iteration.

Long and Dowell turn finally to their preferred disciplinary model for HCI: engineering. They state that engineering distinctively solves design problems ”by the specification of designs before their implementation” (p. 24). They describe engineering knowledge as principles that allow ”designs to be prescriptively specified for artifacts, or systems which when implemented, demonstrate a prescribed and assured performance” (p. 24). Finally, they state that engineering can deal systematically with complexity: ”Designs specified at a general level of description may be systematically decomposed until their specification is possible at a level of description of their complete implementation” (p. 24).
Long and Dowell are optimistic about the engineering conception, but provide few details. Indeed, they concede that engineering principles of the sort they require do not exist (as of 1989). They give two examples that they consider promising: Dix and Harrison (1987) and Dowell and Long (1989). Curiously, they do not mention the engineering models of Card et al. (1983). Further, they suggest – actually quite like Card et al. (1983) – that the most promising niches for this disciplinary model to be realized would be in highly practiced, expert performance in relatively low-level task domains in which ”human behavior can be usefully deterministic” (p. 27).

Comment 14

The example of human behaviour, ‘usefully deterministic to some extent’, is actually driver behaviour in response to traffic system protocols – practised and expert; not in the least ‘low level’. Further, the whole issue of the specifiability of designs and the determinism of human behaviours, and their relationship to ‘hard’ and ‘soft’ HCI design problems (including their relationship to the possible formulation of HCI design problems), is fully explored in Dowell and Long (1989). In particular, their Figure 2 shows a classification of design disciplines, which plots discipline practices against discipline knowledge with respect to the ‘hardness’ or ‘softness’ of general design problems.

One of the key constructs that differentiates the engineering disciplinary model of HCI from the applied science and craft models is that the former incorporates an explicit model of the application domain. The craft practice and applied science disciplinary models have no notion of boundary conditions, applicability, or context built into them. Nevertheless, Long and Dowell are confident that engineering principles of the sort they imagine for HCI would be generalizable knowledge, that application of the principles would be direct and indeed specifiable, and effective.

Comment 15

Since 1989, initial HCI design principles have been proposed by Stork (1999) for the domain of domestic energy management and Cummaford (2007) for the domain of business-to-customer electronic commerce. Principles are derived by: (1) diagnosing instances or classes of design problem (as expressed by Dowell and Long (1989) – ‘users not interacting with computers effectively’); (2) by prescribing (and testing) design solutions to those problems; and (3) by identifying and integrating the common elements of the design solutions.

2.2. Problems with Long and Dowell’s conception of discipline

The best thing about the Long and Dowell paper is that it clearly and forcefully sets forth definitions, makes sharp distinctions, and reasons toward a clear programmatic conclusion and recommendation for how HCI ought to organize itself as a discipline. I will return to this point in closing, but let me say now that I regard all that as very important and constructive. HCI is still much in need of clarifying its foundations, and making progress on that will require definitions, distinctions, and argumentation. I believe that Long and Dowell’s paper provides intellectual scaffolding to construct such a debate. Although, as I will presently make clearer, I feel that the paper did not achieve its goals, I think the goals are valuable and that the general versions of the questions raised are still quite open to debate and in need of investigation.

Comment 16

The goals of Long and Dowell (1989) were to develop the HCI’89 conference theme of ‘the theory and practice of HCI’. To achieve this goal, they first defined disciplines in general. Second, they identified the scope of HCI as a discipline. Third, they proposed a framework for different conceptions of HCI. Last, they identified three alternative conceptions of HCI and assessed their effectiveness.

It is unclear how the paper failed to achieve its goal, as claimed by Carroll. However, it is pleasing to see its contribution to the ongoing debate on these matters recognised. Twenty-one years is a long time in HCI and much water has passed under the bridge in that time. Long and Dowell (1989) seems to have survived rather well and continues to be of interest, even maybe of use.

Long and Dowell’s specific argument, however, fails in important ways. First, its general conception of discipline is unwieldy, and the authors do little to ease this concern. They present their conception of discipline in the most general terms, and then hurry onwards without actually describing the HCI discipline in any empirical detail. This omission matters, because the second failure of their paper is that the scoping of HCI they presume is more narrow than the reality of HCI. Moreover, the manifold ways in which HCI has broadened since 1989 make it ever more difficult to see how to extend the Long and Dowell analysis to contemporary HCI. Third, the characterizations and arguments directed at the craft and applied science models for an HCI discipline are dismissive and inadequate. They just do not make the case that these paradigms are inappropriate or necessarily ineffective. Finally, the argument for an engineering disciplinary model of HCI rests on an academic idealization of what engineering is like. No wonder they could not find examples of it.

Comment 17

In general, these claimed failures of Long and Dowell (1989) by Carroll are rejected. Detailed argumentation is associated with individual claims, as they arise.

As is often true, the rub comes with unpacking the details. Surely disciplines codify and use knowledge through practices addressing disciplinary problems. So one would have to say far more to have said anything. For example, we might assume that ”practices” is a union of typical and/or critical workflows, information flows, roles and divisions of labor, and other social arrangements in and around work activity. Indeed, this seems a rather minimal notion of practices. But even this is more than encyclopedic in scope. Such an amorphous and expansive conception of practices is a burden for all would-be framework makers, and not just for Long and Dowell. However, the challenge Long and Dowell more uniquely inflict on themselves is that they insist that the job of HCI is to render practices in an explicit specification. They do not address this explicitly, but it seems to me that they would have to require that the practices be made explicit through some kind of hierarchical task analysis. In 1989, this would have been seen as reasonable, perhaps even as too obvious to belabor. But this is no longer true. Comprehensive task analysis, outside of restricted safety-critical interactions, is moribund (Carroll, 2002). Even its most enthusiastic proponents concede that it is not used (Diaper, 2002).

Comment 18

According to Long and Dowell (1989), discipline knowledge can assume many forms; for example, it can be ‘tacit, formal, experiential, codified etc’. It may also be maintained in a number of ways, for example, ‘in journals, learning systems, procedures, tools etc’. Taken together, these would seem to contradict Carroll’s claim that Dowell and Long ‘insist that the job of HCI is to render practices in an explicit specification’. This contradiction renders Carroll’s subsequent references to hierarchical task analysis difficult to understand and indeed inappropriate.

More pointedly, such a conception of practices rendered explicit through tedious recursive decomposition is empirically wrongheaded. Studies of technical practices in general (Latour, 1987; Latour and Woolgar, 1986; Orr 1996) and of practices in HCI settings specifically (Bentley et al., 1992; Heath and Luff, 2000; Suchman, 1987) refute the possibility that actual domain practices could ever be meaningfully specified in this manner. Practices cannot be specified a priori because at any reasonable level of complexity they depend on the improvisations of people and the local culture of groups. Neither of these can be analyzed a priori or generally. Any programme that requires this level of specification has stumbled on the starting blocks.

Comment 19

Not in the least. Levels of specification for practices (as well as knowledge) vary with the type of possible HCI discipline concerned. See also Comment 18.

Finally, it is important to emphasize that discarding brittle and a priori notions of discipline, practice, and even knowledge does not leave us paradigmatically impaired. Quite to the contrary, empirical approaches have already taught us much. If we want to develop a set of models for disciplines, we should study what scientists, engineers, designers, and other technical persons do, and how they do it (Latour, 1987; Latour and Woolgar, 1986). It is true that empirical approaches do not easily lead to simple type contrasts, such as Long and Dowell’s three paradigms for disciplinary projects, but they do produce exemplars that can be used as models.

Comment 20

Indeed, as demonstrated by the research of Newman (1994) – see Comment 5. However, he found more inappropriate (pre-paradigmatic) HCI examples, than appropriate (paradigmatic) ones. This finding sets limits on the use of (empirical) HCI examples to develop future models for HCI.

Comment 21

In summary, Long and Dowell’s (1989) characterisation of disciplines is hardly controversial. Carroll’s objections are either unclear or inappropriate. Of course, craft and engineering knowledge and practices vary as to their explicitness. However, the differences reside in the conceptions of craft and engineering themselves. Long and Dowell characterize disciplines: (1) with respect to disciplines other than HCI, giving them a more general meaning and reference; (2) at a level of detail sufficient for alternative conceptions of HCI to be usefully contrasted. Lower levels of detail for knowledge and practices are expressed by the conceptions themselves. Carroll’s point, concerning explicitness, would be better made in the context of the latter.

2.3. Problems with Long and Dowell’s scoping of HCI

Long and Dowell spend quite a bit of discussion on the importance of identifying general disciplinary problems. They assert that each discipline has one, though it may be decomposable into subproblems with corresponding sub-disciplines. The general problem of HCI is stated as ”humans and computers interacting to perform work effectively” (p. 13). In their text, it is clear that Long and Dowell recognized that this was a very general general problem, essentially a cross-product of all topics involving humans and their organizations, computers – including embedded and networked devices, and all work activities in which the former employ the latter.

Comment 22

All topics, if and only if they contribute to a design problem, either concerning its diagnosis, the prescription of its solution, or both. Otherwise, not all topics, but only those topics which actually do contribute to the specification of a design problem.

Long and Dowell explicitly identify HCI as a design domain. They give an alternate wording of their own general HCI problem highlighting this: ”the design of humans and computers interacting to perform work effectively” (p. 13). The general problem is
decomposed into ”the design of humans interacting with computers” and ”the design of computers interacting with humans”; they associate the former general problem with Human Factors or Ergonomics, and the latter with Software Engineering. All of these restatements and decompositions of the general problem focus on the effective performance of work activity; the phrase ”to perform work effectively” ends each version of the general HCI problem.

Long and Dowell’s conception, sweeping though it may be, is far too narrow. HCI in the reality of its practice addresses many activities that are not work. Indeed, it is rarely pointed out, but obviously true, that the importance of play, leisure, education, and myriad other non-work activities to HCI undermines many of the 1980s conceptions of HCI that came from ergonomics and human factors, but also from Activity Theory, work psychology, and other continental frameworks that are singularly focused on the workplace. Work is important, but all work and no play would have made HCI a far more dull enterprise. Fortunately, it didn’t turn out that way.

Comment 23

It is the case that, in Long and Dowell (1989), Section 2.2, the concept of ‘work’ is not conceptualized, other than as that which can be performed ‘effectively’. On this evidence alone, Carroll’s claim that their proposed scope of ‘work’ is too narrow, excluding ‘play, leisure, and education’, might appear reasonable. However, in Section 3.3, Conception of HCI as an Engineering Discipline, Long and Dowell make clear that, ‘The behaviours of an interactive worksystem intentionally effect and so correspond with transformation of (domain) objects. Objects are physical and abstract and exhibit the affordance for transformation, arising from the state potential of their attributes. A domain of application is a class of transformations afforded by a class of objects’.

In the case of education, an (educational) interactive worksystem would transform a pupil from ‘uneducated’ (for example, not being able to perform mental arithmetic) to ‘educated’ (for example, being able to perform mental arithmetic). Note, for this to be the case, the pupil is both the ‘user’ of the ‘computer’, and so part of the worksystem, and an object in the domain, whose object/attribute/state (education) is being transformed. Concerning play (for fun), a (fun) interactive work(play)station would transform a player from an ‘unpleasured’ state to a ‘pleasured’ state (for example, the enjoyment associated with scoring higher than last time at some competitive game). Again, the player is both part of the worksystem and an object in the domain of the worksystem (see also Long, 2010).

Dix takes this point well, when he cites Long (1996), as claiming that work is ‘any activity seeking effective performance’ (see also Dix, Comment 8 and Wild, Comment 32). Of course, if Craft, Engineering and Applied Science conceive of ‘work’ (as used by Long and Dowell) as in lay/natural language, then Carroll’s point, concerning its limitation, would hold.

In their scoping of HCI, Long and Dowell seem to over-focus on a particular methodological challenge of the 1980s. During this early period, designers and developers were often construed as customers or recipients of HCI methods and techniques. And it was generally believed that these counterparts wanted and needed HCI methods that were so well specified that they could be effectively put into practice by people not trained or experienced in HCI itself, namely, the designers and developers. This kind of boundary-mediated relationship still exists in some consulting arrangements, but it is no longer an appropriate view of HCI or its goals. Some of these boundaries have dissolved as HCI has come to directly include a greater diversity of professionals, and to produce professionals with more diverse skills. But perhaps more fundamentally, the goal of codifying formal methods that could be applied ”knowledge-free” to crank out good systems is misguided. We return to this point below.

Comment 24

It is the case that, according to Long and Dowell (1989), ‘The conception of HCI engineering principles assumes the possibility of a codified general and testable formulation of HCI knowledge (both substantive and methodological – see Cummaford (2007)), which might be prescriptively applied to designing humans and computers interacting to perform work effectively. Such principles would be unequivocally formal and operational.’

However, elsewhere Dowell and Long (1989) are also quite clear that ‘It is not supposed that the development of effective systems will never require craft skills in some form and engineering principles are not seen to be incompatible with craft knowledge, particularly with respect to their instantiation (Long and Dowell, 1989). At a minimum, engineering principles might be expected to augment the craft knowledge of HF professionals.’ This is hardly the ‘knowledge-free’ cranking out of good systems, as portrayed here by Carroll.

It is also worth noting that ‘soft’ design problems, that is, those which cannot be fully specified (Long and Dowell, 1989 – Figure 2), could not be the object of engineering principles and could only be solved (if at all) by craft knowledge and practices.

The issue of rescoping the general problem of HCI to include non-work activity has a more specific problematic consequence for Long and Dowell. Once we broaden the definition of the general problem of HCI – appropriately – to include play, leisure, education, and so on, the very important qualifier effectively becomes much more difficult to understand. Yet, as we will see, Long’s conception of HCI, and his position regarding the question of what approaches to HCI could be appropriate, had everything to do with understanding and operationalizing effectiveness.

Comment 25

Indeed, this is the case. Effectiveness of interactive worksystem performance, with respect to its domain, is central to Dowell and Long’s (1989) conception. It is precisely these concepts, taken together, which support the expression of the HCI design problem. However, there is no difficulty in conceptualizing the effectiveness of play, leisure and education interactive worksystems (see also Comment 23). Using the education and play examples, cited earlier, the effectiveness of a worksystem is expressed by ‘Task Quality’ – how well the education/play is performed – and the Resource Costs (cognitive, conative and affective) incurred in performing education/play that well.

For example, an effective educational worksystem would transform a pupil from ‘uneducated’ (not knowing any mental arithmetic) to ‘educated’ (knowing some forms of mental arithmetic, for example, addition and subtraction), that is, to desired high ‘Task Quality’ at acceptably low Resource Costs. A less effective worksystem might result only in the acquisition of addition mental arithmetic skills at an undesired lower level of ‘Task Quality’ and at unacceptably high ‘Resource Costs’. The effectiveness of play might be comparably expressed. Carroll’s point is, thus, rejected.

2.4. Problems with Long and Dowell’s conception of craft

The defining characteristic of a craft discipline is that craft knowledge is implicit, informal and acquired from experience. Long and Dowell did not give an example of traditional craft practice, but this would have been useful. Craft practice is the most developed paradigm for technology development in human history and there are many examples of it.

Comment 26

This is a rather general claim, which may (or may not) be true over the whole history of technological development. However, it seems less plausible when applied to: manufacturing; construction; means of transport (land, sea and air); agriculture etc over the last 100-200 years. During this period, both science and engineering have made important contributions to technology development, along with craft.

An excellent example is George Sturt’s (1923) book ”The Wheelwright’s Shop”. Sturt inherited a shop, and decided to learn why the wheelwrights made wheels and carts as they did. One of the design features for which he sought rationale was that traditional English carts have slightly bowl-shaped wheels, mounted so that the portion below the axle is perpendicular to the ground. Sturt was surprised to find that he got several different answers from his expert wheelwrights. He was told by various master wheelwrights that such wheels better accommodate the cooling of iron tires, that they have a smaller turning radius, that they better tolerate sideward swaying of a cart, and that they allow the cart body to be wider at the top, and thereby allow larger loads. This example illustrates how implicit, informal, experiential knowledge can guide a practice.

Comment 27

Long and Dowell (1989), in fact, agree that implicit, informal, experiential knowledge can guide practice – see Comment 24 for more details.

Long and Dowell take a skeptically rationalist view that such codification of knowledge is problematic, perhaps because it cannot be conveyed in written form. But it is arguable that this is a superior form of technical practice. Sturt’s account suggests how multiple converging rationales might make practices more robust as they are passed from expert to apprentice. The wheelwright craft that Sturt investigated had been maintained successfully for more than a century. It was manifestly effective.

Comment 28

The wheelwright craft, as described by Carroll, would seem to be effective, that is, relative to the absence of craft knowledge, for example, in the case of an isolated wheelwright, working on his or her own. However, the practice of such a craft would still take the form of ‘trial and error’ and not ‘specify then implement’. If increased density of loads constituted a design problem, causing unacceptable sideward swaying of the cart, it is unclear how the ‘tried and tested’ (but not ‘known’ in the scientific or engineering sense) practice of wheelwright craft would be able to prescribe a design solution, other than by ‘implement and test’. This practice would be less effective than one of ‘specify then implement’, assuming the design problem to be ‘hard’, that is, fully specifiable (Dowell and Long, 1989, Figure 2).

The advantages of the written form in the codification of HCI knowledge, both substantive and methodological, over implicit/informal/experiential forms are many. First, explicit specification makes possible a conception of the (design) problem of HCI (Dowell and Long, 1989). Second, the conception can be tested against relevant criteria, for example, completeness, coherence, and fitness-for-purpose (Long, 1997). Third, conceptualisation becomes the basis for its operationalisation, test and generalisation (Long, 2001). Fourth, taken together, these latter activities constitute validation of the knowledge in question (Long, 1997). Fifth and last, the written form, with its associated advantages, can be widely disseminated, so making it possible for HCI researchers and practitioners to build on each other’s work. In this way, the HCI community not only grows (see Carroll, Section 1 Introduction); but also progresses its design knowledge. It is for this reason, that written codification of HCI knowledge is important, not because it is written per se (see also Newman, 1994).

Another perspective on alternative HCI paradigms is to ask what sorts of specific affordances a craft practice entails that applied science and engineering do not. One answer is that knowledge is objectified less and proceduralized more in a craft practice. Sturt’s wheelwrights had one another; they worked collectively. They were less in need of engineering standards or explicit theories and guidelines. Their practice had evolved through generations to a level of design refinement not considered by Long and Dowell. The features of the cart wheels could not be derived linearly from single guidelines or sources of rationale; they were over-determined.

Comment 29

It is unclear, who would claim that ‘cart wheels could not be derived linearly from single guidelines or sources of rationale’? Certainly, not Dowell and Long (1989) – see also Comment 28.

Indeed, this line of thinking seems more relevant than ever to the contemporary configuration of HCI. Many more designers participate in HCI than did in the late 1980s. The paradigm of design work is not evolutionary like wagon wheels, but it is a craft practice. It is true that design is typically taught in a studio paradigm, through participation and enactment as opposed to lecture and discussion. And it is true that design knowledge consists in heuristic concepts and techniques rather than deductive principles and laws. Long and Dowell take this as a crippling epistemological limitation, but they argue from an a priori conception of knowledge and the use of knowledge that requires specification and logical derivation.

Comment 30

Long and Dowell’s (1989) summary of their conclusions, concerning craft knowledge, is as follows: ‘In summary, although the costs of acquiring its knowledge would appear acceptable and although its knowledge, when applied by practice sometimes successfully solves the general problem of designing humans and computers interacting to perform work effectively, the craft discipline of HCI is ineffective, because it is generally unable to solve the general problem. It is ineffective, because its knowledge is neither operational (except in practice itself), nor generalisable, nor guaranteed to achieve its intended effect – except as the continued success of its practice and its continued use by craftspeople.’ By no stretch of the imagination can this be understood as ‘a crippling epistemological limitation’. See also Comments 28 and 29. Readers are left to develop their own opinion further on this point.

Such a requirement would only make any sense if we could be assured that such an option exists. Although Long and Dowell hope it might exist, they are unable to give even one example.

Comment 31

This claim would have been correct in 1989. The examples of engineering principles, provided by Long and Dowell (1989), are all taken from other disciplines. Since 1989, however, research on the development of HCI engineering principles has progressed and initial principles have been proposed by Stork (1999) for the domain of domestic energy management and by Cummaford (2007) for the domain of electronic commerce. The claim, then, is no longer tenable.

And in any case, even if we could define a positivistic programme for design along these lines, why would we want to, given that a successful paradigm for the design profession already exists and is already contributing broadly to HCI design?

Comment 32

Long and Dowell (1989) did indeed take cognizance of actual design practice at the time and found it wanting, compared with other scientific and engineering disciplines. Both Dix (2010) and Wild (2010) continue to urge HCI to acquire and to validate more ‘reliable’ HCI design knowledge. The situation, then, has not changed.

I will suggest later that it might make sense to take cognizance of actual HCI design practices in conceptualizing models for a discipline of HCI.

Long and Dowell’s critical analysis of craft is directed most specifically at Bornat and Thimbleby’s (1989) development of the early display editor Ded. In contemporary terminology, Bornat and Thimbleby employed evolutionary prototyping, a method in which designers create a running system and then successively revise their design to respond to user experience. In hindsight, this example is unfortunate with respect to Long and Dowell’s case that craft practice cannot produce generalizable knowledge. For indeed, Ded is an early instance of a design paradigm for text editors that became utterly pervasive throughout the world. It would be difficult to find a better example of design knowledge that proved generalizable, applicable, and effective.

Comment 33

It would be hard to deny that text editors have become pervasive; but there is no necessary connection between their spread and the knowledge acquired by Bornat and Thimbleby (1989). To substantiate the latter claim, it would be necessary to identify the particular design knowledge and the text editor design problem to which their acquired knowledge prescribed a solution. In addition, to be validated, the particular knowledge would need to have been tested further on other text editors and generalised.

Neither Carroll nor Bornat and Thimbleby identify such a relationship between their particular design knowledge and their particular text editor development. Theirs is not, then, the ‘better example of design knowledge’, as claimed by Carroll.

Long and Dowell assume that if a system design is iteratively developed, it can ipso facto only make use of implicit, informal, and experiential knowledge, and cannot be based at all on explicit science or principles. This is bizarre, and all the more bizarre because Bornat and Thimbleby clearly label some of the ideas they articulated and refined in this project as “theories”. We should take them at their word rather than insist, with Long and Dowell, on a false dichotomy between designs that have an iterative process and designs that embody explicit general knowledge (or for that matter heuristic knowledge). This dichotomy is not consistent with design practice. Indeed, it sets an impossibly high standard for the successful use of knowledge in design, namely that the knowledge must be applied a priori through logical derivation and never be wrong, never need to be adjusted. It is important to keep the severity of this standard in mind because it seems likely that no use of knowledge in the history of HCI, and perhaps any complex design domain, has ever attained this standard.

Comment 34

Carroll confuses the requirements for engineering design knowledge, in the form of principles (whose practice would be ‘specify then implement’), as proposed by Dowell and Long (1989), with craft and applied science design knowledge, in the form of guidelines, heuristics etc. (whose practice is ‘implement and test’). This is addressed earlier in Comments 28, 29, and 30.

2.5. Problems with Long and Dowell’s conception of applied science

Long and Dowell argue that the general problem of scientific disciplines is to predict and explain phenomena, not to specify designs that support working effectively. Thus, scientific knowledge can help us predict and explain, but not prescribe designs. This seems an unwarrantedly limited view of the bounds of knowledge and creativity.

Comment 35

Not at all. It is, in fact, quite the opposite. It is a sensible ‘division of labour’ and scope between different disciplines.

If by prescribing designs, Long and Dowell really do mean logically derive designs, then one would have to say that scientific knowledge also cannot serve prediction or explanation, since these applications of scientific knowledge, even safely restricted to basic science discourses, are invariably creative, and not purely mechanical endeavors.

Comment 36

However, more mature disciplines have a consensus, concerning certain conceptions, operationalisations, tests and generalisations (Long, 1997). Otherwise, how would they be able to validate their knowledge, of whatever sort? It is precisely this kind of validation that is currently noticeable only by its absence in HCI.

Moreover, as I will emphasize in the immediately following section, there is no known way to prescribe designs in this limited sense, thus failing to prescribe designs must be seen as a vacuous failure.

I am of course aware that the ascription of positivism is now regarded as discourteous, but I do think that Long and Dowell are venturing into the hoary traditions of positivism. Applying knowledge is frequently a creative endeavor, in science or anywhere else. Knowledge does not come with rules of application, rather these are argued for and constructed as knowledge is put into use. In contemporary epistemology, we alter our conceptions about knowledge and its application when confronted with insights and successes. We should not turn away from insights and successes because they fail to follow a priori rules. We should alter the rules.

Comment 37

I will not rise here to the bait of being labeled a positivist – see my response in Long (2010). It is good to see that Carroll accepts that knowledge and its application have ‘rules’. Long and Dowell (1989) can be understood as an attempt to express such rules, as concerns HCI, in the form of design principles.

In their particular analysis of Hammond and Allinson’s theory-based design of a computer-aided learning system, Long and Dowell rather freely admit that the sophisticated use of psychological theory in the design “might have been expected to modify learning behavior towards that of the easier recall of materials” (p. 22). This seems to contradict their general stance that science cannot prescribe designs, but nevertheless I agree with them. However, they go on to make an interesting distinction. They say that the theories Hammond and Allinson appealed to do not directly “address the problem of the design of effective learning” (p. 22), and that as a result Hammond and Allinson’s design might support more effective recall (the specific consequence that the theories did address), but still fail to support effective learning. This is cutting things pretty finely. Most psychologists would take better recall to be a learning achievement, though it is true that better recall is not necessarily indicative of better learning in every sense, or more specifically, of effective learning with respect to particular criteria or learning objectives. Accordingly, Long and Dowell conclude, Hammond and Allinson’s design work would necessarily have to progress via prototyping, evaluation, and iteration.

Comment 38

It is hard to see how Hammond and Allinson might have proceeded otherwise.

This argument is exceedingly peculiar. By granting that the theory-based design approach could reasonably expect to realize certain specific consequences for users (enhanced recall), Long and Dowell are essentially granting exactly what Hammond and Allinson claimed. Namely, they are granting that science can be applied in design; that designs can embody principles derived from scientific theory, and that consequences for users of the design can be anticipated from the theory. What they are balking at is the claim that scientific theory could completely specify a design, including all of its detailed consequences for users (cf. “the design of effective learning”), a rather bold claim that, to my knowledge, no one ever made.

The shortcoming of Hammond and Allinson’s theory-based approach, namely, not being able to precisely specify the design of effective learning, entails that they must augment theory-based guidance with direct empirical approaches – prototyping, evaluation and iteration. But this is precisely how theory-based design in HCI has always worked (Johnson et al., 1989). Given that there is no way to prescribe designs, and that all design of any complexity must be empirical in just this sense, then Long and Dowell’s critique of the disciplinary model of applied science vis-à-vis HCI design is an argument about a straw man.

Comment 39

The argument is not in the least about a straw man. As made clear, science and engineering are different disciplines with different knowledges and different practices (Dowell and Long, 1989; Long and Dowell, 1989). Any relationship between the two, concerning HCI, needs to be rationalized and justified, then put to the test, that is, validated. Which validated scientific theories or models have solved which HCI design problems? We must be told.

2.6. Problems with Long and Dowell’s conception of engineering

A touchstone example of software engineering in the real world is Brooks’s (1975) “Mythical Man–Month”, essays written primarily about one of the largest software engineering projects in history, IBM Operating System 360. On Long and Dowell’s view of engineering we might expect to read about how the OS 360 design was successively decomposed until its subsystems could be completely specified, about how the system was explicitly and completely specified before it was implemented, and about how engineering principles were used to “prescribe and assure” its performance before it was designed and implemented. But as everyone knows, the design of OS 360 did not work like that.

Comment 40

If Software Engineering knowledge includes principles, which provide design solutions to ‘hard’ HCI design problems (Dowell and Long, 1989), then they would expect the development of OS 360 to have included the application of such principles. For the rest, a range of software engineering knowledge and practice would be expected to have been used, exactly as supposed by Dowell and Long (1989). As they argue: ‘Engineering principles are not seen to be incompatible with craft knowledge… at a minimum engineering principles might be expected to augment the craft knowledge of professionals.’ See also Comment 24.

The complex and iterative nature of actual engineering processes is over-determined, as Brooks describes and as many subsequent studies have elaborated. For example, in a top–down framework, requirements guide the development of early designs. However, in practice, early designs typically cause requirements to be further developed, altered, and even abandoned. Design also helps to identify new requirements that were not part of the original project mandate, but which may be quite essential once they are identified. Brooks emphasizes that these iterative relationships obtain not merely because requirements happened not to be noticed, but because they cannot be identified until the early design enables their discovery. Today, none of this is shocking. Linear waterfall models of design and development have been succeeded by models including feedback and iteration.

Comment 41

If requirements cannot be identified, and so not specified, then they cannot express a ‘hard’ design problem, nor be the object of engineering principles to satisfy them (Dowell and Long, 1989 – Figure 2, Section 1.4, Human Factors Engineering Principles). Other types of HCI design knowledge and practices would be required. See also earlier Comments 24 and 40.

Again turning the clock back to the mid-1980s, the apparent disconnect between Long and Dowell’s view of engineering, and Brooks’ case study of engineering practice in a large software development project, becomes more comprehensible. Early methodological conceptions of HCI were frequently oriented to system development processes that had traditionally overlooked consideration of users and other human stakeholders. Ironically, because HCI was not integrated into these processes, and often had no standing in the organizations practicing these processes, HCI methods were developed with an inadequate understanding of their ultimate context of use. Brooks’ view of OS 360 is an insider’s view, and in many respects an exposé. Part of the reason that Brooks’ book is a classic is that when published it was so iconoclastic, refuting core conceptions about software engineering. Long and Dowell were addressing engineering from an external view. They were trying to conceptualize an HCI discipline that could effectively contribute to system engineering processes as officially described.

Comment 42

The engineering examples, used by Long and Dowell (1989), to illustrate their conception for HCI engineering, were taken from more traditional engineering disciplines. The latter did not include Software Engineering, whether ‘officially’ or ‘unofficially’ described. Carroll’s point here, then, is misguided.

There is, of course, an engineering paradigm for HCI. It developed in the 1990s, and it is interesting to observe how it differs from the conception of Long and Dowell. Usability engineering is one of the core clusters of method and process in HCI (Nielsen, 1994; Rosson and Carroll, 2002). Notably the sense of engineering in this stream of activity is systematic but it is fundamentally empirical. Its primary focus is methods to directly involve users via participatory design and analysis, and to assess user experiences through surveys and interviews, many kinds of scenario exercises, and direct evaluations, including thinking-aloud studies, throughout every stage of the system development process, from requirements identification through to documentation design. It uses models, for example GOMS, but in relatively limited ways.

Comment 43

This ‘engineering paradigm for HCI’, as described by Carroll, might well have been accepted by Long and Dowell (1989) as ‘craft engineering’, provided the ‘system development process’ solved design problems of ‘users interacting with computers to perform work effectively’. There is no claim here that either Nielsen’s (1994) or Rosson and Carroll’s (2002) work has been validated by others, that is: conceptualised, operationalised, tested and generalised. As such, it can be applied as part of an ‘implement and test’ practice.

3. A hundred flowers that blossomed

In closing their paper, Long and Dowell (1989) considered whether craft practice, applied science and engineering could function together as a mutually supportive ensemble of disciplinary models. They point out that the three paradigms use and produce knowledge and results that are not always mutually intelligible. However, they counter-argue that the three paradigms together would better exploit what is known and what is practiced in HCI, and that integrating the three disciplinary models could encourage an HCI community “superordinate to any single discipline conception” (p. 30). While I have tried to scrutinize many of the technical points and arguments in the paper, I think Long and Dowell ultimately were led to a larger truth, one that is strongly evidenced in the HCI we see today.

Comment 44

On this point, Carroll appears to be in agreement with Long and Dowell (1989) – see, for example, Comment 30 earlier. However, the following distinction might throw light on a range of disagreements, identified for example, by Comments 2, 7, 18, 24, 39, and 41.

Long and Dowell are primarily concerned about the nature of the actual (Craft and Applied Science) and the possible future (Engineering) discipline(s) of HCI, in terms of their knowledge and practices, how they differ and their relative effectiveness. They are, of course, aware that such disciplines are constructed and practised by associated communities, ‘superordinate to any single discipline conception’. However, they consider that for their purposes of comparison and evaluation, the more specific concept of discipline, with its emphasis on knowledge and practices, is preferable to the more general concept of community, with its wider, for example, social connotations (see also Dix (2010)). Carroll, in contrast, seems more comfortable with the concept of community, rather than that of discipline. This point is illustrated in Comment 45.

During the 20 years since Nottingham, HCI has changed in many ways. The lens of the Long and Dowell paper highlights three vectors of change that are intriguing and challenging, and that help to sound echoes of the Long and Dowell paper. First, it has become ever clearer that craft is the primary source of innovation in HCI. The primary role of science in HCI is to help us understand these innovations after they occur. Second, applied science in HCI provides explicit foundation for engineering models. These first two points emphasize how we need to link up the disciplinary models for HCI, instead of examining and evaluating them separately as competitive paradigms.

Comment 45

More agreement, here, apparently. Craft innovates (or maybe invents). Science seeks to understand these innovations. Engineering can adopt aspects of that understanding.

The problem arises in any attempt ‘to link them up’ or, in Long and Dowell’s (1989) terms, how ‘one conception might be usefully but indirectly informed by the discipline knowledge of another’. Of course, this linking/informing might naturally occur during community activities, such as conference attendance. However, such linking/informing requires a ‘reflexive act’ involving intuition and reason. Thus, contrary to common assumptions, the craft, applied science and engineering conceptions of the discipline of HCI are similarly reflexive with regard to the general design problem. The initial generation of albeit different discipline knowledges requires in each case the reflexive cognitive act of reason and intuition. In other words, linking up/informing is not just a matter of joining up the different discipline knowledges and practices, but rather of recruiting the ones to inform the others.

For example, Stork (1999), in the domain of domestic energy management, and Cummaford (2007), in the domain of electronic shopping, attempt to formulate initial engineering design principles. In each case, design problems (‘users interacting with computers, where actual performance was less than desired, expressed as Task Quality and User (Resource) Costs’) were diagnosed and design solutions specified. The initial, putative HCI engineering design principles were developed from the commonalities (and non-commonalities) between the design solutions with respect to the design problems.

The point here is that the initial individual design problems were diagnosed and solved ‘empirically’ (in terms of Salter’s (2010) Figure 8), that is, by trial and error (‘implement then test’), using craft, applied science and engineering (models and methods – see Long, 2010) knowledges and practices. However, future validation of the initial design principles (as conceptualised, operationalised, tested and generalised (Long, 1997)) cannot be construed as the validation of the reflexive cognitive act concerning, or the recruitment of, these knowledges and practices.

Third, the science foundation for HCI is incredibly rich and fragmented, more than anyone expected in the mid-1980s, and perhaps with more to come. We live in a time when the nature of science itself has been deeply questioned. Some of the debates occurring now in HCI with regard to its proper footing in science make the mid-1980s theory crisis seem mild indeed.

Comment 46

For Long and Dowell, the crisis remains very much the same as it was in 1989. What is (are) the HCI discipline(s) now and in the future, as concerns their knowledge and practices? How to increase their effectiveness/reliability (Dowell and Long, 1989)? How to develop the consensus, required for researchers to build on each other’s work to achieve such effectiveness/reliability (Stork, 1999; Cummaford, 2007)? The relative richness and fragmentation of ‘the science foundation for HCI’ is not central to providing the wherewithal to answer these questions.

3.1. Craft innovations drive HCI science

The original vision for HCI was that cognitive science theory would produce or guide cognitive engineering of better systems. This programme was affirmed and pursued zealously throughout the 1980s, and through to the present. Good examples of this paradigm do exist, I believe. The Hammond and Allinson paper that Long and Dowell deconstructed is a wonderful example. However, it is also easy to be skeptical of “science-based design” as a general disciplinary model for HCI. Long and Dowell were articulating that skepticism.

Comment 47

Long and Dowell (1989) do not doubt that the Hammond and Allinson (1988) guidelines might help practitioners design better interactive systems; but neither do they doubt that they might not so help. They are, however, convinced that without validation (as conceptualisation; operationalisation; test; and generalisation – Long, 1997), the guidelines can only support ‘trial and error’ design practices. The guidelines are simply not known to be sufficiently reliable (as required by both Dix and Wild (2010)) to guarantee better interactive systems. Their science origins (via some unspecified, informal transformation) would be no warranty for such reliability or guarantee (see Salter (2010) – Figure 8, empirical derivation and validation). HCI scientists (psychologists; sociologists etc.) might be satisfied by this state-of-affairs, as it provides a market for their wares; but it is doubtful that hard-pressed HCI practitioners, battling for a position in the IT design marketplace, would share their view.

Throughout the history of HCI the truly game-changing innovations have tended to be craft based. The pivotal design concept of direct manipulation, and the early scientific accounts of it, are a case in point (Shneiderman, 1983; Hutchins et al., 1985). Direct manipulation is the style of computer interaction in which a person manipulates data and functions through gestures with display objects, for example, pointing and clicking in windows with a mouse — as contrasted with referring to data and functions by name in typed command strings. The principal object of these analyses, the point-and-click graphical user interface, was developed years before the original scientific accounts, and indeed, direct manipulation is still being theorized and further developed (e.g., Plouznikoff et al., 2005). The graphical user interface as a design concept was developed pretty much through craft innovations during the decade from the mid-1960s through the mid-1970s (e.g., Buxton et al., 2005; Myers, 1998).

Comment 48

These technological innovations and inventions accrue almost entirely to the credit of HCI craft knowledge and practice. There is no reason to believe this situation will change, as ‘invention’ is a ‘soft’ problem (Dowell and Long, 1989), which cannot be explicitly and completely specified. Other forms of HCI knowledge and practice need to adapt/accommodate to this source and manner of innovation.

It is interesting that even the early theories of direct manipulation emerged long after the technology innovation itself. Thus, far from determining or even guiding the technology development and user interface design, the theories served the purpose of interpreting, consolidating, and abstracting the lessons from craft innovation to help move HCI research and development work forward in a more explicit and deliberate manner. This is very valuable; it allows for an engineering practice to be codified from the more implicit craftwork.

Comment 49

Of course, codifying craft design knowledge, to support engineering knowledge and practice, is an idea worthy of development. However, how might such codification be carried out? Carroll later concedes that (craft) designs are ‘difficult to read’. Some examples are sorely needed here to support Carroll’s claim that craft knowledge can be codified. Note that in the work of Stork (1999) and Cummaford (2007), craft design is not codified directly into engineering principles, as its effectiveness is not known. However, it is recruited informally, via its support for the solution of individual design problems, to the codification, expressed as design principles.

I describe the direct manipulation example because it is and was so central to the development of HCI. However, similar patterns can be seen in the development of HCI science and theory for other key concepts and techniques in HCI design. For example, Blackwell (2006) provides a vivid exegesis of how craft and science have shaped the use of metaphor in HCI through the course of three decades. The history of HCI science is one of technology innovations mysteriously popping out of craftwork, and then eventually being noticed, analyzed, and codified in models and theories.

Comment 50

It would be of interest to have, here, a codified (engineering) example of craft metaphor invention/innovation (see also Comments 48 and 49).

3.2. Applied science provides explicit foundation for engineering models

The most ambitious articulation of the relationship between cognitive science and its application is that of a reciprocal relationship at the level of models (Norman, 1982). This is to make a distinction between specific (one-off) design applications of cognitive science, like the Hammond and Allinson computer assisted learning system, and systemic applications in which the science is a foundation for an engineering model that can be more generally applied. Card et al.’s (1983) development of the Model Human Processor (MHP) and the Goals, Operators, Methods and Selection rules (GOMS) model for analyzing routine human–computer interactions is a very early example of applied cognitive science theory in HCI that also provided direct guidance for a set of engineering models.

Comment 51

These different claims prompt the following questions. First, what is the difference between the MHP Model and the GOMS Method as: 1. Cognitive Science Theory; 2. Applied Cognitive Science Theory; and 3. Engineering Models, and what is the relationship between them? Second, by changing which aspects of these relationships would HCI design knowledge and practice be made more reliable/effective? Third, how might such changes be brought about? For Long and Dowell’s (1989) answer to these questions – see Comment 45.

As I briefly noted above, these models were somewhat narrowly scoped, but seen in the context of cognitive science ca. 1980, this was some of the most comprehensive applied science work ever attempted. The model explicitly integrated many components of skilled performance – perception, attention, short-term memory operations, planning, and motor behavior – to produce predictions about expert performance in real tasks.

The MHP/GOMS model, taken as an engineering model, was an advance over prior human factors approaches in that it explicitly described the cognitive structures underlying manifest behavior. In other words, it was an engineering model directly and explicitly grounded in scientific understanding of information processing psychology. But as a scientific account, this model was a huge step forward also: Cognitive science models and theories up to that point had not attempted such a level of integration. The comprehensiveness of these early models vis-à-vis cognitive science can be seen as directly caused by their purpose vis-à-vis cognitive engineering. In order to generate detailed quantitative predictions about user performance in realistic contexts, HCI models must make explicit assumptions about a wide range of human characteristics.
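The kind of detailed quantitative prediction described here can be illustrated with a Keystroke-Level Model calculation, the simplest member of the GOMS family. The sketch below is illustrative only, not Card et al.’s own software: the operator times are the commonly cited KLM estimates (in seconds) from Card, Moran and Newell, and the two task sequences compared are hypothetical.

```python
# Minimal, illustrative Keystroke-Level Model (KLM) sketch.
# Standard operator-time estimates, in seconds (Card, Moran and Newell).
OPERATORS = {
    "K": 0.2,   # press a key or button (skilled typist)
    "P": 1.1,   # point with a mouse to a target on the display
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def klm_predict(sequence):
    """Predict expert task time by summing operator times, e.g. 'HMPK'."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical comparison: issuing a command by typing a 10-character
# string, versus grabbing the mouse and pointing-and-clicking twice.
typed = klm_predict("M" + "K" * 10)         # 1.35 + 10 * 0.2  = 3.35 s
pointed = klm_predict("H" + "MPK" + "MPK")  # 0.4 + 2 * 2.65   = 5.70 s
print(f"typed:   {typed:.2f} s")
print(f"pointed: {pointed:.2f} s")
```

Even this toy version shows the character of the engineering model: it makes explicit assumptions about perception, memory and motor behavior (the fixed operator times) in order to yield a prediction before any prototype exists.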

Comment 52

There is no disagreement that the MHP/GOMS model was novel and an advance, or that it had much in common with information processing psychology models of the time. However, its subsequent development and validation as a ‘cognitive science model’ has been at best modest and at worst noticeable only by its absence. It is unclear that it currently has any status as a cognitive science model in the process of being validated (as conceptualised; operationalised; tested; and generalised) in terms of its understanding, that is, explanation and prediction, of HCI phenomena.

Not all engineering models in HCI are as focused and narrow as MHP/GOMS. For example, the various developments of usability engineering could be considered a collection of engineering models. The usability engineering textbook I wrote with Rosson (Rosson and Carroll, 2002) quite explicitly presents a system development lifecycle engineering model, a wide range of systematic methods that usability engineers can follow to better assure that their designs are useful and usable to people. Our conception of engineering is heavily influenced by Brooks; we emphasize many approaches to prototyping, and emphasize throughout the book that one of the primary mistakes to avoid is premature commitment: thinking that the first passably-acceptable solution generated is the stopping point for design. If I were to write that book today I would significantly broaden its treatment of nuances of quality in the user experience. We emphasized satisfaction, but qualities like fun and engagement deserve more consideration. Of course this would make usability engineering even more dependent on applied science, and even more empirical.

Comment 53

It is unclear why a more empirical engineering (of qualities like fun and engagement) would (necessarily) be more dependent on applied science. Empirical derivation and validation of client requirements and artefact, as well as the relations between them (see Salter, 2010 – Figure 8), has no necessary relation to applied science (Figure 3).

Just recently, the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (http://eics-conference.org/2009/) was launched, but again the notion of engineering in this conference series is far broader than that of mechanical derivation of solutions with predetermined properties.

Comment 54

How does Carroll propose to increase the reliability and effectiveness (as required by Dix, Wild, Hill, Salter and Long (all 2010)) of HCI knowledge and practice in the total absence of ‘pre-determined properties’ (see also Comments 45 and 50, also Salter (2010), Figure 8)?

3.3. The science foundation for HCI is incredibly rich and fragmented

The science foundation of HCI in the early 1980s was cognitive science, chiefly cognitive psychology. But this quickly changed. Even by the end of the 1980s, many other approaches with roots in social psychology, sociology and anthropology were moving to the center of HCI. A technological reason for this was that collaborative systems were emerging and raised many questions that could not be articulated in a cognitive paradigm. More importantly, HCI was reaching out to understanding technology in use, and actual contexts of use nearly always involved multiple people, work practices, organizational structures and myriad factors beyond the purview of individual cognition.

Suchman’s (1987) study of photocopier use was iconic. She described a variety of usability problems with advanced photocopier user interfaces. The problems she identified were fairly typical of HCI studies of the time (Carroll, 2003). But her approach and analysis were distinct in important ways. First, she studied people doing real work in a workplace context, not in a laboratory setting, as was typical at the time. Indeed, many of her participants were scientists at Xerox PARC going about their research work, and occasionally struggling to make copies. Second, she analyzed the interaction between the person and the machine as a sort of conversation that can fail when the actions of the participants are not intelligible to one another. This raised the level of analysis to that of human agency, as opposed to operating characteristics of the mind-as-a-computer (memory limitation, incorrect rules, etc.). Third, she directed her analysis to very fundamental issues in cognitive science. Thus, based on the amount of creative improvisation she observed in people trying to fathom and use photocopiers, she concluded that the concept of plans as causal accounts of human action was fundamentally flawed. Plans might be resources for action, but could not determine action in circumstances of any significant complexity.

Comment 55

Suchman’s research (1987) is indeed novel and interesting. However, following Dowell and Long (1989), it does not necessarily move social psychology, sociology, and anthropology to the centre of HCI (see also Comment 53). Suchman’s highlighting of ‘real’ work, human agency and creative improvisation might contribute to the relative softness or hardness of design problems (perhaps increasing the former and decreasing the latter – Dowell and Long, 1989). If so, more (or even all) craft engineering and less (even no) principles engineering would be required for their solution.

By the end of the 1980s, HCI had become an international research community. Threads of work that had been going on in the United States and in Europe were increasingly brought together through Europeans visiting the mostly-American Association for Computing Machinery (ACM) CHI Conference and Americans visiting the mostly-European International Federation of Information Processing (IFIP) INTERACT Conference. For example, Bjerknes et al. (1987) published their collection on participatory design. Although they do argue that involving users directly in the inner sanctums of design deliberation is technically effective, the primary theme and argument in their book links participatory design to democracy, and asserts that sharing power with users in the design process is a better moral choice. Issues of self-determination in the workplace and power sharing and participation in software design cannot be articulated in a purely cognitive HCI.

Comment 56

Indeed. However, ‘issues of self-determination in the workplace and power sharing and participation in software design’ must necessarily be specified, explicitly or implicitly, to figure (explicitly or implicitly) in any design solution.

These developments contributed to a scientific foundation far more rich, far more diverse than the starting points of the early 1980s. By the end of the 1990s, HCI looked quite different. Social psychology concepts like production blocking, conformity, social loafing, and social pressure had become as commonplace as memory capacity, consistency, and index of difficulty had been in the 1980s.

Activity theory and distributed cognition were established paradigms in theory, in many ways eclipsing information processing psychology as the establishment in theory. Ethnographical fieldwork had become a methodological touchstone for understanding usability.

Comment 57

Comment 55, which highlighted the possible contributions of social psychology, sociology and anthropology to HCI design problem specification, is equally applicable here to Activity Theory, Distributed Cognition, Information Processing Psychology and Ethnography. The reasoning is the same in both cases.

HCI theory during this period was highly successful, in the sense of producing explanations and principled descriptions of human–computer interaction contexts (themselves largely produced through craft innovations) that have had great impact on cognitive science. Just as the MHP/GOMS model had led cognitive science in the 1980s, Suchman’s analysis of situated actions, distributed cognition and activity theoretic models, and studies of computer-mediated collaboration had substantial influence throughout cognitive science. For example, in 1993 a special issue of Cognitive Science, the field’s flagship journal, was addressed to reconsidering the impact of Suchman’s 1987 book on the field of cognitive science. Similarly, in 2006 a special issue of the Journal of the Learning Sciences, the flagship cognitive science journal in learning, was directed to reflection on Suchman’s contribution.

Comment 58

There is no doubt that Suchman’s research had the potential to increase the scope of cognitive science. However, its potential for contributing to the validation of cognitive science knowledge and practice is less clear. The evidence for such validation would appear to be, at best, thin.

 

3.4. Design rationale as theory: the task-artifact framework

My own contribution to the Nottingham conference was more similar to Long’s than I realized at the time. Like Long, I also sought to formulate a programme for HCI.

Comment 59

Long and Dowell (1989) present an analysis of HCI in terms of three alternative conceptions of a possible discipline of HCI – Craft, Applied Science and Engineering. Dowell and Long present a Conception for HCI Engineering. The latter might be termed a ‘programme’. However, no equivalent ‘programme’ is proposed for Craft and Applied Science HCI, unless it be the common expression of the HCI design problem, that is, ‘users interacting with computers to perform work effectively’.

Like Long, I had concluded that HCI could not comprehensively be constructed as applied cognitive science. In my paper (Carroll, 1989), I suggested that the most effective role for science in HCI design might be to interpret designs-in-use, to codify the knowledge implicit in designs so that it could be used more explicitly in future designs. I suggested that bringing this interpretative work into the design process itself might be the closest we could get to theory-based design. I saw this as augmenting the paradigm of craft practice to demystify the how and why, quite analogous to what George Sturt was trying to do with the craft of wheelwrights.

Comment 60

Carroll’s ‘programme’ for HCI certainly appeals to many HCI researchers and is clearly worth pursuing. However, he needs to recognize that ‘codified knowledge implicit in designs’ would need to be subject to either the empirical or formal derivation and validation cycles (or both), as set out by Salter (2010 – Figure 8) or some such.

Much like Sturt, I suggested that designed artifacts ought to “read” as theories, that system design and development outcomes should be directly leveraged as knowledge outcomes (Carroll and Campbell, 1989). Implemented designs have nice properties, considered as knowledge outcomes: they are precise and complete; that is, they are complete enough to run and do whatever it is that they do, and they cannot leave things vague or make unrealistic simplifying assumptions the way a discursive theory can.

Comment 61

However, one not very ‘nice’ property of implemented designs, as design solutions, is ignored by Carroll. That property is the explicit (or implicit) design problem, for which the implemented design is a solution. In the absence of such a specification, the precision and completeness of designs can contribute little to the construction of theory. It is for this reason that Long and Dowell (1989) assign such importance to the specification of the design problem of HCI, in terms of performance and especially effectiveness, both for the acquisition and validation of design knowledge and for establishing a consensus within HCI, which would permit the testing of alternative (knowledge-based) design solutions against the same design problem. In this way, researchers could build on each other’s work, evaluate the effectiveness of such work and increment the knowledge and practice of the discipline, as encouraged by Newman (1994).

Designs must take a stand on every issue they encounter. Designs seamlessly integrate ideas from many sources and from many levels of analysis; that is, a design may embody ideas from GOMS with respect to keystroke-level interactions, ideas from Activity Theory about leveraging cultural practices in new work designs, and ideas from social psychology about how collaborators can quickly come to trust one another. Discursive theories are notoriously bad at spanning levels of analysis; indeed, many take it as given that levels of analysis can never be spanned. Finally, designs are also unavoidably testable, that is, when people use them, their use generates consequences, vivid, concrete and often poignant.

The main thing that makes designs poor as theories is that they are difficult to read. The propositions of an artifact’s theory are implicit, after all; they must be constructed by an analyst. And there’s the rub. How can we identify the theory that is implicit in an implemented design? My answer was design rationale, the documentation traditionally generated in the design process describing the issues, decisions, choices, and consequences that were considered, pursued, abandoned, and/or implemented. Of course, because design rationale is documentation, it is often viewed as tedious, boring, and bureaucratic. Moreover, because most designs ultimately fail in one way or another, creating an explicit and thorough design rationale is analogous to carefully leaving your fingerprints all over a spot you know will most likely be a crime scene.

Comment 62

Carroll does not make explicit here whether ‘design rationale’ constitutes ‘codified craft knowledge’ as theory. Either way, he still has to specify whether design rationale corresponds to the empirical or formal derivation and validation cycles (or both), presented in Salter’s (2010) Figure 8 (see also Comment 60).

In Longian terms, I wanted to imagine ways to better integrate HCI craft practices with applied social and cognitive science so as to do the least violence to the manifest effectiveness of HCI craft practices, but at the same time help those practices to be more deliberative, more auditable, and more manageable.

Comment 63

If HCI craft design practices are manifestly effective, then presumably HCI craft design knowledge is equally manifestly effective in supporting these design practices. In the latter case, Carroll is wise to guard against applied social and cognitive science doing them violence (sic), while attempting to make craft design practices ‘more deliberative, more auditable and more manageable’. If ‘design rationale’ is codified craft knowledge (see Comment 62), then Carroll needs to clarify its formal and empirical derivation and validation cycles (or both), as required by Salter’s Figure 8 (2010).

The key criterion for me was intelligibility to the practices and values of HCI designers. Thus, I eventually built my own methodological prescriptions out of scenario-based design, emphasizing the practical utility of narrative representations as well as their analytic utility in suggesting and contextualizing rationales (e.g., Carroll, 2000).

During the years after Nottingham, and to some extent caused by prodding from Long, I kept at this line of thinking, eventually reaching the programmatic claim that in HCI design rationale is the theory (Carroll and Rosson, 2003). During the early 1990s, I had many enjoyable interactions with Long. My recollection is that Long accepted that my approach integrated craft and applied science, but, perhaps not surprisingly, he felt our “design rationale as theory” programme was underspecified as a disciplinary model, and that, in particular, it needed to be more explicit about defining effectiveness. Our discussion never came to an ending.

Comment 64

Carroll’s view of our interactions since 1989, and my view of his work, are both fair and about right. The importance of requiring a more explicit expression of effectiveness, in Carroll’s craft and applied science disciplinary model, resides in its being a pre-requisite for expressing design problems. Without the possibility of expressing a design problem explicitly, it is unclear how ‘design rationale as theory’ is able to diagnose the design problems, for which it provides a solution, and to know, indeed, that the solution is a correct one (that is, ‘sound’, in Carroll’s own words later). Expressing design problems explicitly, in turn, is a pre-requisite for HCI researchers to use alternative ‘theories’ to solve the same (consensus) design problem and so compare the effectiveness of alternative ‘theories’, as required by Newman (1994) (see also Comment 61). My discussion with Carroll never came to an (agreed) ending and indeed, it would appear, thanks to the Festschrift and the present commentary, may never come to an ending at all. I am not unhappy at this state of affairs. All is not unwell, that does not end unwell.

3.5. Making sense of HCI

In 1989, many in the field felt that HCI needed a better-defined paradigm, or as Long and Dowell termed it, a better-defined disciplinary model. For even in 1989, HCI was plainly a thriving and growing socio-technical endeavor that was diversifying much more than it was converging. Perhaps framing disciplinary models is just the human impulse to closure, to create figure from ground. Perhaps it is just researchers doing what they do. With another 20 years of hindsight, we can see more plainly now that HCI continues to diversify more than to narrow. Not only do Long and Dowell’s original contenders live on, but we have many variants of each.

Long and Dowell’s technical analysis wound up being more nihilistic than they most likely intended. I think this was because they faithfully and energetically applied overly rigid and a priori models of all three of their disciplinary models for HCI. To me this is the tragic pattern of positivism, which I take here as a paradigm-defining concern with how propositions are generated (discovery procedures, predictive models), warranted (usually in observable empirical phenomena), and logically related (e.g., by derivation or generalization). Positivism, as far as I can tell, is motivated by good and noble impulses: escape subjectivism and capture universal truths, provide an empirical foundation for knowledge statements, produce systematic, cumulative, and integrative knowledge-generating practices and knowledge descriptions and explanations (e.g., science), and so forth. The problem for positivism, and the reason I see it as tragic, is that subjectivism is inescapable. Knowledge depends on context, on point of view, on history, on meaning-making practices that are partially ineffable, and on levels of analysis that are incommensurable. Indeed, as emphasized by every philosopher since Kuhn (1962), science is a social institution, and what is regarded as sound, even what is regarded as true, is socially constructed.

Comment 65

This is not the appropriate place to engage in a deep debate about ‘positivism’ itself, but rather to address issues, raised by Carroll, perhaps prompted by positivism, about the matters in hand. I have already addressed these issues in my HCI Reflections (2010). I would, however, add here the following. Science is, indeed, a social institution, or perhaps better, a social (professional) community. What the social community (as in a discipline) considers sound is indeed socially constructed. The social construction, however, includes criteria for its soundness. Long and Dowell (1989) proposed a discipline conception for the HCI community. Dowell and Long (1989) also proposed a conception for HCI (Engineering) soundness. The two conceptions offer clear criteria by which their effectiveness can be judged, as illustrated by Hill’s and Wild’s papers (2010) and the Design Research Exemplars (Figure 8) of Salter (2010). Carroll ought to be delighted. However, whether or not this constitutes an ‘escape from subjectivism’ is left for him alone to judge.

My personal construction of this is that positivism belongs to that interesting category of wrong ideas we need to value. Positivism so overworries methodological issues that it produces results that are unproblematic but of little consequence. Still we are well advised to orient to positivist objectives. We should do so knowing that to take these objectives too seriously, too rigidly, will lead to paradigmatic dead ends. Throwing positivist cautions away can lead to empirical programmes for which we cannot know what methods were actually employed, what data were gathered, or what the results are really about.

Comment 66

Elsewhere, Carroll has strongly supported the need for empirical progress in HCI. Here, however, he seems to recognize the associated dangers – the same dangers which Long and Dowell’s (1989) HCI Discipline and Design Problem Conceptions are intended to combat. Design Rationale, as theory, needs comparable defences.

Long and Dowell, I think, became snarled in an intellectual trap of their own design in characterizing the three disciplinary models for HCI. They characterized models that no one followed, and that no one ever has followed. They laid down positivistic criteria for disciplinary models that I suspect cannot be satisfied, and that, in any case, do not accord with what anyone actually does or has done in the craft, science, and engineering of HCI.

Comment 67

Carroll’s claim here is patently false. The craft knowledge and practice model is consistent, albeit at a high level of description, with, for example, the development of the graphical user interface. The applied science knowledge and practice model is consistent, albeit at a high level of description, with, for example, Carroll’s own ‘design rationale as theory’ research. It is true that in 1989 there were no examples of the principles engineering model at any level of description. Exemplars of this model, however, are now available in the work of Stork (1999) and Cummaford (2007).

But they were in good company. On my side of the Atlantic, Allen Newell articulated an interestingly comparable disciplinary programme in his 1985 opening plenary address at the ACM CHI Conference (Newell and Card, 1985). Newell’s talk presented a vision of extending the early GOMS work into a much more comprehensive paradigm for science in HCI. He memorably said that psychology might be “driven out” of HCI in the future if it were not pursued in a quantitative modeling paradigm (aka, “hard science”). The talk provoked much controversy, discussion, and new research. It led to alternate proposals, modified proposals, replies and rejoinders (Carroll and Campbell, 1986; Newell and Card, 1986). Again, through the benefit of hindsight, we can see that Newell overstated the “hard science” threat. His programme was ignored, yet psychology continues to thrive in HCI. As I look back, I think that what Newell really wanted was a science of HCI in which psychology (and other cognitive sciences) could play a central role. Newell’s real worry, I think, was not that the psychology of HCI might take a qualitative turn. He was worried about the interdisciplinary power balance between computer science and its human science partners. I have elaborated this historical reflection in Carroll (2006).

In my view, Newell and Long’s contributions must be taken in historical context. They were addressing the still-current threat of methodological fragmentation. They had helped to found HCI in the 1970s, and were trying to ensure that HCI could continue to prosper as it had in the early 1980s. Both proposed to achieve this through normative disciplinary frameworks. And both went a bit too far in this regard.

Comment 68

Long and Dowell’s (1989) concern was not so much ‘methodological fragmentation’ as how to make HCI design knowledge and practice more effective (or ‘reliable’ (Dix, 2010) or ‘sound’ (Carroll, 2010)). Their frameworks or conceptions (of the HCI discipline and design problem) are intended to address this concern.

The conclusion I take away is that we should regard HCI as a sort of meta-discipline. I call it a community formed around the ever-expanding concept of usability (Carroll, 2009), because I think it is really just this shared pre-theoretic interest and commitment that causes HCI to cohere at all. HCI has no single disciplinary problem or specified set of practices, and certainly no single conception of effectiveness. Instead, the boundaries of HCI have expanded as the notion of usability became richer.

Comment 69

It is essential to distinguish the HCI community – a social and professional entity, from the HCI discipline – the knowledge and the practice of that community, although the two are related. The membership of the community, as indeed the scope of the discipline, may change over time. Either change might be by intent. Similarly, it might be possible to change, that is, to increase the reliability (Dix (2010) and Wild (2010)) and soundness (Carroll, 2010) or effectiveness (Long and Dowell, 1989) of HCI knowledge and practice. In this way, do disciplines progress, as well as grow. Such progress, however, requires some consensus among researchers, as to what the discipline is about; otherwise incrementation of the HCI discipline knowledge and practice would not be possible. ‘Usability’ has brought some consensus and may be sufficient to support future developments. Long and Dowell (1989), however, do not think so. They adopt the concept of ‘usability’, which they express as ‘user (resource) costs’ or ‘workload’; but add to it the concept of ‘task quality’ – how well the task is performed. Differences between actual and desired task quality and user costs, which together express effectiveness, constitute design problems, which HCI design knowledge and practice are developed to solve. If ‘usability’ can be the basis for shared ‘pre-theoretic interest and commitment’, so can ‘effectiveness’. Researchers can then choose the design problems to which their (or some other) knowledge and practice apply, in their efforts to increment them, as required by Newman (1994).

Usability was originally articulated naively in the slogan “easy to learn, easy to use”. The blunt simplicity of this conceptualization gave HCI an edgy and prominent identity in computing. It served to hold the field together, and to help it influence computer science and technology development more broadly and effectively. However, inside HCI the concept of usability has been reconstructed continually, and has become increasingly rich and intriguingly problematic. Usability now often subsumes qualities like fun, well-being, collective efficacy, aesthetic tension, enhanced creativity, support for human development, and many others. The trajectory of its core concept explains how and why the HCI community has continued to grow and diversify. It also explains why a priori frameworks, such as those articulated by Long and by Newell, tend to look dated almost before they are formulated. More importantly, a dynamic view of usability suggests that what we have seen for the past three decades may just continue. Perhaps usability will always develop as our ability to reach further toward it improves.

Comment 70

The same, of course, can be said of effectiveness – see Comment 69.

This picture of HCI as a diverse community orienting to a concept whose meaning changes through time is unsettling. It implies that the methodological fragmentation Long addressed in his 1989 Nottingham keynote is endemic to HCI, not so much a problem to be remedied, but a characteristic to be accepted and leveraged, or at least coped with.

Comment 71

Agreed. The distinction between ‘hard’ and ‘soft’ design problems (Dowell and Long, 1989, Figure 2) accepts and ‘copes’ with the differences between HCI discipline models.

In either case, I think it remains useful to try to articulate disciplinary models – models that are evidenced in current practices, models from areas neighboring HCI that arguably could address current challenges in HCI, if they were to be adopted, and perhaps even models that we just invent. Some of these models could be the pure types that Long and Dowell worked with, perhaps too rigid to be implemented, but useful as analytic tools to characterize the more hybrid forms that can actually be observed.

Comment 72

Maybe; but the work of Stork (1999) and Cummaford (2007) on the development of engineering design principles suggests otherwise.

Acknowledgements

This paper draws on several of my own previous meditations on the history and foundations of HCI: Carroll, 1997, 2002, 2006, 2009. I thank the editors for organizing this project. I especially thank Peter Wright and two anonymous reviewers for their cheerfully insightful deconstructions of my essay. I believe that moments of reflection on the contributions of those who have helped to lead us in the recent past are not only appropriate celebrations, but also directly useful for us toward making sense of what we have done, and what we are doing now. Finally, I want to acknowledge and thank John Long, with whom I had a very stimulating debate at various conferences and workshops during 1988–1993. These interactions were very helpful to me in motivating and focusing my own thinking and writing. Frankly, I think I benefited more, though John always seemed ready for another round. As far as I can remember and reconstruct, no minds were changed in these debates, but I recognize more clearly now that that is not necessarily the most important outcome of such a debate. The writing of this essay was supported in part by the Edward M. Frymoyer Chair Endowment.

References

Bentley, R., Hughes, J.A., Randall, D., Rodden, T., Sawyer, P., Shapiro, D., Sommerville, I., 1992. Ethnographically-informed systems design for air traffic control. In: Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (CSCW ’92), Toronto, Ontario, Canada, November 01–04, 1992. ACM, New York, NY, pp. 123–129. DOI: http://doi.acm.org/10.1145/143457.143470.

Bjerknes, G., Ehn, P., Kyng, M. (Eds.), 1987. Computers and Democracy – A Scandinavian Challenge. Avebury, Aldershot.

Blackwell, A.F., 2006. The reification of metaphor as a design tool. ACM Transactions on Computer–Human Interaction 13 (4), 490–530.

Bornat, R., Thimbleby, H., 1989. The life and times of Ded, text display. In: Long, J.B., Whitefield, A.D. (Eds.), Cognitive Ergonomics and Human–Computer Interaction. Cambridge University Press, Cambridge.

Brooks, F., 1975. The Mythical Man-Month. Addison-Wesley, Reading, MA.

Buckley, P., 1989. Expressing research findings to have a practical influence on design. In: Long, J.B., Whitefield, A.D. (Eds.), Cognitive Ergonomics and Human–Computer Interaction. Cambridge University Press, Cambridge.

Buxton, W., Baecker, R., Clark, W., Richardson, F., Sutherland, I., Sutherland, W., Henderson, A., 2005. Interaction at Lincoln Laboratory in the 1960s: looking forward – looking back. In: CHI ’05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA, April 02–07, 2005. ACM, New York, NY, pp. 1162–1167.

Card, S.K., Moran, T.P., Newell, A., 1983. The Psychology of Human–Computer Interaction. Erlbaum, Hillsdale, NJ.

Carroll, J.M., 1989. Feeding the interface eaters. In: Sutcliffe, A.G., Macaulay, L.A. (Eds.), People and Computers V. Cambridge University Press, Cambridge, pp. 35–48.

Carroll, J.M., 1997. Human–computer interaction: psychology as a science of design. Annual Review of Psychology 48, 61–83.

Carroll, J.M., 2000. Making Use: Scenario-Based Design of Human–Computer Interactions. MIT Press, Cambridge, MA.

Carroll, J.M., 2002. Human–computer interaction. In: Encyclopedia of Cognitive Science. Macmillan/Nature Publishing Group, London.

Carroll, J.M., 2003. Situated action in the zeitgeist of human–computer interaction. The Journal of the Learning Sciences 12 (2), 273–278.

Carroll, J.M., 2006. Soft versus hard: the essential tension. In: Galletta, D., Zhang, P. (Eds.), Human–Computer Interaction in Management Information Systems. Advances in Management Information Systems Series (Zwass, V., Series Ed.). M.E. Sharpe, Armonk, NY, pp. 424–432.

Carroll, J.M., 2009. Human computer interaction (HCI). Interaction-Design.org, <http://www.interaction-design.org/encyclopedia/human_computer_interaction_hci.html> (retrieved 19.05.09).

Carroll, J.M., Campbell, R.L., 1986. Softening up hard science: reply to Newell and Card. Human–Computer Interaction 2, 227–249.

Carroll, J.M., Campbell, R.L., 1989. Artifacts as psychological theories: the case of human–computer interaction. Behaviour and Information Technology 8, 247–256.

Carroll, J.M., Rosson, M.B., 2003. Design rationale as theory. In: Carroll, J.M. (Ed.), HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science. Morgan Kaufmann, San Francisco, pp. 431–461.

Diaper, D., 2002. Scenarios and task analysis. Interacting with Computers.

Dix, A.J., Harrison, M.D., 1987. Formalizing models of interaction in the design of a display. In: Proceedings of INTERACT ’87, Second IFIP Conference on Human–Computer Interaction. Elsevier Scientific, Amsterdam, pp. 409–413.

Dowell, J., Long, J., 1989. Toward a conception for an engineering discipline of human factors. Ergonomics 32, 1513–1535.

Grudin, J., 2005. Three faces of human–computer interaction. IEEE Annals of the History of Computing 27 (4), 46–62.

Hammond, N.V., Allinson, L.J., 1988. Development and evaluation of a CAL system for non-formal domains: the hitch-hikers guide to cognition. Computers and Education 12, 215–220.

Heath, C., Luff, P., 2000. Technology in Action. Cambridge University Press, Cambridge.

Hutchins, E., Hollan, J., Norman, D., 1985. Direct manipulation interfaces. Human–Computer Interaction 1, 311–338.

Johnson, J., Roberts, T.L., Verplank, W., Smith, D.C., Irby, C.H., Beard, M., Mackey, K., 1989. The Xerox Star: a retrospective. IEEE Computer 22 (9), 11–26, 28–29.

Kuhn, T., 1962. The Structure of Scientific Revolutions. University of Chicago Press, Chicago.

Latour, B., 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press, Cambridge, MA.

Latour, B., Woolgar, S., 1986. Laboratory Life: The Construction of Scientific Facts, second ed. Princeton University Press, Princeton, NJ.

Long, J.B., Dowell, J., 1989. Conceptions of the discipline of HCI: craft, applied science and engineering. In: Sutcliffe, A., Macaulay, L. (Eds.), People and Computers V, Proceedings of the Fifth Conference of the BCS HCI SIG. Cambridge University Press, Cambridge, pp. 9–32.

Myers, B.A., 1998. A brief history of human computer interaction technology. ACM Interactions 5 (2), 44–54.

Newell, A., Card, S., 1985. The prospects for psychological science in human–computer interaction. Human–Computer Interaction 1, 209–242.

Newell, A., Card, S., 1986. Straightening out softening up: response to Carroll and Campbell. Human–Computer Interaction 2, 251–267.

Nielsen, J., 1994. Usability Engineering. Morgan Kaufmann, San Francisco.

Norman, D.A., 1982. Steps toward a cognitive engineering: design rules based on analyses of human error. In: Proceedings of the 1982 Conference on Human Factors in Computing Systems, Gaithersburg, Maryland, United States, March 15–17.

Orr, J.E., 1996. Talking About Machines: An Ethnography of a Modern Job. Cornell University Press, Ithaca, NY.

Plouznikoff, N., Plouznikoff, A., Robert, J.-M., 2005. Object augmentation through ecological human–wearable computer interactions. In: IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob’2005), 22–24 August, pp. 159–164.

Rosson, M.B., Carroll, J.M., 2002. Usability Engineering: Scenario-Based Development of Human–Computer Interaction. Morgan Kaufmann, San Francisco.

Shneiderman, B., 1983. Direct manipulation: a step beyond programming languages. IEEE Computer 16 (8), 57–69.

Sturt, G., 1923. The Wheelwright’s Shop. Cambridge University Press, London.

Suchman, L.A., 1987. Plans and Situated Actions: The Problem of Human–Machine Communication. Cambridge University Press, New York.