This specialized course concentrates on evaluation methodologies for research and innovation. The course follows a user-centered design (UCD) methodology: at each stage, appropriate evaluation methods are introduced from a human perspective. The course addresses conventional interfaces (e.g., visualization) as well as frontier, less conventional ones (e.g., Augmented Reality, 3D User Interfaces).
The student taking this course will learn through experience to:
More importantly, through experience, the student will learn how to formulate research questions and hypotheses, how to build evaluations that test those hypotheses, and how to record and collect evidence, analyze it, derive knowledge from it, and report findings from evaluations.
Grading:
10% statistics assignment
30% participation in a user study at the lab
20% critique of a peer project
40% project
The course is project-based. In each class, students take turns moderating and taking part in participatory design. For the course, the student will:
Continuous evaluation serves as a research methodology in many fields of Computer Science. This lecture is intended to familiarize you with a range of methods applied when continuous evaluation is used in research, design, and innovation. The following is a list of the main textbooks that will be referenced throughout the lecture. You can borrow them from the university library, or come to Eduardo's office to browse them (make an appointment).
Foundations for Designing User-Centered Systems (Ritter, Baxter, Churchill) is a thorough reference for the UCD methodology. It dedicates a good portion of the book to understanding people: human factors, behavior, and cognition. The chapter on evaluation provides a good, general entry point to the different aspects of evaluations. In the lecture, this book serves as the reference for UCD and for considerations about user capabilities.
Discovering Statistics Using R (Andy Field, Jeremy Miles, Zoë Field) is our main reference for the implementation of the statistical methods used to analyze evaluation results. It is a thorough reference with example implementations in R, covering the major aspects of research and analysis of evaluation results introduced in this lecture. It is a recommended read that will be used heavily in weeks 3-6, and it will surely accompany students in their careers beyond the lecture.
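To give a flavor of the kind of analysis the book covers (and that weeks 3-6 will exercise), here is a minimal sketch in R; the data frame, interface variants, and completion times below are invented purely for illustration:

```r
# Hypothetical example: compare task completion times (seconds)
# between two interface variants from a small user study.
times <- data.frame(
  variant = rep(c("baseline", "prototype"), each = 6),
  seconds = c(41, 38, 45, 50, 39, 44,   # baseline participants
              33, 36, 30, 39, 35, 32)   # prototype participants
)

# Descriptive statistics per group
aggregate(seconds ~ variant, data = times, FUN = mean)
aggregate(seconds ~ variant, data = times, FUN = sd)

# Independent-samples t-test (Welch's variant, R's default)
t.test(seconds ~ variant, data = times)
```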
Quantifying the User Experience: Practical Statistics for User Research (Jeff Sauro, James R. Lewis)
Human-Computer Interaction: An Empirical Research Perspective (I. Scott MacKenzie)
Doing Psychology Experiments (David Martin)
How to Design and Report Experiments (Andy Field, Graham Hole)
Research Methods in Human-Computer Interaction (Jonathan Lazar, Jinjuan Heidi Feng, Harry Hochheiser)
Human-Computer Interaction (Alan Dix, Janet Finlay, Gregory Abowd, Russell Beale)
Information Visualization: Perception for Design, 3rd Ed. (Colin Ware)
The Design of Everyday Things (Donald Norman)
Emotional Design: Why We Love (or Hate) Everyday Things (Donald Norman)
Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton)
Sketching User Experiences: The Workbook (Saul Greenberg, Sheelagh Carpendale, Nicolai Marquardt, Bill Buxton)
This course is not an introduction to HCI or to visualization. For general courses on the topics covered, visit:
This course will not deal with web usability, as that topic is extensively covered by Keith Andrews' course on Information Architecture and Web Usability. Instead, this course concentrates on providing a solid background on the different evaluation methods as applied in a UCD continuous evaluation process. This course is no replacement for an advanced statistics lecture: descriptive and inferential statistical methods will be used to guide the preparation of data, the choice and execution of appropriate statistical tests, and the interpretation and reporting of results.
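To make that workflow concrete, here is a minimal sketch in R of the steps just named; all ratings, group names, and the choice of test are hypothetical and for illustration only:

```r
# Hypothetical example: Likert-style satisfaction ratings (1-7)
# from two independent groups of study participants.
ratings_a <- c(5, 6, 4, 7, 5, 6, 5, 4)
ratings_b <- c(3, 4, 4, 5, 3, 2, 4, 3)

# 1. Descriptive statistics
summary(ratings_a); summary(ratings_b)

# 2. Check the normality assumption before choosing a test
shapiro.test(ratings_a)
shapiro.test(ratings_b)

# 3. Ordinal data with small samples: a non-parametric test
#    (Wilcoxon rank-sum / Mann-Whitney U) is the safer choice
wilcox.test(ratings_a, ratings_b, exact = FALSE)

# 4. Report a simple effect measure alongside the p-value,
#    e.g. the difference in medians
median(ratings_a) - median(ratings_b)
```

The non-parametric test is chosen here because ordinal ratings rarely satisfy the normality assumption; for interval data that passes the checks in step 2, a t-test would be the usual choice.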