This specialized course concentrates on evaluation methodologies for research and innovation on interactive systems. The course applies a participative methodology based on project prototypes. A number of novel projects have been selected to address individual aspects of the empirical methodology. Projects chosen for WS2015/16 include: a glove for gesture input and haptic feedback, augmented reality visualizations, recommendation of visualizations, and interactive topic analysis.
Students taking this course will learn through experience to formulate, design, and validate technology by conducting and participating in a critical evaluation process. The skills targeted by this lecture will let you rethink and plan research, as well as product development processes, in terms of evaluations and their results.
30% being present and active in the lecture
40% evaluator credits
30% participant credits
The course is project based. In each class, students take turns moderating and taking part in evaluations.
Continuous evaluation serves as a research methodology in many fields of Computer Science. This lecture is intended to familiarize you with a range of methods applied when continuous evaluation is used in research, design, and innovation. The following is a list of the main textbooks that will be referenced throughout the lecture. You can borrow them from the university library, or come to Eduardo's office to browse them (make an appointment).
Foundations for Designing User Centred Systems (Ritter, Baxter, Churchill) is a thorough reference for the UCD methodology. It dedicates a good portion of the book to understanding people: human factors, behavior, and cognition. The chapter on evaluation provides a good, general entry point to the different aspects of evaluations. In the lecture, this book serves as a reference for UCD and for considerations about user capabilities. Link
Discovering Statistics Using R (Andy Field, Jeremy Miles, Zoe Field) is our main reference for the statistical methods used to analyse evaluation results. It is a thorough reference with example implementations in R, covering the major aspects of research and analysis of evaluation results introduced in this lecture. It is a recommended read that will be used heavily in weeks 3-6, and will surely accompany students in their careers beyond the lecture.
Quantifying the User Experience: Practical Statistics for User Research (Jeff Sauro and James R Lewis)
Human-Computer Interaction: An Empirical Research Perspective (I. Scott MacKenzie)
Doing Psychology Experiments (David Martin)
How to Design and Report Experiments (Andy Field, Graham Hole)
Research Methods in Human-Computer Interaction (Lazar, Jinjuan, Hochheiser)
Human Computer Interaction (Dix, A., Finlay, J., Abowd, G., and Beale, R.)
Information Visualization 3rd Ed.: Perception for Design (Colin Ware)
The Design of Everyday Things (Donald Norman)
Emotional Design: Why We Love (or Hate) Everyday Things (Donald Norman)
Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton)
Sketching User Experiences: The Workbook (Saul Greenberg, Sheelagh Carpendale, Nicolai Marquardt, Bill Buxton)
This course is not an introduction to HCI nor to visualization. For general courses on the topics covered, visit:
This course will not deal with web usability, as that topic is extensively covered by Keith Andrews' course on Information Architecture and Web Usability. Instead, the course concentrates on providing a solid background on the different evaluation methods as applied in a UCD continuous evaluation process. This course is no replacement for an advanced statistics lecture. Descriptive and inferential statistics methods will be used to guide the preparation of data, the selection and performance of appropriate statistical tests, their interpretation, and the reporting of results.
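As a taste of the kind of inferential analysis covered (the lecture itself works in R, following Field et al.), here is a minimal Python sketch of Welch's t-test comparing task-completion times between two prototype variants. The data and variant names are made up for illustration:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two
    independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical task-completion times (seconds) for two prototype variants
variant_a = [12.1, 11.8, 13.0, 12.5, 12.2]
variant_b = [10.9, 11.2, 10.5, 11.0, 10.8]

t, df = welch_t(variant_a, variant_b)
print(f"t = {t:.2f}, df = {df:.1f}")
```

Looking up the resulting t against the t-distribution (via a table, R's `pt()`, or `scipy.stats`) then gives the p-value; the textbooks above cover when such a test is appropriate in the first place.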