706.712: Evaluation Methodology

This specialized course concentrates on evaluation methodologies for research and innovation. The course applies a user-centred design (UCD) methodology: at each stage, appropriate evaluation methods are introduced from a human perspective. The course addresses conventional interfaces (e.g., visualization) as well as frontier, less conventional ones (e.g., Augmented Reality, 3D User Interfaces).

Through hands-on experience, the student taking this course will learn how to formulate research questions and hypotheses, how to build evaluations to test those hypotheses, and how to record and collect evidence, analyze it, derive knowledge from it, and report the findings.

NEWS

27.09.2020

This class will be held online, via WebEx. After enrolling, you will find the material on the Teach Center.

28.09.2020

This class involves participating in and carrying out studies. Studies will take place at the HCI Center at TUG, following hygiene regulations regarding COVID-19. Students should be aware that they will have to (i) participate in a study and (ii) carry out a study.

Class Times

Monday, 12:00 - 14:00

Timetable

WEEK 1: Introduction 05.10.2020
  • General information about experiments and evaluations
  • Experimental research, variables and measurement, research questions and hypotheses
  • Sampling methods: between/within, qualitative, quantitative, performance measures
  • Reporting methods
WEEK 2: Measurement Methods in (Human) Computing Systems 12.10.2020
  • Performance, time to complete
  • Human attention: eye-tracking, pupil size, fixations, saccades
  • Workload: heart rate, heart-rate variability
  • Precision, recall, F-measure, RMSE (sketched in R after the timetable)
WEEK 3: Experiments and Research 19.10.2020
  • During this time you have to participate in a user study and write a report about it.
WEEK 4: Project Proposal 09.11.2020
WEEK 5: Models, Assumptions and Relationships 16.11.2020
  • Summarizing data, descriptive statistics, frequencies
  • Central tendency, dispersion
  • Independent errors, homogeneity of variance
  • Normal distribution, multicollinearity
  • Covariance, Pearson's correlation coefficient (sketched in R after the timetable)
WEEK 6: Comparing means 23.11.2020
  • Modeling and the general linear model
  • t-test, Mann-Whitney U (sketched in R after the timetable)
WEEK 7: Comparing means II 30.11.2020
  • Between-groups and repeated-measures designs
  • ANOVA, Friedman, Kruskal-Wallis
  • Pairwise comparisons (sketched in R after the timetable)
WEEK 8: Project Time 07.12.2020
  • This time is reserved for groups to carry out studies
WEEK 9: Project Presentations 14.12.2020
  • Present the current status of your project. At this stage, at least pilot studies should have been carried out.
WEEK 10: Putting it all together 11.01.2021
  • Wrap-up
  • Choosing evaluation methods
  • Ethics, reporting
WEEK 11: Qualitative Data Analysis 18.01.2021
WEEK 12: Project Presentations II 25.01.2021
  • Final project presentation
OVERFLOW: Modern approaches
  • Problems with null-hypothesis significance testing (NHST)
  • Effect sizes, Bayesian approaches (effect size sketched in R after the timetable)
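
The following sketches are not part of the official course material: they are minimal R examples (R being the language of the main statistics reference, Discovering Statistics Using R) on invented data, intended only to preview the techniques named in the timetable. First, the Week 2 accuracy metrics: precision, recall, F-measure, and RMSE.

    # Hypothetical ground truth and predictions for a binary task
    truth <- c(1, 1, 0, 1, 0, 0, 1, 0, 1, 1)
    pred  <- c(1, 0, 0, 1, 0, 1, 1, 0, 1, 0)

    tp <- sum(pred == 1 & truth == 1)   # true positives
    fp <- sum(pred == 1 & truth == 0)   # false positives
    fn <- sum(pred == 0 & truth == 1)   # false negatives

    precision <- tp / (tp + fp)
    recall    <- tp / (tp + fn)
    f_measure <- 2 * precision * recall / (precision + recall)

    # RMSE for a continuous measure, e.g. predicted vs. measured task time (s)
    actual    <- c(12.1, 15.3, 9.8, 11.4)
    predicted <- c(11.5, 16.0, 10.2, 12.0)
    rmse <- sqrt(mean((predicted - actual)^2))

    round(c(precision = precision, recall = recall,
            F = f_measure, RMSE = rmse), 3)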
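
For Week 5, a sketch of descriptive statistics, a normality check, and covariance/Pearson's correlation; the completion times and experience values are invented for illustration.

    # Hypothetical task completion times (seconds) from a small study
    times <- c(34, 29, 41, 35, 30, 52, 33, 38)

    mean(times); median(times)          # central tendency
    sd(times); var(times); IQR(times)   # dispersion

    # Normality check before relying on parametric tests
    shapiro.test(times)

    # Covariance and Pearson's correlation, e.g. experience vs. completion time
    experience <- c(5, 7, 2, 4, 6, 1, 5, 3)
    cov(experience, times)
    cor.test(experience, times, method = "pearson")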
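
For Week 6, the two mean-comparison tests as they appear in base R, on hypothetical data from two interface conditions.

    # Hypothetical completion times under two conditions
    a <- c(31, 28, 35, 40, 33, 29, 36, 34)   # condition A
    b <- c(38, 42, 35, 44, 39, 41, 37, 45)   # condition B

    t.test(a, b)                 # independent-samples t-test (Welch by default)
    t.test(a, b, paired = TRUE)  # paired version for within-subjects designs
    wilcox.test(a, b)            # Mann-Whitney U (Wilcoxon rank-sum in R)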
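
For Week 7, between-groups and repeated-measures comparisons across three hypothetical conditions, plus Bonferroni-corrected pairwise comparisons; the long-format data frame is invented.

    # Hypothetical long-format data: one score per participant per condition
    d <- data.frame(
      score     = c(5, 6, 4, 7, 8, 9, 7, 8, 3, 4, 5, 4),
      condition = factor(rep(c("A", "B", "C"), each = 4)),
      subject   = factor(rep(1:4, times = 3))
    )

    summary(aov(score ~ condition, data = d))   # one-way between-groups ANOVA
    kruskal.test(score ~ condition, data = d)   # non-parametric, between groups
    friedman.test(score ~ condition | subject,  # non-parametric, repeated measures
                  data = d)

    # Pairwise comparisons with Bonferroni correction
    pairwise.t.test(d$score, d$condition, p.adjust.method = "bonferroni")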
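
For the overflow topic, an effect size (Cohen's d) computed by hand on the same kind of invented two-group data; Bayesian approaches require additional packages and are left to the lecture.

    # Cohen's d for two independent groups, computed by hand
    a <- c(31, 28, 35, 40, 33, 29, 36, 34)
    b <- c(38, 42, 35, 44, 39, 41, 37, 45)
    pooled_sd <- sqrt(((length(a) - 1) * var(a) + (length(b) - 1) * var(b)) /
                      (length(a) + length(b) - 2))
    (mean(a) - mean(b)) / pooled_sd   # d of about -1.9 on these invented numbers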

Prof. Eduardo Veas:

Eduardo is Professor for Intelligent and Adaptive User Interfaces at the Institute of Interactive Systems and Data Science, Graz University of Technology (ISDS, TU Graz), and Research Manager of the Knowledge Visualisation group at Know-Center GmbH. His interests lie in the field of intelligent interactive systems, in particular human-aware computing, cognitive aspects of data analytics, and virtual and augmented reality interfaces. Eduardo obtained his PhD in Computer Science in 2012 from Graz University of Technology and his MSc in Information Science and Technology from Osaka University, and holds a degree in Software Engineering from the National University of Technology, Argentina.

Granit Luzhnica, PhD:

Granit is a post-doctoral researcher in the Intelligent User Interfaces group. His research interests lie in the areas of haptics and wearable computing. Granit obtained his PhD in Computer Science in 2019 from Graz University of Technology.

Grading

  • 10% statistics assignment
  • 30% participation in user studies at the lab
  • 20% critique of a peer project
  • 40% project

The course is project-based. In each class, students take turns moderating and taking part in participatory design. For the course, each student will:

  • choose a project topic to design and evaluate from concept to prototype
  • choose a project topic to moderate (play the role of user for peer projects)

Readings

Continuous evaluation serves as a research methodology in many fields of Computer Science. This lecture is intended to familiarize you with a range of methods applied when continuous evaluation is used in research, design and innovation. The following is a list of the main textbooks referenced throughout the lecture. You can borrow them from the university library, or come to Eduardo's office to browse them (make an appointment).

User Centric Design and Human Factors

Foundations for Designing User Centred Systems (Ritter, Baxter, Churchill) is a thorough reference for the UCD methodology. It dedicates a good portion of the book to understanding people: human factors, behavior and cognition. The chapter on evaluation provides a good, general entry point to the different aspects of evaluations. In the lecture, this book serves as the reference for UCD, but also for considerations about user capabilities.

Formal Evaluation: Statistics for Analysis

Discovering Statistics Using R (Andy Field, Jeremy Miles, Zoe Field) is our main reference for the implementation of the statistical methods used to analyze evaluation results. It is a thorough reference with example implementations in R, covering the major aspects of research and analysis of evaluation results introduced in this lecture. It is a recommended read, which will be heavily used in weeks 3-6, and will surely accompany students beyond the lecture in their careers.

Quantifying the User Experience: Practical Statistics for User Research (Jeff Sauro and James R Lewis)

Human-Computer Interaction: An Empirical Research Perspective (I. Scott MacKenzie)

Research Methodology

Doing Psychology Experiments. David Martin

How to Design and Report Experiments. Andy Field, Graham Hole

Research Methods in Human-Computer Interaction (Lazar, Feng, Hochheiser)

Human Computer Interaction (Dix, A., Finlay, J., Abowd, G., and Beale, R. )

Information Visualization: Perception for Design, 3rd Ed. (Colin Ware)

Design and Critique

The Design of Everyday Things (Donald Norman)

Emotional Design: Why We Love (or Hate) Everyday Things (Donald Norman)

Prototyping

Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton)

Sketching User Experiences: The Workbook (Saul Greenberg, Sheelagh Carpendale, Nicolai Marquardt, Bill Buxton)

What this course is NOT

This course is not an introduction to HCI, nor to visualization; general courses on those topics are offered separately.

This course will not deal with web usability, as that topic is extensively covered by Keith Andrews' course on Information Architecture and Web Usability. Instead, this course concentrates on providing a solid background on the different evaluation methods applied in a UCD continuous evaluation process. Nor is this course a replacement for an advanced statistics lecture: descriptive and inferential statistical methods will be used to guide the preparation of data, the performance of appropriate statistical tests, and the interpretation and reporting of results.

Who should attend this course

This course is particularly suited for students (MSc) and researchers (PhD) investigating or developing novel metaphors for interacting with computers and machines. It is also suited for researchers trying to understand user behaviour and how it is influenced by technology. This includes psychology majors as well as educators. Students in the final year of their Bachelor's may benefit from this course too.