Quality in Interaction
The quality of an interface can be evaluated for two separate purposes: first, to improve the design, and second, to accept the design.
In this type of evaluation, the research perspectives of Human Factors (HF) and Cognitive Ergonomics (CE) have become prevalent. In these disciplines, the research problem is formulated so that a controlled experiment can be conducted in which the different interface types are the independent variables. A correlation between certain interfaces and good human performance is identified, and a conclusion about the quality of the interface is drawn.
Practical Challenges in Evaluating Usability
There are four practical challenges that current approaches to usability studies face.
Task Analysis
Usability evaluation starts with acquiring knowledge about the domain and context for which the solution is designed. This is done to understand what kinds of tasks the application is supposed to support, and what results the tasks are supposed to produce, that is to say, when a task has been conducted successfully.
The task is defined as a goal and the steps leading to achieving it. The end result of the task analysis is a normative description of what users do in order to reach the goal.
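To make this concrete, a normative task description can be represented as a goal plus the ordered steps expected to reach it. The TypeScript sketch below is only an illustration of that structure; the type names and the example task are hypothetical, not taken from any particular task-analysis notation.

```typescript
// A minimal sketch of a normative task description: a goal plus the
// ordered steps expected to lead to it. All names are illustrative.
interface TaskStep {
  action: string;          // what the user is expected to do
  expectedResult: string;  // observable outcome marking the step's success
}

interface TaskDescription {
  goal: string;            // the end result the task should produce
  steps: TaskStep[];       // normative sequence leading to the goal
}

// Hypothetical example task for a control-room interface.
const acknowledgeAlarm: TaskDescription = {
  goal: "Acknowledge and diagnose an active process alarm",
  steps: [
    { action: "Open the alarm list", expectedResult: "Alarm list is visible" },
    { action: "Select the highest-priority alarm", expectedResult: "Alarm details are shown" },
    { action: "Acknowledge the alarm", expectedResult: "Alarm state changes to acknowledged" },
  ],
};
```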
While this approach is tempting for interaction design purposes, we see two negative consequences in these types of task analysis.
Problems
- When task analysis is conducted in a normative way, it takes the current way of carrying out the task with the current tools as its starting point. This means that the task analysis cannot reach the functionality of the new tool, nor the essence of the new task in which the tool will be used.
- As the model of the task is sequential, and thus situation-specific, the results of the analysis can only be generalized intuitively. In the end, the evaluators cannot know how well the evaluation has covered the future user profile of the new tool.
Data Collection Methods
Data collection methods determine what kind of information about the usage activity is available for the researchers.
The typical methods of gathering user-system interaction data include observations, verbal protocols, software logging, eye-tracking, and subjective evaluations.
There are several characteristics of usage activity that cannot be revealed with the above-mentioned methods. For example, the observation method cannot go deeply into why something is happening, because it typically concentrates on describing what is happening in the test situation.
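Software logging illustrates the point: it reduces the usage activity to a stream of timestamped events, which supports reconstructing what happened but not, by itself, why. The sketch below assumes no particular logging framework; all names are illustrative.

```typescript
// Sketch of software logging: each user-system interaction is stored
// as a timestamped event. The log can answer "what happened and when",
// but the user's reasons never appear in the data.
interface InteractionEvent {
  timestamp: number;  // milliseconds since the test session started
  type: string;       // e.g. "click", "keypress", "screen-change"
  target: string;     // the interface element involved
}

class InteractionLog {
  private readonly start = Date.now();
  private events: InteractionEvent[] = [];

  record(type: string, target: string): void {
    this.events.push({ timestamp: Date.now() - this.start, type, target });
  }

  dump(): InteractionEvent[] {
    return [...this.events];  // copy, so callers cannot mutate the log
  }
}
```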
Usability Measures
The usability measures that are used in the test situation are a key issue whenever a usability evaluation is conducted. The selection of measures is connected to task analysis and the selection of data collection methods.
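Two conventional measures, in the spirit of the ISO 9241-11 notions of effectiveness and efficiency, are the task completion rate and the mean time on task. The sketch below computes them from hypothetical trial records; the field names are assumptions for illustration.

```typescript
// Effectiveness as task completion rate and efficiency as mean time on
// task, computed from per-trial records. Assumes a non-empty trial set
// with at least one completed trial.
interface Trial {
  completed: boolean;       // did the participant reach the task goal?
  durationSeconds: number;  // time from task start to task end
}

function effectiveness(trials: Trial[]): number {
  return trials.filter(t => t.completed).length / trials.length;
}

function meanTimeOnTask(trials: Trial[]): number {
  const done = trials.filter(t => t.completed);
  return done.reduce((sum, t) => sum + t.durationSeconds, 0) / done.length;
}
```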
Inferences Concerning the Interface
In a traditional usability evaluation, the outcome of the test is a prescriptive assessment of the system: a list of usability problems and possibly also a set of correction proposals. The task description created at the beginning of the evaluation is used as a reference. This type of assessment of the usability of the interface reveals whether the new tool can replace the existing one, but it does not tell how the new tool will shape human activity and create new possibilities for acting.
Systems Usability
The concept of systems usability has been developed in empirical studies of work in complex industrial environments.
Theoretical Bases for the Concept
Tools with high systems usability are those that promote the development of good work performance.
Three concepts are used to define good work performance: activity, core task, and practice.
Functions of a Complex System User Interface
Vygotsky made the distinction between two functions of artifacts: the instrumental and the psychological tool.
A tool is a basic concept of activity theory, and a medium is a basic concept of media theory.
Rückriem distinguishes three functions of the tool as a medium.
- Instrumental function: relates to the aspects of effectiveness and efficiency.
- Psychological function: refers to the tool's ability to support human use.
- Communicative function: relates to how meaningful the tool is in the particular work.
We can make use of this definition of the three functions and claim that a tool with high systems usability is able to fulfill all three of these functions in actions within an activity system. The prevalent methods of evaluating the usability of an interface concentrate mainly on the first of these, the instrumental function.
Activity System
In an activity system, tools mediate the relationship between the subject and the object, rules and norms mediate the relationship between the subject and the community, and the division of labor mediates the relationship between the community and the object. These relationships include tensions that become overt in various disturbances and problems in the system.
How to Evaluate Systems Usability
The core-task analysis theory and approach has been used in the construction of an evaluation method for the validation of complex information and control systems. The method has been labeled the CASU (Contextual Assessment of Systems Usability) method. The CASU method was developed for use in the integrated system validation of nuclear power control room modifications.
Integrated system validation is an evaluation using different types of performance-based evaluations to ensure that the design is consistent with performance requirements and acceptably supports safe operation of the plant.
The essence of the CASU method:
- The modeling phase outlines the basis for the evaluation by producing a reference.
- The data collection phase, in which the actual simulator run is observed and the video and interview data are collected.
- The analysis phase, which aims at taking different perspectives on the collected data.
- The assessment of the interface, which is made by combining three points of view: process measures, the tools' ability to promote appropriate work practice, and interface quality.
Details of the Grounded Analysis
* Literature Involvement
* Theoretical Sampling
* Interviewing Procedure
* Coding Procedure and Style
* Tools Reporting Style
* Validation
Characterizations of UCD (User-Centered Design)
ISO 13407
ISO 13407 starts by defining that "Human-centered design is an approach to interactive system development that focuses specifically on making systems usable." The objective of the standard is to provide "guidance on human-centered design activities throughout the life cycle of computer-based interactive systems." The standard describes user-centered design from four different aspects:
- Rationale for UCD: describes the benefits that usable systems provide.
- Planning UCD: provides guidance in fitting user-centered design activities into the overall system-development process.
- Principles of UCD: identifies four general principles (user involvement, allocation of function, iteration, multi-disciplinary design).
- Activities of UCD: is the description of user-centered design activities.
The descriptions of the principles and activities are analyzed from two different viewpoints:
- Presentation: analyzed to understand how the principles and activities are described.
- Contents: analyzed to find places for refinements in the UCD substance.
The KESSU 2.2 Model
The activity descriptions differ more or less from those of ISO 13407, depending on the activity.
The differences are a revised set of activities and visually illustrated outcomes.
Remote Usability Evaluation
The relevant dimensions we have identified for analyzing the different methods of remote usability evaluation are:
- The type of interaction between the user and the evaluator
* Remote Observation
* Remote Questionnaires
* Critical Incidents Reported by the User
* Automatic Data Collection
- The platform used for the interaction
* Desktop applications
* Vocal applications
* Mobile applications
- The techniques used for collecting information about the users and their behavior
* Server-side logging
* Proxy-based logging
* Client-side logging (a sketch is given at the end of this section)
* Eye-trackers
* Webcam/audio recorders or microphones
* Sensors
- The type of application considered, in terms of implementation environment (Web, Java-based, .NET, etc.)
- The type of the evaluation results
* Task-related information
* Qualitative information
* Presentation-related data
* Quantitative cognitive-physiological information
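As an illustration of the client-side logging technique listed above, the sketch below captures click events in the browser and periodically flushes them to a collection endpoint. The endpoint URL "/log" and the flush interval are hypothetical assumptions; real remote-evaluation tools define their own protocols.

```typescript
// Client-side logging sketch: capture interaction events in the page
// and send them to the evaluator's server. "/log" is a hypothetical
// collection endpoint.
interface RemoteEvent {
  timestamp: number;
  type: string;
  target: string;
}

const buffer: RemoteEvent[] = [];

document.addEventListener("click", (e) => {
  const element = e.target as HTMLElement;
  buffer.push({ timestamp: Date.now(), type: "click", target: element.tagName });
});

// Flush the buffer every five seconds; sendBeacon delivers the data
// even if the page is being unloaded.
setInterval(() => {
  if (buffer.length > 0) {
    navigator.sendBeacon("/log", JSON.stringify(buffer.splice(0)));
  }
}, 5000);
```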