Preece et al. Information

Observing users for tomorrow's session.
Preece, Rogers & Sharp - some key points on how to observe:
 * Observation in usability tends to be objective, from the outside. The observer watches and analyzes what happens.
 * Combinations of video, audio and paper records, data logging, and diaries can be used to collect observation data. (p. 386)
 * Analyzing video and data logs can be difficult because of the sheer volume of data. It is important to have clearly specified questions to guide the process and also to access appropriate tools. (p. 386)
 * Another solution to the 'think aloud' technique is having two people work together so that they talk to each other. Working with another person is often more natural and revealing because they talk in order to help each other. (p. 368)
 * Decide how to record events, i.e. audio, video, notes, or a combination of all three. (p. 369)
 * Be prepared to go through your notes and other records as soon as possible after each evaluation session to flesh out detail and check ambiguities with other observers or with the people being observed. This should be done routinely, as human memory is unreliable. A basic rule is 24 hours, but sooner is better. (p. 369)
 * As you make and review your notes, try to highlight and separate personal opinion from what happens. (p. 369)
 * Consider working as a team. This can have several benefits; for instance, you can compare your observations. Alternatively, you can agree to focus on different people or different parts of the context. Working as a team is also likely to generate more reliable data because you can compare notes among different evaluators. (p. 370)
 * Consider checking your notes with an informant or members of the group to ensure that you are understanding what is happening and that you are making good interpretations.
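The "decide how to record events" and "review within 24 hours" points could be supported by even a very simple timestamped note log that keeps facts apart from observer opinions. A minimal sketch (the `ObservationLog` class and the example entries are hypothetical, not anything from Preece et al.):

```python
from datetime import datetime

class ObservationLog:
    """Hypothetical timestamped log for an observation session.
    Tags each entry as 'fact' (what happened) or 'opinion'
    (observer interpretation) so the two stay separable at review time."""

    def __init__(self, session_name):
        self.session_name = session_name
        self.entries = []  # list of (timestamp, kind, text)

    def note(self, text, kind="fact"):
        # kind: "fact" for observed events, "opinion" for interpretations
        self.entries.append((datetime.now(), kind, text))

    def facts(self):
        return [e for e in self.entries if e[1] == "fact"]

    def opinions(self):
        return [e for e in self.entries if e[1] == "opinion"]

# Example use during a session
log = ObservationLog("pilot session 1")
log.note("User clicked Record, then paused ~10 s before speaking")
log.note("Seemed hesitant about the microphone icon", kind="opinion")
print(len(log.facts()), len(log.opinions()))  # one of each
```

Even this much makes the post-session review easier: you can walk the facts in time order first, then decide which opinions still hold.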

** NEW! - November 28, 2010 ** Chapter 18 from RWDG&U (or whatever it is!) has some useful information for this stage of design:


 * http://www.usability.gov/pdfs/chapter18.pdf


 * I've added some more quotes/notes to this page - we don't need it all right now, but it should be useful to have the references all in one spot when we go to do the final write-up.

Some Preece, Rogers and Sharp quotes & notes:

p. 586 - **Why evaluate?** "evaluation is needed to check that users can use the product and that they like it"; "users look for much more than just a usable system, they look for a pleasing and engaging experience"; "designers get feedback about their early design ideas; major problems are fixed before the product goes on sale; designers focus on real problems rather than debating what each other likes or dislikes about the product"

p. 586 (bottom) - **What to Evaluate** - there is a "range of features that evaluators must be able to evaluate." Points gleaned from p. 587:
 * Is the application easy for users to use and learn?
 * Is navigation through the application straightforward and well supported?
 * Is the simple design of the screens attractive to users?

p. 589 - **When to Evaluate** - this type of evaluation is formative - evaluation of a working prototype

p. 590 - **Evaluation approaches and methods**

p. 591 - usability testing:
 * "to ensure consistency in navigation structure, use of terms, and how the system responds to the user"
 * "involves measuring typical users' performance on typical tasks"
 * "generally done by noting the number and kinds of errors that the users make and recording the time that it takes them to complete the task"
 * "as the users perform these tasks, they are watched and recorded on video and their interactions with the software are recorded"
 * "user satisfaction questionnaires and interviews are also used to elicit users' opinions"
 * "the test environment and the format of the test is controlled by the evaluator"
 * "Quantifying users' performance is a dominant theme in usability testing"
 * "Optimal performance levels and minimal levels of acceptance are generally specified and current levels are noted. Changes in the design can then be implemented. This is called 'usability engineering'."

 * **Chapter 13 - Evaluation Framework**
 * **DECIDE Framework:**

p. 626 - "Well-planned evaluations are driven by //goals// which aim to seek answers to clear //questions//, which may be stated explicitly, upfront, as in usability testing"

p. 626 - **Determine the goals** - check the user interface and navigation pathways

p. 627 - **Explore the questions**
 * Is the application easy to navigate?
 * Are the buttons clear and consistent?
 * Are users able to complete tasks in an acceptable amount of time?
 * Are users able to successfully navigate through tasks?
 * Do users seem to enjoy using the application?

p. 628 - **Choose the approach and methods**
 * observation
 * questionnaire
 * interview
 * informal discussions

"Each type of data tells the story from a different point of view. Together these perspectives give a broad picture of how well the design meets the usability and user experience goals that were identified during requirements gathering" (pp. 628-629).

p. 630 - Don't think we need to worry about **identifying the practical issues or ethical issues** - it is what it is.


 * **Chapter 14 - Usability Testing**

p. 646 - "Usability testing is an approach that emphasizes the property of being usable, i.e. it is the product that is being tested rather than the user"

"The goal is to test whether the product being developed is usable by the intended user population to achieve the tasks for which it was designed" (Dumas and Redish, 1999, as cited in Preece et al., 2007).

"Key components are the user test and the user satisfaction questionnaire." "The user test measures human performance on specific tasks." "Examples of tasks include reading different typefaces, navigating through different menu types, and information searching." "The user satisfaction questionnaire is used to find out how users actually feel about using the product, through asking them to rate it along a number of scales, after interacting with it." "The combined measures are analyzed to determine if the design is efficient and effective." "Quantitative performance measures are obtained during the tests that produce the following types of data" (Wixon and Wilson, 1997, as cited in Preece et al.):
 * time to complete a task
 * number and type of errors per task
 * number of users making a particular error
 * number of users completing a task successfully

p. 647 - "It is considered that 5-12 users is an acceptable number to test in a usability study."
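All four of those data types fall out of simple per-attempt records. A sketch, assuming a hypothetical list of dicts with one entry per user per task (the task name, error codes, and numbers are made up for illustration, not real results):

```python
# One record per user per task (illustrative data only)
attempts = [
    {"user": "A", "task": "add_audio", "seconds": 95,
     "errors": ["wrong_menu"], "completed": True},
    {"user": "B", "task": "add_audio", "seconds": 140,
     "errors": ["wrong_menu", "wrong_menu"], "completed": True},
    {"user": "C", "task": "add_audio", "seconds": 210,
     "errors": [], "completed": False},
]

task = "add_audio"
rows = [a for a in attempts if a["task"] == task]

# 1. time to complete the task (mean over users who completed it)
done = [a for a in rows if a["completed"]]
mean_time = sum(a["seconds"] for a in done) / len(done)

# 2. number (and type) of errors per task, per user
errors_per_user = {a["user"]: len(a["errors"]) for a in rows}

# 3. number of users making a particular error
users_with_wrong_menu = sum(1 for a in rows if "wrong_menu" in a["errors"])

# 4. number of users completing the task successfully
completions = sum(a["completed"] for a in rows)

print(mean_time, errors_per_user, users_with_wrong_menu, completions)
```

Nothing fancy, but recording attempts in this shape during the Dec 6 session would let us report the Wixon and Wilson measures directly.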

"Feedback is about sending back information about what action has been done and what has been accomplished, allowing the person to continue with the activity." (p 31)

We should consider what would indicate success for our user. For example, if they successfully added audio to their story, would there be an auditory success cue signalling this? A pop-up bubble congratulating our user on completing a story? Preece outlines a number of feedback options - audio, tactile, verbal, visual. What's going to make sense for our user? Let's throw a couple of scenarios out on Dec 6 and see what our users prefer.

"Identifying needs and establishing requirements is itself an iterative activity in which the subactivities inform and refine one another." (p 474)

Does this quote help explain why we are doing what we are doing? In a way, the iterative process has been prescribed by Michele - she gave us an outline at the beginning of the year. Now I guess we have to justify why she's prescribed it in this way?

"The first step in getting a concrete view of the conceptual model is to steep yourself in the data you have gathered about your users and their goals and try to empathize with them." (p 540)

So our goal is to empathize with users.

"Immersion in the data and attempting to empathize with the users...will, together with the requirements, provide information about the product's user experience goals, and give you a good understanding of what the product should look like." (p 543)

"Card-based prototypes may be shown to users to gain informational feedback." (p 564)

And a little more from Jakob Nielsen:

Alertbox, August 5, 2001. Retrieved from http://www.useit.com/alertbox/20010805.html

**Usability? Don't Listen to Users** Summary: To design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. To discover which designs work best, **watch users as they attempt to perform tasks** with the user interface. This method is so simple that many people overlook it, assuming that there must be something more to usability testing. Of course, there are many ways to watch and many tricks to running an optimal user test or field study. But ultimately, the way to get user data boils down to the **basic rules of usability**:
 * Watch what people actually do.
 * Do not believe what people //say// they do.
 * Definitely don't believe what people predict they //may// do in the future.

Your best bet in soliciting reliable feedback is to have a captive audience: Conduct formal testing and ask users to fill out a survey at the end.

 * **From http://www.useit.com/alertbox/20030825.html - Usability 101**

Usability is a **quality attribute** that assesses how easy user interfaces are to use. The word "usability" also refers to methods for improving ease-of-use during the design process. Usability is defined by five quality components:
 * **Learnability**: How easy is it for users to accomplish basic tasks the first time they encounter the design?
 * **Efficiency**: Once users have learned the design, how quickly can they perform tasks?
 * **Memorability**: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
 * **Errors**: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
 * **Satisfaction**: How pleasant is it to use the design?

**And a little from usability.gov** **http://www.usability.gov/methods/test_refine/learnusa/index.html** 

Usability testing is a technique used to evaluate a product by testing it with representative users. In the test, these users try to complete typical tasks while observers watch, listen and take notes. Your goal is to identify any usability problems, collect quantitative data on participants' performance (e.g., time on task, error rates), and determine participants' satisfaction with the product.

**What You Learn** You will learn whether participants are able to complete identified routine tasks successfully and how long it takes them. You will find out how satisfied participants are with your Web site. Overall, you will identify the changes required to improve user performance, and you can compare that performance against your usability objectives.
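"Compare performance against your usability objectives" can be as mechanical as checking measured values against targets set before the test. A sketch with made-up target and measured numbers (none of these figures come from usability.gov or Preece et al.):

```python
# Hypothetical objectives agreed before the test session
objectives = {"mean_time_s": 120, "success_rate": 0.8, "errors_per_task": 2}

# Hypothetical measured results from one session
measured = {"mean_time_s": 117.5, "success_rate": 0.67, "errors_per_task": 1.0}

def failed_objectives(measured, objectives):
    """Return the names of objectives the measured performance missed.
    Lower is better for time and errors; higher is better for success rate."""
    failures = []
    if measured["mean_time_s"] > objectives["mean_time_s"]:
        failures.append("mean_time_s")
    if measured["success_rate"] < objectives["success_rate"]:
        failures.append("success_rate")
    if measured["errors_per_task"] > objectives["errors_per_task"]:
        failures.append("errors_per_task")
    return failures

print(failed_objectives(measured, objectives))  # → ['success_rate']
```

The useful part is setting the objective numbers up front, before we see any results, so the pass/fail judgment isn't made after the fact.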