Thursday, April 28, 2011

Updated Experiment Info--Pilot Testing

I'm interested in testing how device size, specifically screen and input size, affects a search-and-selection task. This could have important implications for which tasks we choose to perform on which devices.

[Hypotheses]
1. Small-screen users will use more category heuristics than large-screen users.
2. Small-screen users will be more satisfied with their selection because they had fewer choices.
3. Large-screen users will perform better, as rated by independent raters.
4. Small-screen users will feel that they need more time than large-screen users will.

[Methods]
See the Google doc for the pre- and post-task surveys as well as the task instructions.

[Materials]
I am using an iPhone 4 as my 'small-screen' device and a 27" monitor as my large-screen device.
I am using the Safari web browser for both conditions.

[Measurements]

1. The number of heuristics that the subject used to refine the search. (How can I capture this besides watching?)
2. How confident the user is about his/her selections
3. How well the selections perform, as rated by independent raters*
4. Whether the user feels that he/she needed more time

*I intend to ask Mechanical Turk workers to rate the dress and shirt selections. I will give them the same prompts that the subjects received. I will show them two selections at a time and ask which one seems more appropriate (or is a better choice?).

[Calculations & Results]

I don't have meaningful results yet because I haven't run anything on Mechanical Turk. But I intend to use chi-square tests once I have my independent raters.
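As a first pass, here is a minimal sketch of what that test could look like once the rater votes come in. The votes below are placeholders, not collected data, and the tallying approach is my assumption about how the two-at-a-time judgments would be aggregated:

    # Sketch: tally hypothetical pairwise Mechanical Turk judgments and test
    # whether raters prefer one condition's selections more often than chance.
    # The votes below are placeholders, not real data.
    from collections import Counter
    from scipy.stats import chisquare

    # Each vote records which condition's selection the rater preferred
    # in a two-at-a-time comparison.
    votes = ["small", "large", "large", "small", "large", "large"]

    counts = Counter(votes)
    observed = [counts["small"], counts["large"]]

    # Goodness-of-fit against a 50/50 split (i.e., no systematic preference).
    stat, p = chisquare(observed)
    print(f"preferences: {dict(counts)}, chi2 = {stat:.2f}, p = {p:.3f}")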

I will also use chi-square tests to analyze the confidence, category-heuristic, and time measures.
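For those survey measures, one plausible setup is a chi-square test of independence on a condition-by-response table. The binning into low/high and the counts here are placeholder assumptions, not pilot data:

    # Sketch: chi-square test of independence for a binned survey measure.
    # The bins and counts are placeholders, not pilot data.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: condition; columns: low vs. high confidence (binned Likert responses).
    table = np.array([
        [7, 3],  # small screen (placeholder counts)
        [2, 8],  # large screen (placeholder counts)
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")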

My pilot results:
Average small-screen confidence: 2.5
Average large-screen confidence: 3.6

[Further Questions]
How can I track the ways in which the subject refined his/her search besides watching? (One possible instrumentation sketch appears after this list.)
How should I have Mechanical Turk workers rate the selections? (Ask ‘more appropriate’, ‘better’, or ‘Jamie/Matt will like more’?)
Do I have them rate between two at a time? (A strict ordering does not give me as much information.)
Should I have both conditions fill out the pre- and post-task surveys on the big monitor? Or do all of it on the small device?
Should I use a laptop/monitor instead of phone/monitor to help control for processing & network speed?
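On the first question, one option, purely a sketch, is to instrument the task pages so each refinement click is POSTed to a small local logging server. This assumes the task pages can embed a bit of JavaScript; the endpoint, port, and log format below are all my own illustrative choices:

    # Sketch: a minimal local endpoint that records click/refinement events.
    # Assumes instrumented task pages POST a JSON event per click; the path,
    # port, and log format here are illustrative assumptions.
    import json
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ClickLogger(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length) or b"{}")
            # Append one timestamped record per refinement (e.g., facet clicked).
            with open("clicks.log", "a") as log:
                log.write(f"{datetime.now(timezone.utc).isoformat()}\t{json.dumps(event)}\n")
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ClickLogger).serve_forever()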



Thursday, April 21, 2011

Experimental Methodology

Hypothesis 1: Users will persist longer (perform more clicks) in a search if they feel as though they are on the right track.

Hypothesis 2: Even if users believe that they are on the right track, there is a somewhat universal limit to the number of clicks a user will perform before aborting a search.

Hypothesis 3: Users will be more frustrated in non-scented searches than in scented searches requiring more clicks.

Control Search: ??

Scented Search: Require the user to perform X number of clicks, where the search path is very clear and the user is confident that he is on the 'right track.'

Non-scented Search: Require the user to perform 2-3 clicks, but the correct search path is very unclear and the user is most likely not confident.

PROCEDURE:

1) Pre-task survey: gather demographic information, gather emotional/mood/task confidence information
2) Place the subject in one of three (?) experimental groups (a balanced-assignment sketch follows this list)
3) Experimental Task
4) Post-task survey: gather emotional/mood/task confidence information
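For step 2, since the design is within-subjects (see below), the "groups" could simply be condition orders. Here is a minimal sketch of balanced assignment, assuming the three conditions named above; the control condition is still a placeholder:

    # Sketch: balanced assignment of subjects to condition orders for a
    # within-subjects design. Condition names follow the design above; the
    # control condition is still undefined in the plan.
    import itertools
    import random

    conditions = ["control", "scented", "non-scented"]
    orders = list(itertools.permutations(conditions))  # all 6 possible orders

    def assign_orders(n_subjects):
        """Cycle through every order, shuffling within each complete block."""
        assignments = []
        while len(assignments) < n_subjects:
            block = orders[:]
            random.shuffle(block)
            assignments.extend(block)
        return assignments[:n_subjects]

    for subject, order in enumerate(assign_orders(12), start=1):
        print(subject, order)

Using complete blocks of all six orders keeps the orders evenly represented even if recruitment stops at an awkward number of subjects.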

Experimental Task: This still needs fleshing out, but I think I want to ask users to look up information about jobs posted on the CDC. I will provide them the date that the job was posted and tell them that they cannot 'search' for the job.

I think the non-scented task will be a search on IMDb.

Since this is within-subjects, I don't have as many variables to worry about. I will need to ask about users' familiarity with both interfaces and also how attractive they perceive each interface to be (since perceived attractiveness of the interface affects search perseverance).

Background Information

In Peter Pirolli's book Exploring and Finding Information, he defines 'information scent' as "the user's use of environmental cues in judging information sources and navigating through information spaces."

In my experiment, for my 'scented search' condition I want to use what I will call "strong scent." I am defining this to include searching through lists organized by one of the five ways to organize information: category, time, location, alphabet, and hierarchy (Universal Principles of Design or Information Anxiety). Of course, some of these orderings carry stronger scent (alphabetical or numeric) than others (category).

Additionally, I want my scented search interface to be what Jef Raskin defines as a "zooming interface paradigm (ZIP)" in The Humane Interface (pg 153). In fact, in chapter 6-2, he neatly describes the two types of interface I would like to test (I think).

The non-scented: He describes navigating through the interface like a maze--"I often find, deep in a submenu, a command or a check-box that solves a problem I am having...We are not good at remembering long sequences of turnings, which is why mazes make good puzzles and why present navigational schemes, used both within computers and on the web, often flummox the user." (152)

The scented: "The antithesis of a maze is a situation in which you can see your goal and the path to get there, one that preserves your sense of location while under way, making it equally easy to get back."

Thursday, April 14, 2011

Search Perseverance & Design Attractiveness

I found an interesting article that claims that "perceived attractiveness of web interface design" has a positive significant effect on search perseverance. See paper here

This is something that, even if I do not investigate it directly, I need to keep in mind, since design attractiveness could be a mediating factor in search perseverance.

Ground Zero

I'm still trying to home in on exactly what I want to research for the next 7+ weeks. I am interested in the claims that Jared Spool makes in The Scent of Information. He states, "Users expect each click to lead to information that is more specific. They do not mind clicking through large numbers of pages if they feel they are getting closer to their goal with each click" (pg 25). And in fact, in his experiments, he measures users' confidence, claiming that users gain confidence at each step if they feel as though they are on the right track.

I want to unpack this claim with the aim of investigating how feedback affects this confidence and search perseverance. I envision using different forms of feedback, varying the amount of information, to see how that affects search perseverance.

I am also interested in exploring users' emotions beyond 'confidence.' How is their frustration level affected by more clicks/steps? Does this depend on whether or not users have confidence in that click? For example, are users more frustrated in performing an extra click when they know it will take them closer to their goal or when they are unsure?