Thursday, May 12, 2011

First Round Run

This week I ran 30 participants. I've done some preliminary analysis of the data, and I'm finding almost no difference between the two conditions in either confidence or time. I won't have my 'performance' measure until I put the selections on Mechanical Turk, and I plan to run a handful more participants first.
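To make the preliminary analysis concrete, here is a minimal sketch of the kind of between-condition comparison I mean, assuming the session data sits in a CSV with one row per participant; the file name and column names below are placeholders, not my actual logs.

```python
# Minimal sketch of the preliminary between-condition comparison.
# The file name and column names are placeholders, not the actual study logs.
import pandas as pd
from scipy import stats

# Assumed columns: participant_id, condition ("mobile" or "large_screen"),
# confidence (e.g., a 1-7 rating), time_seconds
df = pd.read_csv("first_round_results.csv")

mobile = df[df["condition"] == "mobile"]
large = df[df["condition"] == "large_screen"]

for measure in ["confidence", "time_seconds"]:
    # Welch's t-test, which does not assume equal variances across conditions
    t, p = stats.ttest_ind(mobile[measure], large[measure], equal_var=False)
    print(f"{measure}: mobile mean={mobile[measure].mean():.2f}, "
          f"large-screen mean={large[measure].mean():.2f}, t={t:.2f}, p={p:.3f}")
```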

This is exciting and interesting: even though my hypotheses appear to be wrong, there are two outcomes that seem most probable at this point.

1. Large-screen users outperform mobile device users, and the story is: "Mobile device users are just as confident as large-screen users, but they perform worse."

OR

2. There is no difference between the two conditions, and users can perform this type of task equally well on a mobile device.


Checklist of 'What Needs to Be Done'

1. Run about 10 more participants, which I'd like to do in the next few days.
2. Upload the selections to Mechanical Turk.
3. Run the data analysis (a rough sketch of what this might look like is below).
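For step 3, once the Turk ratings come back, the analysis will probably look something like the sketch below: average the ratings each selection receives, then compare those averages across the two conditions. Again, the file name and column names are placeholders.

```python
# Sketch of the planned performance comparison once Turk ratings come back.
# Column names are placeholders: selection_id, condition, turk_rating (e.g., 1-5).
import pandas as pd
from scipy import stats

ratings = pd.read_csv("turk_ratings.csv")

# Average the Turk ratings for each selection, keeping its condition label
per_selection = (ratings
                 .groupby(["selection_id", "condition"])["turk_rating"]
                 .mean()
                 .reset_index())

mobile = per_selection.loc[per_selection["condition"] == "mobile", "turk_rating"]
large = per_selection.loc[per_selection["condition"] == "large_screen", "turk_rating"]

# Welch's t-test on the per-selection averages
t, p = stats.ttest_ind(mobile, large, equal_var=False)
print(f"mobile mean={mobile.mean():.2f}, large-screen mean={large.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```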

2 comments:

  1. Great progress this week. How many subjects did you get for each condition again? I hope you got some ideas in class about the next step of performance quality evaluation. I think it would be fine to use the subjects themselves as the experts; I think you could also Turk it (in addition to the "experts") to see what you find.

  2. Interesting results! Are you keeping track of when the users find the clothing that they ultimately choose? I.e. do you find that they make their decision early on and just browse for the remaining time, or do they keep browsing until time is almost up before making a decision?

    If a user makes their decision very quickly (such as within 30 seconds), it could be that you are giving them "too much" time, which allows the mobile users to "catch up" to the desktop users. I have no idea if this is what is happening though, since five minutes seems like a very reasonable time limit, but it could be worth checking out?
