Methodology (such as it was)
After doing some research, we decided to try a "think aloud" scenario study. We designed 15 scenarios covering tasks for which students might typically use the Library website, touching on what we considered the site's most important areas and functions. For each scenario, we identified Desired Actions, Alternative Actions, and Criteria for Success.
Our studies were conducted in a private room with one study participant and one facilitator. We set up a laptop with Camtasia software and a headset to record mouse movements and any comments the participant made while working through the scenarios. This made note-taking much easier for the facilitator, who could then pay more attention to body language and other cues.
Participants began by filling out a brief Participant Survey, completed the 15 scenarios, and then finished with an online survey about the site, with plenty of opportunities for comments and suggestions. It was in this final survey that we asked questions more specific to the LibGuides interface (colors, boxes, columns, etc.).
We recruited two colleagues to act as facilitators for the study. Our research recommended that facilitators *not* be the librarians or the designers of the site, for obvious reasons. Two people from our Learning Center volunteered. We equipped them with a script to read to each study participant, a recording form, and their own set of the scenarios to read aloud. Facilitators were also given guidelines on how to avoid prompting participants too much if they got stuck.
Our research indicated that a small group of participants would give us enough information about the site's usability, since recurring themes tend to appear quickly. Our goal was five students. We advertised with signs around campus and on our Facebook page, offering a $25 gift certificate to the college's bookstore in exchange for approximately an hour and a half of each participant's time. Five students volunteered, although there were additional inquiries after the study began.
Participants had the option of having the scenarios read aloud to them and/or reading to themselves from larger printed cards that we placed by the computer.
A test run of the study was highly recommended, so we coerced a very kind faculty member into being our guinea pig. Time well spent. We worked out some glitches in our "carefully crafted" scenarios, cleaned up some broken links on the site, and began - even at this early stage - to get really useful feedback on the site's usability. The test run also gave us a sense of how long each session might take.