Notes from User Evaluation Session
Facilitator: Sue Crockford-Peters, Head of Access Services, Yale University Libraries
Recorder: Kitty Chibnik, Head, Access and Support Services, Avery Fine Arts Library, Columbia University Libraries
Questions posed to the session:
- Is user evaluation being done, and if so, by what methods and with what frequency?
- Is it an access-specific evaluation, or part of a larger library-wide or university-wide effort?
- Are the results useful, and what changes have been made, or are being contemplated, as a result of the user evaluations?
Crockford-Peters opened the session by describing Yale's user evaluation efforts, with the purpose of establishing baselines for activities and helping staff to understand the results.
The main problem institutions encountered with user surveys was the intense effort needed to create and administer them relative to the yield of usable results. Other problems included multi-survey requests (patron fatigue: the reluctance to fill out multiple surveys at multiple locales during survey periods), targeting patron groups, learning about non-users, and having the resources and staff to interpret and manage the results (Yale had created a position specifically for this task). It was recommended that a survey be designed to answer the questions one needs answered, rather than having answers emerge as a by-product of the data collected.
In general, attendees found "particularized" surveys more useful than general surveys. Survey instruments used included paper surveys, observational surveys, and Web-surveys. Focus groups were also favored as a way of yielding "focused groups" whose concerns could be addressed.
Those who had done user evaluations reported results showing that users place a high value on the "reliability of library services." This in turn led to internal surveying to see how the services "behave" and thus how reliability can be improved and maintained. It also underscored how important the marketing of services is, as well as the need for clarity about what can and cannot be done for the patron. The goal then becomes arranging services to fulfill expectations.
Issues raised included who is conducting the survey (the individual library? the libraries overall? the general administration?) and how to identify constituent groups and then reach them.
Crockford-Peters concluded the session by noting that the Yale user evaluations had had a positive impact on staff.