The most valuable thing for me, as I had already predicted, was the day long pre-conference on assessment. This nicely covered different aspects of library assessment. When I asked Lisa Hinchliffe how much overlap there was going to be with the Library Assessment conference in Charlottesville, VA in September, she said almost none.
The introduction was given by Fred Heath of the University of Texas at Austin Libraries. He was one of the developers of LibQual and was able to give us the background of ServQual, the theory behind it, and how LibQual grew out of that. The perceptions of our users are important - their perceptions of their needs and how they feel their institution meets those needs. I now understood why this can't be done by a single institution alone. Results from numerous institutions need to be gathered so you can see how your institution rates compared to others. It was interesting to hear all this from the horse's mouth, so to speak.
Dave Baca from the University of Arizona had just defended his PhD thesis on using interviews for assessment purposes and was able to tell us the ins and outs of working with interviews. The hardest and most tedious part of the process is coding the responses, but it provides a wealth of in-depth information. The same process can also be used for focus groups.
Lisa Hinchliffe from the University of Illinois at Urbana-Champaign was engaging and informative, as usual. She talked about collecting and using instruction data to improve library instruction.
The most practically useful presentation for me was the one on developing surveys. David Consiglio from Bryn Mawr College gave good suggestions on how to plan, create, test, and implement a survey. I was intrigued by some of the simple guidelines he gave for creating good survey questions. It was fun to look at the various assessment surveys we got that day and throughout the conference and spot what could have been improved.
Usability testing was covered by Brian Quigley from the University of California, Berkeley Engineering Library. None of the information was new to me, but it was a good refresher on what usability testing should look like. It was interesting to see some of the user-specific idiosyncrasies on their site. Since this is an engineering library, they list the acronyms for all of the different engineering departments. The rest of us might not know them, but their users do; for outsiders they provide the full name in a mouse-over, which keeps the home page clean and manageable.
Peggy Johnson from the University of Minnesota told us about assessing collections. She mostly talked about the different statistics one can gather to inform collection development decisions, and noted that LibQual results will always show that some users (mostly faculty) think we don't have enough.