Meeting on the ledge

(or why I don't get out much…)

Catalogue quality

Catalogue quality has been on my mind over the past few days, probably because I’ve been working on a file to send to LibraryThing so that we can use their wonderful subject clouds in our OPAC. Quality is one of those amorphous words, but in this context it could refer to a number of things: how easy and productive your users find the OPAC, how complete the catalogue records are, or how free of inaccuracies they are. Of course all these viewpoints work together: if your catalogue records are inaccurate or overly brief, users can’t find the items they want, so the catalogue doesn’t meet their needs.

In my case my library’s catalogue, like most, has been through a long history of different cataloguing standards and a number of data conversions, in which time and cost were criteria as well as accuracy. Hence there is a proportion of ‘inaccurate’ records, some of which I’m picking up in my LtFL listing. I expect that Tim, Casey and co have some ‘sanity-checking’ built in as part of their import routines, but I’m fixing the ones I discover as I go along – hopefully it will help out some students in times to come. Most seem to be data conversion issues, with things like extra spaces having crept into various parts of the MARC records. It would be difficult to write a strict data definition for, say, a 245 (title) field, as the potential content that most fields can hold is enormous, so with current MARC records and LMS structures it isn’t feasible to put more than broad rules in place.
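For the extra-spaces problem at least, the clean-up is mechanical. A minimal sketch in Python (the damaged 245 $a value below is a made-up example, not one from my catalogue):

```python
import re

def normalize_marc_whitespace(value: str) -> str:
    """Collapse runs of internal spaces and trim the ends of a MARC
    subfield value -- the kind of artefact data conversions leave behind."""
    return re.sub(r" {2,}", " ", value).strip()

# A hypothetical 245 $a value damaged by a past conversion:
damaged = "  The  cataloguer's   way :"
print(normalize_marc_whitespace(damaged))  # → "The cataloguer's way :"
```

This is about as far as a broad rule can safely go: collapsing whitespace is unambiguous, whereas anything touching the actual title text would need a cataloguer's eye.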

Hopefully the problem will diminish as we download an increasing proportion of our records from external sources, as recommended by the recent Research Information Network report. WORM (Write Once Read Many times) makes sense in a library context as much as in IT. But I suspect most libraries have a similar problem of lesser-quality older records hanging around in their databases. Maybe this will become of diminishing importance as time goes on, as undergraduate students these days seem to rely on a small core of textbooks which are largely re-published regularly, but it will remain an issue for researchers. There is an undercurrent of dissatisfaction with the current data structures of MARC, and its successors are already being looked at. Maybe this is something we can look at when we all come to FRBR-ize our catalogues as they get moved to XML?

July 15, 2009 - Posted by | Libraries
