The case for Google.

Not that Google needs my help, or anyone's.

I was an early adopter of Google applications. I've had my Gmail address for years and years and made great use of Google Docs all the way through graduate school.

As we start another semester, I wonder why more academic institutions don't make better use of the tools Google offers. Library instruction certainly includes at least an introduction to Google Scholar, including how to link licensed resources from Scholar results. Why haven't academic institutions made more use of Google's other tools?

Just a push toward students signing up for a Google account for "anywhere" access to their documents would reduce the need for staff to deal with access issues. With students simply signing into their accounts and opening their documents, there would be no need to answer questions like:

"Why won't the computer read my flash drive?"
or
"Why can't I open my Word Perfect document?" [yes, I've heard this recently] questions.

Sometimes (most of the time) libraries and academic institutions overthink and don't use the simplest, most cost-effective solutions available. This is one task we SHOULD outsource.

Implementation and configuration of WorldCat Local

WorldCat Local (WCL) is a resource discovery tool developed by OCLC that localizes OCLC FirstSearch content for library users.

WCL eases discovery in the web-based search environment by giving users:

  • A single search box
  • Relevancy ranking of search results within each tier
  • "Faceted browse capability" that allows searchers to limit results sets or redirect searches to other content
  • Citation formatting options
  • Additional content to help users evaluate items, e.g., cover art, reviews, etc.

Combining a Google-like search function with Web 2.0 features, WCL gives users the ability to find resources in their local library, in consortium libraries, or, via interlibrary loan, in any world library that can fulfill the request.

As part of the Boston Library Consortium, the Healey Library began implementing WCL in the spring of 2009. Prior to implementation I worked with the Fenway Libraries Online (FLO) staff (at the time FLO was managing our catalog server) on the library's first large-scale OCLC record reclamation in over a decade. This updated the OCLC numbers in our catalog records and re-synced our holdings with those in the OCLC database.

Once the reclamation was complete, WCL was configured to display library branding as well as links to our local print holdings and link resolver.

Since WCL was first opened for use by the library community, I have configured our instance to display results from localized databases as negotiations between OCLC and various vendors have allowed those results to be surfaced.

[List of currently available databases that can be searched in addition to OCLC WorldCat]

eBooks, eJournals and accessibility.

Academic librarianship has turned from books and paper journals to content provided through electronic subscriptions to databases, indexes, journals and eBooks.

Part of the mission of any academic library is to provide service to all members of the academic community, including access for those with any type of disability. To fulfill the requirements of the Americans with Disabilities Act, I was charged with investigating the accessibility of the library's digital content.

With the assistance of Kenneth Elkind of the university's Adaptive Technology Lab, I began testing all databases and indexes of electronic content. This was done via the University of Illinois at Urbana-Champaign's Functional Accessibility Evaluator (FAE), which tests the HTML and other coding of web portals for screen-reader accessibility. Results were then compared using a rating system the testers created. Databases that received a failing or otherwise poor grade were then tested using the JAWS screen reader, and if deemed inaccessible, vendors were contacted so that links to accessible content or other accessibility solutions could be put in place.


Digital content created by the university library was also investigated. E-reserve material, which was generally PDFs created by library staff, was found to be inaccessible. At the time, library staff ran PDFs through Adobe Professional's accessibility function on demand for disabled students. Adobe tagged the content so that it could be read by screen readers. Testing these PDFs with JAWS and other leading screen readers found that any text within the tagged PDFs that was smudged or otherwise blurry was not tagged and therefore not accessible to screen readers. As many PDFs were second- or later-generation scans, many supposedly accessible documents were not in fact readable by screen readers.
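A quick automated pass can at least flag which PDFs lack a usable text layer before the tagging is trusted. Here is a minimal sketch, assuming the third-party pypdf package; the folder name and the character-count threshold are arbitrary placeholders, not our actual workflow.

    # Sketch only: flag PDFs whose pages yield little or no extractable text,
    # a rough sign the scan is image-only or too degraded for tagging to work.
    # Requires the third-party pypdf package: pip install pypdf
    from pathlib import Path
    from pypdf import PdfReader

    MIN_CHARS_PER_PAGE = 50  # arbitrary threshold; tune for your collection

    def needs_reocr(pdf_path: Path) -> bool:
        """Return True if the PDF looks like it lacks a usable text layer."""
        reader = PdfReader(str(pdf_path))
        extracted = sum(len(page.extract_text() or "") for page in reader.pages)
        return extracted < MIN_CHARS_PER_PAGE * len(reader.pages)

    if __name__ == "__main__":
        for pdf in Path("ereserves").glob("*.pdf"):  # hypothetical folder name
            if needs_reocr(pdf):
                print(f"Re-OCR candidate: {pdf.name}")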


I investigated the leading OCR software and, after deliberation with library and university staff as well as members of the Universal Accessibility Interest Group, found that ABBYY FineReader best met the library's needs. I am presently working to provide training and network access to this product so that all library-created PDFs can be processed to be screen-reader compliant.

Library administration and the campus disability office wanted to provide a screen reader for members of the university community who were not provided with one by the state or other disability services. This group might include users with low vision or a learning disability, for whom a reader designed for the blind would be difficult to operate. Investigation and conversation with colleagues led me to TextHelp's Read Write Gold product. Read Write Gold is not only a screen reader but also a text-to-speech generator, and it gives users the ability to export PDF and other text sources to mp3. This additional functionality provides needed services for all members of the library community. I am presently working with library IT staff to provide access to this program for both campus and distance-learning members of the university community.



eBooks and Ex Libris Voyager

eBooks are an easy way for libraries to add valuable content, usually at a discount from print monographs. They are particularly useful for areas of the collection, like computer science, that are continually updated. My library has had a very strong eBook collection that, due to a mis-assigned professional, had been cataloged and uploaded (by hand!) incorrectly into our OPAC.

For those unfamiliar with the Voyager ILS, the catalog record consists of three pieces: the BIB record, which includes all the bibliographic data in MARC format; the HLDG record, which displays local location and call number data; and finally the ITEM record, which includes location data as well as an individual item's barcode.
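As a rough illustration only (my own sketch, not Voyager's actual schema), the three tiers nest like this:

    # Illustrative model of Voyager's three-tier record structure as described
    # above; this is NOT the real Voyager/Oracle schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ItemRecord:
        location: str
        barcode: Optional[str]     # a real barcode, or "DUMMY" for eBooks

    @dataclass
    class HoldingRecord:           # the HLDG tier: location and call number
        location: str
        call_number: str
        items: List[ItemRecord] = field(default_factory=list)

    @dataclass
    class BibRecord:               # the BIB tier: full bibliographic data
        marc: bytes                # raw MARC21 record
        holdings: List[HoldingRecord] = field(default_factory=list)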

Library administration felt that the easiest way for collection development to keep up with changes in academic disciplines, as well as patron demand, was to set up a Patron Driven Acquisition (PDA) plan for eBooks from the largest vendor we subscribe to [~70,000 eBooks].

With PDA plans, a library purchases limited access to a title, which becomes a permanent purchase once the title has been accessed a preset number of times.

This created difficulties in that we would need the ability to add or delete large numbers of records in our catalog. First, the eBooks already in our catalog had been created with all three levels of record present, and Voyager does not allow bulk deletion of records that have ITEM records. We decided that all eBooks should display in a similar fashion, so all ITEM records attached to eBook holdings, no matter the vendor, would need to be deleted. Our in-house developer/server wizard then altered the CSS of the OPAC display so that any record without an ITEM record displays consistently.



Within the record, users find a link, created from the URL in the MARC 856 field, directing them to the desired content.

To delete 120,000 ITEM records, a list of all ITEM records that had been given DUMMY as a barcode was compiled using Georgia State's Voyager Reporting System (VRS). These were then fed into the Voyager Cataloging module using a MacroExpress job that deleted the ITEM and HLDG records.

[Since this initial job took place, a script has been developed that does the same work on the server, which is much faster than MacroExpress.]
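For flavor, here is a minimal sketch of how such a DUMMY-barcode list could be compiled from Voyager's read-only Oracle reporting views. The schema prefix and table/column names are assumptions from memory, not a documented schema reference, and the actual work was done through VRS, MacroExpress, and later the server-side script.

    # Hypothetical sketch only: dump ITEM ids whose barcode is "DUMMY" from
    # Voyager's read-only Oracle reporting views, for a downstream batch delete.
    # Requires the third-party cx_Oracle package and read-only credentials.
    import csv
    import cx_Oracle

    SQL = """
        SELECT item_id
          FROM umbdb.item_barcode      -- schema/table names are assumed
         WHERE item_barcode = 'DUMMY'
    """

    def dump_dummy_item_ids(dsn: str, user: str, password: str, outfile: str) -> None:
        """Write one ITEM id per row to a CSV for the batch-deletion step."""
        with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
            cursor = conn.cursor()
            cursor.execute(SQL)
            with open(outfile, "w", newline="") as fh:
                writer = csv.writer(fh)
                for (item_id,) in cursor:
                    writer.writerow([item_id])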

Records were then downloaded from the PDA vendor's website, massaged using MarcEdit so that they fit library standards and included the proper proxy information, and uploaded to the OPAC via bulk import. To ease future processing of these titles, I used the vendor identifier present at the end of the 856 URL as the MARC 035 [system control number].
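The actual cleanup was done in MarcEdit, but purely as an illustration, here is roughly what those edits amount to expressed with the third-party pymarc library (4.x-style flat-subfield API). The proxy prefix and the assumption that the vendor id is the last path segment of the 856 URL are placeholders, not the vendor's real pattern.

    # Illustration only: prepend a proxy prefix to each 856 $u and copy the
    # vendor id from the end of the URL into an 035, as described above.
    from pymarc import MARCReader, MARCWriter, Field

    PROXY_PREFIX = "https://ezproxy.example.edu/login?url="   # hypothetical

    def massage(in_path: str, out_path: str) -> None:
        with open(in_path, "rb") as fin, open(out_path, "wb") as fout:
            reader = MARCReader(fin)
            writer = MARCWriter(fout)
            for record in reader:
                for f856 in record.get_fields("856"):
                    urls = f856.get_subfields("u")
                    if not urls:
                        continue
                    vendor_id = urls[0].rstrip("/").rsplit("/", 1)[-1]
                    # Prepend the proxy prefix so off-campus users authenticate.
                    subs = f856.subfields   # flat ['u', url, ...] in pymarc 4.x
                    for i in range(0, len(subs) - 1, 2):
                        if subs[i] == "u":
                            subs[i + 1] = PROXY_PREFIX + subs[i + 1]
                    # Carry the vendor id in an 035 so later jobs can match on it.
                    record.add_field(
                        Field(tag="035", indicators=[" ", " "],
                              subfields=["a", vendor_id]))
                writer.write(record)
            writer.close()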

Since this initial load of PDA books, budgetary concerns led library administration to set a cap on the price of PDA titles. This made it necessary to delete all of the loaded PDA records, as the vendor could not provide a detailed list of changed titles. This was done by extracting the vendor ids from the 035 field and matching them with the BIB ids created by the previous bulk importation. These ids were then fed into Gary Strawn's Cataloger's Toolkit to delete the affiliated BIBs. The Toolkit allows manipulation of Voyager records on a far greater scale than the 10,000-record limit suggested for bulk changes with the Voyager server bulk utility.
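Again only as an illustration of the matching step (the real work used VRS exports and the Cataloger's Toolkit): assuming a MARC export of the loaded records in which Voyager's BIB id travels in the 001 field, a short pymarc pass can pair each vendor id with its BIB id.

    # Illustration of the matching step only. Assumes a MARC export in which
    # the Voyager BIB id is carried in the 001 and the vendor id sits in an 035.
    import csv
    from pymarc import MARCReader

    def vendor_to_bib(marc_export: str, outfile: str) -> None:
        """Write vendor_id,bib_id pairs for the Toolkit/batch-delete step."""
        with open(marc_export, "rb") as fin, open(outfile, "w", newline="") as fout:
            writer = csv.writer(fout)
            for record in MARCReader(fin):
                f001 = record.get_fields("001")
                bib_id = f001[0].data if f001 else ""
                for f035 in record.get_fields("035"):
                    for vendor_id in f035.get_subfields("a"):
                        writer.writerow([vendor_id, bib_id])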

Once all BIBs were deleted, an updated list was downloaded from the vendor, manipulated via MarcEdit, split into manageable pieces, and uploaded to the server.

I negotiated with the vendor to adjust future updates so that the 856 URL is pre-populated and the vendor id is present in the 035 field. The small monthly updates and deletions can now be managed via the Voyager server update thanks to field-matching rules set up in the upload rules.

Chameleon plugin



Like many libraries, we have felt the pain of budget cuts. In our library this usually means cuts to the collection development budget, especially print journals. As a result, our print periodical collection is a bit strange: looking in our catalog, a user can find odd runs of journals with breaks in coverage that coincide with budget shortfalls.
This leads to confusion for users of not only the catalog but also the SFX link resolver. SFX needs coverage limits in order to direct users to print full text. Doing this for the library's print titles would not only require massive amounts of exported coverage data to be uploaded to the SFX knowledge base (KB), but would also fall short for titles with split dates of coverage.


In early 2010 a plugin was developed to integrate the Innovative or Voyager ILS with SFX so that coverage data for print journals is used in SFX's display logic. The Chameleon plugin creates a TARGET in SFX that uses a PARSE_PARAM such as

url=http://voyager.lib.umb.edu/ & use_isbn_OR=$$USE_ISBN_OR & version=Voyager7

to alter the Z39.50 query of the Voyager catalog so that it searches holdings data and displays a link to the holdings if the citation in question is owned in print. Otherwise, the interlibrary loan request link is displayed.
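The display logic, in other words, is a holdings check followed by a choice of link. Purely as an illustration (not the plugin's actual code), here is a rough Python sketch of that check using the old PyZ3950 ZOOM client; the Z39.50 port, database name, and CCL index name are assumptions about our server's configuration.

    # Sketch of the decision the Chameleon plugin makes, NOT its actual code.
    # Uses the Python 2-era PyZ3950 library's ZOOM API.
    from PyZ3950 import zoom

    VOYAGER_HOST = "voyager.lib.umb.edu"   # from the PARSE_PARAM above
    VOYAGER_PORT = 7090                    # assumed Z39.50 port
    DATABASE = "VOYAGER"                   # assumed database name

    def held_in_print(issn):
        """Return True if the Voyager catalog reports holdings for this ISSN."""
        conn = zoom.Connection(VOYAGER_HOST, VOYAGER_PORT)
        conn.databaseName = DATABASE
        conn.preferredRecordSyntax = "USMARC"
        # Assumes the server exposes an "issn" CCL index.
        results = conn.search(zoom.Query("CCL", 'issn="%s"' % issn))
        found = len(results) > 0
        conn.close()
        return found

    def sfx_link(issn):
        """Show the print-holdings link when owned, otherwise the ILL link."""
        if held_in_print(issn):
            return "Print holdings: http://voyager.lib.umb.edu/"
        return "Request via Interlibrary Loan"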

This Book is Overdue!

I picked up a copy of Marilyn Johnson's This Book is Overdue!: How Librarians and Cybrarians Can Save Us All at ALA Midwinter in Boston. While I enjoyed Johnson's look at the modern librarian and how important we are in the 21st century, I value the book more for helping explain exactly what I do on a daily basis. Family and friends know that I am a librarian ("Hey, you must read a lot of books!") but don't really understand what I actually do.


"Hi, I manage a link resolver and OPAC database!" doesn't really excite people, or explain what that is with out a lot of additional conversation in library-ese. Suggesting this title to my family has helped them understand a bit more of what I do, and show why libraries are STILL important in this age of Google and ability to go to the web to find the most minute of facts. Plus, as an added bonus they know someone in the book!



Reporting errors via SFX

Our SFX knowledge base is finally getting into tip-top condition. Regretfully, when we exported data from our previous link resolver, the dates of coverage and subscription information were not satisfactory. We've solved many problems by running vendor coverage reports against the data in the SFX KB.

We've relied on librarian searches and patron inquiries to inform us of any irregularities that are still present in our knowledge base.

Initially we believed that the link present in the SFX menu would be used by patrons who needed help finding research articles, but this research-help link led to only a handful of email requests over the first two years SFX was in place. Reviewing Wakimoto et al.'s "The Myths and Realities of SFX in Academic Libraries" led me to rethink how patrons perceive SFX: Wakimoto found that patrons want to use a link resolver only to get full text so they can easily fulfill their research requirements. We had relied on patrons to email us with KB problems themselves or to have them passed on by the Reference or Interlibrary Loan staff.

A slight change in the wording, and a change to the email link itself, has led to an influx of inquiries about dates of coverage and subscription levels. It has also eased my troubleshooting, since each email now includes metadata from the SFX pop-up it originated from.