Wednesday, June 15, 2011

Report and Recommendations of the U.S. RDA Test Coordinating Committee Executive Summary

Coordinating Committee’s charge
The senior management at the Library of Congress (LC), National Agricultural Library (NAL), and National Library of Medicine (NLM) charged the U.S. RDA Test Coordinating Committee to devise and conduct a national test of Resource Description & Access (RDA).  The Coordinating Committee would evaluate RDA by testing it within the library and information environment, assessing the technical, operational, and financial implications of the new code.  The assessment would include an articulation of the business case for RDA, including benefits to libraries and end users, along with cost analyses for retraining staff and re-engineering cataloging processes.  The Coordinating Committee began its work by reviewing RDA’s stated goals.

RDA Goals

Based on the test findings, the summary statements below indicate whether or not the goals were met.  The body of the report presents the findings that led the Coordinating Committee to these conclusions.

  • Provide a consistent, flexible and extensible framework for all types of resources and all types of content.
    • This goal was met.
  • Be compatible with internationally established principles and standards.
    • This goal was partially met.  The Coordinating Committee looks forward to increased harmonization efforts among the JSC, ISBD, and ISSN communities.
  • Be usable primarily within the library community, but able to be used by other communities.
    • The test did not cover this goal.  The Coordinating Committee is aware that other library communities are exploring the use of RDA.  The Semantic Web and Dublin Core communities are developing application profiles based on RDA.
  • Enable users to find, identify, select, and obtain resources appropriate to their information needs.
    • This goal was partially met.  User comments on RDA records indicate mixed reviews on how well new elements met user needs.  The test did not fully verify all the user tasks above.
  • Be compatible with descriptions and access points in existing catalogs and databases.
    • This goal was mostly met.  The descriptions are compatible with existing catalogs and databases, as are most access points.  There will need to be community input on how to resolve some differences in access points.
  • Be independent of the format, medium, or system used to store or communicate the data.
    • This goal was met.
  • Be readily adaptable to newly emerging database structures.
    • The test did not verify this goal, although there is evidence that RDA data are sufficiently granular to enable new types of displays and better integration with other data sources.
  • Be optimized for use as an online tool.
    • This goal was not met.
  • Be written in plain English, and able to be used in other language communities.
    • This goal was not met.
  • Be easy and efficient to use, both as a working tool and for training purposes.
    • This goal was not met.

The Coordinating Committee believes that RDA should be accountable to its own goals and has drafted a plan that postpones implementation until these goals are substantially met.  This belief and the Committee’s analysis and assessment of the test data lead the Committee to make the following recommendations regarding the implementation of RDA by the three U.S. national libraries.

Contingent on the satisfactory progress/completion of the tasks and action items below, the Coordinating Committee recommends that RDA should be implemented by LC, NAL, and NLM no sooner than January 2013.  The three national libraries should commit resources to ensure progress is made on these activities that will require significant effort from many in and beyond the library community.

To achieve a viable and robust metadata infrastructure for the future, the Coordinating Committee believes that RDA should be part of the infrastructure.  Before RDA is implemented, however, the activities below must be well underway.  To allow sufficient lead time for these actions, the Committee recommends that RDA implementation not proceed prior to January 2013.  Timeframes in these recommendations assume a start date of July 1, 2011 and represent the Coordinating Committee’s best estimates.  Many of the activities must occur simultaneously; the timeframes given apply to each individual task and are therefore not sequential.

  • Rewrite the RDA instructions in clear, unambiguous, plain English.
    • Work with JSC to prioritize which chapters should be addressed and completed first.  Prioritization should be based on comments gleaned during the U.S. RDA Test as identified by the U.S. RDA Test Coordinating Committee.
    • Identify and engage, in collaboration with JSC and the Committee of Principals, a writer to undertake rewrites.
    • Rewrite chapters identified as priorities.
    • Confirm readability of initial chapter rewrites.
    • Timeframe for completion: within 18 months.
  • Define a process for updating RDA in the online environment.
    • Timeframe for completion: within three months.
  • Improve functionality of the RDA Toolkit.
    • Forward to ALA Publishing the enhancements identified as needed during the U.S. RDA Test and work with ALA Publishing on a timeline for changes.
    • Working with ALA Publishing, identify a process for ongoing usability testing of RDA Toolkit enhancements.
    • Timeframe for completion: within three months.
  • Develop full RDA record examples in MARC and other encoding schemas.
    • Work with ALA Publishing to integrate examples into the RDA Toolkit.
    • Include examples for special communities (e.g., serials, rare books, music).
    • Timeframe for completion: within six months.
  • Announce completion of the Registered RDA Element Sets and Vocabularies.  Ensure the registry is well described and in synchronization with the RDA rules.
    • Timeframe for completion: within six months.
  • Demonstrate credible progress towards a replacement for MARC.
    • Announce planning statement.  (Done; see Appendix M.)
    • Identify the stakeholders, key players, and experts needed.
    • Identify tasks and timeline for development.
    • Ensure development is underway.
    • Timeframe for completion: within 18-24 months.
  • Ensure and facilitate community involvement.
    • Prioritize needed updates to practices, decisions, and documentation.
    • Prioritize and submit changes to JSC for RDA content.
    • Determine community involvement in the process, e.g., the role of the Program for Cooperative Cataloging (PCC), OCLC, special interest communities, etc.
    • Determine the best method(s) to share decisions with the community.
    • Timeframe for completion: within 12 months.
  • Lead and coordinate RDA training.
    • Prioritize training focus and schedule, led by LC.
    • Engage PCC, the Association for Library Collections and Technical Services (ALCTS), and other bodies.
    • Timeframe for completion: within 18 months.
  • Solicit demonstrations of prototype input and discovery systems that use the RDA element set (including relationships).
    • Identify groups/organizations/vendors that could provide models.
    • Determine availability of funding to support prototype efforts.
    • Engage and produce initial prototypes.
    • Utilize demonstrations in education and training efforts about the library community’s new metadata infrastructure.
    • Timeframe for completion: within 18 months.

Business case
The test revealed little discernible immediate benefit in implementing RDA alone.  The adoption of RDA will not result in significant cost savings in metadata creation, and there will be inevitable and significant training costs.  Immediate economic benefit, however, cannot be the sole determining factor in the RDA business case.  It must be determined whether RDA makes possible significant future enhancements to the metadata environment, and whether those benefits outweigh the implementation costs in the long term.  The recommendations are framed to make this determination prior to implementation.

The Coordinating Committee wrestled with articulating a business case for implementing RDA.  Nevertheless, for the reasons presented in this Executive Summary and in other sections of the report, the Coordinating Committee has decided to recommend implementing RDA.  The recommendation is premised on the expectation that the problems uncovered by the test will be addressed as part of the preparation for implementation.  The business case for implementing RDA is further based on the community’s need for a descriptive cataloging standard that:

  • lends itself to easy use in the changing environment in which libraries and other information producers and users operate
  • allows the relationships among entities to be expressed with few or no impediments
  • enables greater use and flexibility in the digital environment
  • better describes formats beyond printed monographs and serials
  • enables the descriptive metadata created to be used in a linked data world
  • supports labeling of data elements for ease of data sharing, within and beyond the library community
  • is non- or less Anglo-centric
  • allows existing metadata to be readily re-used.

The U.S. RDA Test demonstrated that RDA can fulfill some of these needs.  In some instances, the promise of fulfillment is greater than the reality of what RDA can currently offer.  At present, several factors impede RDA’s meeting all the above needs.  These factors include constraints of today’s environment, e.g., systems and the carrier format.  They also include constraints within RDA itself.  This report will more fully address these impediments and propose how to resolve them as part of the path to RDA implementation.

The test generated widespread interest in the U.S. and international cataloging communities, as evidenced by the more than 95 institutions that applied to be testers, high attendance at RDA Test update sessions during ALA conferences, and traffic on discussion lists.  Many institutions reported feeling privileged to be part of the test and noted energized staffs among other benefits.  While the Coordinating Committee had no way to measure how participation itself shaped the opinions about RDA that test participants reported, a positive bias from the experience of participating cannot be ruled out.

The U.S. RDA Test amassed an unexpectedly large amount of data, providing the Coordinating Committee with a wealth of RDA records and survey responses to analyze.  These data helped inform the ultimate decision to recommend that the three U.S. national libraries implement RDA no sooner than January 2013.  The data collected will be posted for sharing with the library and information communities for possible further research.  The 26 test partners (including LC, NAL, and NLM) created 10,570 bibliographic records and 12,800 authority records.  More than 8,000 surveys were submitted.

A key question was asked of each test partner institution, each record creator, and anyone in the U.S. community who wished to complete a survey: “Do you think that the U.S. community should implement RDA?”  Answers from institutional test partners were as follows: 34% “yes”; 28% “yes with changes”; 24% “ambivalent”; 14% “no.”  Record creators were somewhat more negative: 25% “yes”; 45% “yes with changes”; 30% “no” (“ambivalent” was not offered as a choice for record creators).  Those who responded via the survey open to all in the U.S. community, whether or not they had taken any RDA training or created any RDA records, were the most negative: 12% “yes”; 10% “yes with changes”; 34% “ambivalent”; 44% “no.”
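
As a sanity check, each reported distribution can be reproduced from raw response counts.  The counts below are hypothetical (the report publishes only percentages), chosen so the derived figures match the institutional test partners' published distribution; this is an illustrative sketch, not the actual survey data.

```python
# Hypothetical raw response counts (illustrative only, not the actual survey
# data); chosen so the derived percentages match those reported for the
# institutional test partners.
responses = {"yes": 34, "yes with changes": 28, "ambivalent": 24, "no": 14}

total = sum(responses.values())  # 100 responses in this sketch
percentages = {answer: 100 * count / total for answer, count in responses.items()}

print(percentages)
```

The same tally works for the record-creator and open-community distributions; each set of reported percentages sums to 100.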

The findings are summarized below.  The full findings are in the body of the report.

Record Creation
Findings on record creation include analyses of time needed to create RDA records for titles in participants’ normal workflows (Extra Original and Extra Copy Sets) and comparative times for creating AACR2 and RDA records as part of an artificial record set cataloged by all participants (Common Original Set).

Record creation times were self-reported and likely subject to a variety of personal approaches to counting and recording time.  The overall average time to create an original RDA bibliographic record for the Extra Original Set, exclusive of consultation time and authority work time, was 31 minutes.  The range of times reported, however, was from one to 720 minutes.  A considerable decrease in record creation time was noted when the Coordinating Committee compared record creation times for the first ten RDA records produced by record creators with record creation times for the 20th record and above.
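
The learning-curve comparison can be sketched in a few lines.  The per-record times below are hypothetical (actual self-reported times ranged from 1 to 720 minutes); the sketch only mirrors the Committee's method of comparing the average for a cataloger's first ten records against the average from the 20th record onward.

```python
from statistics import mean

# Hypothetical per-record creation times in minutes for one cataloger
# (illustrative only; the test's actual self-reported times ranged from
# 1 to 720 minutes).
times = [62, 55, 48, 45, 40, 38, 36, 35, 33, 31,   # records 1-10
         30, 29, 28, 27, 26, 25, 25, 24, 24, 23,   # records 11-20
         22, 21, 21, 20, 20]                        # records 21-25

first_ten = mean(times[:10])        # average over the first ten records
twentieth_plus = mean(times[19:])   # average from the 20th record onward

print(first_ten, twentieth_plus)    # the drop illustrates the learning curve
```

In this sketch the average falls by roughly half, consistent with the "considerable decrease" the Committee observed.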

The overall rate of variance between RDA records was roughly comparable to the overall rate of variance between AACR2 records.  RDA records, on average, contained more data elements than did their AACR2 counterparts.  Discernible error patterns in both RDA and AACR2 were frequently related to the complexity of the resource cataloged.  There were notable patterns of errors around some RDA concepts and instructions, however, such as providing access points for works and expressions manifested, when required.  Comments from catalogers indicated that many lacked confidence in their ability to find and interpret all relevant RDA instructions.

Record Use
In a survey of library users, most (85%) spoke favorably of the RDA record.  They particularly liked the record’s clarity and completeness, the elimination of abbreviations and Latin terminology, and the abandonment of the rule of three with its resulting increase in access points.  Despite this praise, 65% of respondents also offered criticism, which focused overwhelmingly on the dropping of the general material designation and its replacement by the media/carrier/content types, whose terminology is difficult to understand.  Many library staff and users are unaware of the options that may be available for translating and displaying these elements on public catalog screens.

Training & Documentation Needs
Many training methods were available to RDA test participants.  All of the institutions that responded to the question about training presented their staff with at least three different training methods; five institutions offered as many as seven.

Of the institutions responding to a question about creating or modifying local documentation for use with RDA, fewer than half had created documentation to record local policy decisions, although some provided information about the test itself and/or about RDA.  Some participants noted that any local documentation written in the context of AACR2 or another content standard would need revision if RDA is implemented, or even if the library merely accepted RDA records created by others for copy cataloging.  Some participants also noted the opportunity to simplify their local documentation.

Although 75% of those responding said that updating documentation would have a “large” or “very large” impact, only 12% of respondents to a question asking whether updating documentation would be a benefit or a barrier to implementing RDA said that it would be a “major barrier.”

The three national libraries indicated that they had extensive local documentation to be reviewed and revised; much of LC’s local documentation is also national documentation.  Various specialized cataloging communities and the utilities were considering their documentation plans.

Use of RDA Toolkit
There were several positive comments related to the RDA Toolkit.  The overall impression from the comments, however, was that users struggled to use the Toolkit effectively.  Many respondents found the Toolkit to be clunky and difficult to navigate.  Respondents were not pleased with the organization (although it was at times unclear if this was the organization of the rules themselves or how they were presented in the Toolkit).  Attempting to navigate to particular rules in the text via the table of contents confused many users.

The workflows present in the Toolkit were seen as useful in creating initial records because they are written in straightforward language and ease the burden of the FRBR-based arrangement of RDA by ordering the rules by MARC/ISBD area.  While there is potential for development of specific workflows at the local level and by format-specific cataloging communities, it would be a mistake to use the workflows to overcome the shortcomings of RDA and the Toolkit.

RDA Content
The text of RDA was compared with AACR2, ISBD, and the CONSER Cataloging Manual using two common readability tools (Flesch Reading Ease and Flesch-Kincaid Grade Level).  The comparison indicated that RDA text was the least readable.
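
Both readability measures are simple functions of average sentence length and average syllables per word.  The sketch below implements the standard published formulas; the passage statistics fed to them are hypothetical, not measurements of the RDA text.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores indicate easier text
    (roughly 60-70 for plain English; low scores for dense prose)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: an approximate U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage statistics (illustrative only): 100 words,
# 5 sentences, 150 syllables.
print(flesch_reading_ease(100, 5, 150))
print(flesch_kincaid_grade(100, 5, 150))
```

Longer sentences and more syllables per word drive the Reading Ease score down and the Grade Level up, which is why a rule text dense with long technical terms scores as the least readable.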

Subjective reactions to the RDA content were mixed.  Some participants liked the emphasis on transcription, cataloger judgment, and the new content/media/carrier types, as well as the elimination of abbreviations.  A few described the text as "elegant" or "well-written."  A larger number of participants reported confusion about the structure, organization, and vocabulary in RDA and commented that the order of the rules in RDA did not match current cataloging workflows.  The text was described as redundant, circular, and complicated, rather than being a simplified set of rules.  Suggestions for improving the text came from those with positive and negative reactions alike.

While 54% of respondents to the Common Original Set survey indicated encountering difficulties with the RDA content or options, the percentage encountering these difficulties dropped to 14.5% for the Extra Original Set, indicating that over time participants gained a better understanding of RDA.  There was little difference reported in difficulties encountered by different levels of staff.  Participants working in non-textual formats, however, reported a much higher number of difficulties.

Systems, Metadata, and Technical Feasibility
There were no reported problems with systems ingesting and storing RDA records.  While existing systems can import and store RDA-based MARC 21 records, respondents indicated that substantial local configuration changes would be needed for indexing and public record displays.  Many survey respondents doubted that RDA changes would yield significant benefits without a change to the underlying MARC carrier, and most felt any benefits of RDA would be largely unrealized in a MARC environment; MARC may hinder the separation of elements and the ability to use URIs in a linked data environment.  While the Coordinating Committee tried to gather RDA records produced in schemas other than MARC, very few were received.

Local Operations
A majority of test partner institutions anticipate some negative impact on local operations in acquisitions, copy cataloging, original cataloging, and bibliographic file maintenance.  Nevertheless, a majority of test partner institutions felt that the U.S. community should implement RDA.

One unanticipated result of the test was that at least three institutions trained all or most of their cataloging staff in RDA and decided to continue creating RDA records after the test. This result increased the impact of a mixed RDA and AACR2 rule environment.

Costs and Benefits
Costs of implementing RDA occur in several areas: subscription to the RDA Toolkit, development of training materials and creation or revision of documentation, production time lost to training and the learning curve, and impact on existing contracts.  Many institutions indicated they did not yet have enough information to estimate these costs.  Freely available training materials and documentation would reduce some of them.

Institutions noted various benefits to be weighed against the costs.  These included a major change in how characteristics of things and relationships are identified, with a focus on user tasks; a new perspective on the use and re-use of bibliographic metadata; and the encouragement of new encoding schema and better systems for resource discovery.

In conclusion, the Coordinating Committee believes that the high level of community interest in the test and test results demonstrates the value of evidence-based decision making in the library community.

