Library of Congress

Program for Cooperative Cataloging


3rd Task Group on Journals in Aggregator Databases

FINAL REPORT

August 2004

Task Group Members:

Matthew Beacom (Yale), Ruth Haas (Harvard), Les Hawkins (LC liaison), Jean Hirons (LC liaison), Oliver Pesch (EBSCO), John Riemer (UCLA), Chris Roberts (Ex Libris), Adolfo R. Tarango (UCSD) -- Chair, Jina Choi Wakimoto (CSU Northridge)

Introduction

The PCC SCA 3rd Task Group on Journals in Aggregator Databases was originally charged to continue the work of its predecessors: to investigate the need for additional vendor record sets for journals in aggregator databases and to assist vendors in the creation of those sets. Additionally, the Task Group was to investigate methods to promote use of these record sets among libraries. However, in light of a changed environment and needs expressed by the library community and serial management companies, the Task Group's charge was rewritten. The Task Group was newly charged to create and test a mechanism by which separate electronic version records might be machine-generated from existing records and then added to the CONSER database, and, if successful, to recommend the means for employing the mechanism.

Progress Summary

The Task Group's initial work was to identify data elements from existing serial records to determine which could be transferred as is, which would need to be modified (and how), and which would need to be added in order to end up with at least a minimal-level CONSER record. This recommended data element set was presented for review to CONSER members at the May 2003 CONSER Operations meeting. Incorporating feedback from that review, the Task Group asked Robert Bremer of OCLC to develop a macro that would, using the data element set as a guide, take an existing OCLC serial record and create a separate electronic version serial record. With minimal human review, this record could then be added to the OCLC and CONSER databases. (The current version of the data element set is given in Appendix A.)

While Robert worked on creating the macro, Task Group members investigated using titles from Lexis-Nexis to test it, but discovered that separate electronic version records, though not CONSER records, already existed for a very large percentage of the Lexis-Nexis titles. The Group then polled the CONSER membership and ultimately selected titles from Ingenta for testing.

Upon delivery of the macro from Robert, Task Group members did some preliminary testing. Results were discussed at the ALA Midwinter Meeting in San Diego, and additional modifications were made to the macro. Live testing in OCLC, with feeds into the CONSER database, followed. Results and a demonstration of the macro were given at the May 2004 CONSER Operations meeting, and a few minor additions were recommended. At the Task Group's meeting during the ALA Annual Conference in Orlando, the macro was declared complete and ownership of the macro was turned over to CONSER. Robert Bremer agreed to create a similar macro for OCLC's Connexion client, which would also be turned over to CONSER.

Next steps

The developed macro should prove to be a valuable tool for CONSER libraries, prospectively helping them with the workload of creating separate electronic version records. However, this tool does not help CONSER libraries convert existing separate electronic version records. As the Task Group's review of the Lexis-Nexis and Ingenta titles revealed, many separate electronic version records exist in the OCLC database; they simply are not CONSER records. The Task Group recommends that a future CONSER group be established to investigate methods by which those non-CONSER records could be efficiently converted into CONSER records, perhaps using a combination of encoding level, 042 coding, and human review, as was done for the macro, to indicate the level of authentication and description and to ensure the quality and integrity of the CONSER database.

As Task Group members reviewed the Lexis-Nexis file, they discovered that some monographs were included. This was not unexpected, but it underscores the fact that the electronic version landscape is not limited to serials. Specifically, we recall, and reaffirm, the 1st Task Group on Journals in Aggregator Databases' "Next Steps" recommendations numbers 2 and 4:

2. Make a list of desirable sets of human-created analytics and recruit WorldCat Collection Sets contributors from among the OCLC membership.

4. Determine if the same specifications for serials are suitable for full-text monographs in aggregator databases, e.g. netLibrary.

With regard to number 2 above, note that the University of California, San Diego, is currently working on 13 such collection sets, some in partnership with other libraries (for a listing, see http://orpheus.ucsd.edu/disc/worldcat.htm).

As a final recommendation, in cases where no record exists in the OCLC database for any version of a given title, the Task Group recommends that catalogers make it a priority to contribute either an e-version record or a tangible-version record that can be cloned.

Appendix A

Data Element Set

The Third PCC Task Group on Journals in Aggregator Databases recommends the following treatment of fields in MARC bibliographic records when deriving base records for electronic versions. These base records will be added to the CONSER database. Certain fields are accepted as is from the source record, others are modified, and others are added. The details are below.

Deriving base MARC Records for Electronic Resources

Fields taken from the source record

The following fields will be taken from the source record as is. The source record may be the record for the print, the microform, or the CD-ROM version of the title. Fields not listed here or in the "Fields modified or added" chart below will not be carried forward from the source record.

034 041 043 055 100 110 111 245 246 250 255 260 310 321 362 440 490 500 504 505 507 514 515 518 520 521 522 525 546 550 580 600 610 611 630 650 651 700 710 711 730 740 780 785 800 810 811 830
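
As an illustration only, the following Python sketch shows how this carry-forward list might be applied, assuming a simplified representation of a record as a list of (tag, field content) pairs; the data structure and function name are illustrative and are not how the OCLC macro is actually implemented.

```python
# Illustrative sketch only: keep just the variable fields that are carried
# forward as is. A field is represented here as a (tag, content) pair; the
# real macro operates on OCLC's internal record format.

CARRY_FORWARD = {
    "034", "041", "043", "055", "100", "110", "111", "245", "246", "250",
    "255", "260", "310", "321", "362", "440", "490", "500", "504", "505",
    "507", "514", "515", "518", "520", "521", "522", "525", "546", "550",
    "580", "600", "610", "611", "630", "650", "651", "700", "710", "711",
    "730", "740", "780", "785", "800", "810", "811", "830",
}

def carried_fields(source_fields):
    """Return only the fields transferred unchanged from the source record."""
    return [(tag, content) for tag, content in source_fields
            if tag in CARRY_FORWARD]

# The 022 and 856 below are dropped at this step; they are handled in the
# "Fields modified or added" section instead.
print(carried_fields([
    ("245", "00$aAccent on living."),
    ("022", "$a1234-5678"),
    ("856", "40$uhttp://..."),
]))
```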

Fields modified or added

For each modified or added field, the action taken and an example of the resulting field (where applicable) are given below.

Field: Leader
Action: All values are either system generated or taken as is from the source record, except byte 17 (Encoding level) and byte 18 (Descriptive cataloging form). If byte 17 is blank or 1 in the source record, it is coded 1; all other values are coded 2. Byte 18 is coded "a".

Field: 001 Control Number
Action: System generated when the record is added to the OCLC database.
Example: 48321608

Field: 003 Control Number Identifier
Action: System generated when the record is added to the OCLC database.
Example: OCoLC

Field: 006 Additional Material Characteristics
Action: Add for the computer file format, using the following values: byte 00 = m; byte 05 = blank; byte 09 = d; byte 11 = transfer value from 008 byte 28; all other bytes blank.
Example: m d f

Field: 007 Electronic Resource Control Field
Action: Add for the computer file format. Add only the listed subfields, with the following values: $a = c; $b = r; $d = u; $e = n; $f = u.
Example: cr unu

Field: 008 Fixed-Length Data Elements
Action: Transfer data as is except for the following: byte 20 = blank; byte 23 = s; byte 39 = c.
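
As a rough illustration of the leader and fixed-field handling described above, the following sketch treats the leader and 008 as plain character strings indexed by byte position; the helper names and the string-based representation are illustrative assumptions, not part of the macro.

```python
# Illustrative sketch of the leader/006/007/008 handling described above.
# The leader and 008 are treated as plain strings indexed by byte position.

def set_byte(s, pos, value):
    """Return s with the single character at position pos replaced."""
    return s[:pos] + value + s[pos + 1:]

def derive_leader(source_leader):
    # Byte 17 (Encoding level): blank or 1 in the source becomes 1, else 2.
    level = "1" if source_leader[17] in (" ", "1") else "2"
    leader = set_byte(source_leader, 17, level)
    # Byte 18 (Descriptive cataloging form) is coded "a".
    return set_byte(leader, 18, "a")

def derive_006(source_008):
    # New 006 for the computer file aspect: byte 00 = m, byte 09 = d,
    # byte 11 copied from 008 byte 28; all other bytes blank.
    field = [" "] * 18
    field[0] = "m"
    field[9] = "d"
    field[11] = source_008[28]
    return "".join(field)

def derive_007():
    # 007 subfields added with fixed values, as (code, value) pairs.
    return [("a", "c"), ("b", "r"), ("d", "u"), ("e", "n"), ("f", "u")]

def derive_008(source_008):
    # Transfer as is except byte 20 = blank, byte 23 = s, byte 39 = c.
    new_008 = set_byte(source_008, 20, " ")
    new_008 = set_byte(new_008, 23, "s")
    return set_byte(new_008, 39, "c")
```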

Field: 010 LCCN
Action: Assigned by cataloger.
Example: 2003-356983

Field: 022 ISSN
Action: Transfer the number in $a to $y; retain other numbers as is.
Example: $y 1234-5678 $y 2345-6789
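
A minimal sketch of the 022 handling, again assuming subfields are represented as (code, value) pairs for illustration:

```python
# Illustrative sketch of the 022 handling above: the source $a is
# transferred to $y and any other subfields are retained as is.

def derive_022(subfields):
    """subfields: list of (code, value) pairs from the source 022 field."""
    return [("y", value) if code == "a" else (code, value)
            for code, value in subfields]

# [("a", "1234-5678"), ("y", "2345-6789")]
# becomes [("y", "1234-5678"), ("y", "2345-6789")]
```
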
Field: 040 Cataloging Agency
Action: Add the appropriate codes in $a and $c for the agency creating the record.
Example: $a OCoLC $c OCoLC

Field: 042 Authentication Code
Action: Code depends on whether or not the source record is a CONSER record.
Example: If CONSER: $a lcd; if not CONSER: $a msc

Field: 050, 060 LC and NLM Call Numbers
Action: Change the indicator values to 1st indicator = blank, 2nd indicator = 4. Transfer only $a.
Example: _4 $a QC861.2
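
A minimal sketch of the 050/060 handling, under the same illustrative (code, value) representation:

```python
# Illustrative sketch of the 050/060 handling above: the indicators become
# blank / 4 and only subfield $a is transferred.

def derive_call_number(subfields):
    """subfields: (code, value) pairs from the source 050 or 060 field."""
    indicators = (" ", "4")
    kept = [(code, value) for code, value in subfields if code == "a"]
    return indicators, kept

# [("a", "QC861.2"), ("b", ".A1 1999")] -> ((" ", "4"), [("a", "QC861.2")])
```
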
Field: 090 LC-Type Call Number
Action: Convert to 050 _4.

Field: 130 Main Entry - Uniform Title
Action: Create a 130 field for the uniform title if the record does not already have a 100, 110, or 111 field. If a 130 already exists, add the online designation. Ind 1 = 0; Ind 2 = blank; $a Title (Online).
Example: 0 $a 19th century music (Online)

Field: 240 Uniform Title
Action: If the record has a 100, 110, or 111 field, the uniform title goes in field 240 rather than 130. Ind 1 = 1; Ind 2 = 0; $a Title (Online).

Field: 245 Title Statement
Action: Remove any pre-existing $h subfield. Insert "$h[electronic resource]" following subfield $p, $n, or $a, as applicable, and adjust punctuation where the subfield is inserted (see "Title field manipulation" below).
Example: 00 $a 19th century music $h [electronic resource].

Field: 500 (DOB) Description Based On Note
Action: Delete any present and add the new note: Description based on print version record. Ind 1 = blank; Ind 2 = blank.
Example: $a Description based on print version record

Field: 530 Additional Physical Format Available Note
Action: Delete any present when deriving from a print record and add the new note: Also issued in print. Ind 1 = blank; Ind 2 = blank.
Example: $a Also issued in print.

Field: 776 Additional Physical Form Entry
Action: Add, with elements from the source record if present: Ind 1 = 1; Ind 2 = blank; $t = 130 or 245 $a, $n, $p; $x = ISSN; $w = OCLC number; $w = LCCN.
Example: 776 1_ $t Journal of biomolecular chemistry $x 2345-6789 $w (DLC)2003067894 $w (OCoLC)56789102
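
The following sketch illustrates how the 776 might be assembled; it assumes the title, ISSN, LCCN, and OCLC number have already been pulled from the source record's 130/245, 022, 010, and 001, and the function and parameter names are illustrative only.

```python
# Illustrative sketch of assembling the 776 linking field from data taken
# from the source record.

def derive_776(source_title, issn=None, lccn=None, oclc_number=None):
    subfields = [("t", source_title)]
    if issn:
        subfields.append(("x", issn))
    if lccn:
        subfields.append(("w", "(DLC)" + lccn))
    if oclc_number:
        subfields.append(("w", "(OCoLC)" + oclc_number))
    indicators = ("1", " ")
    return indicators, subfields

# Reproduces the shape of the example above:
# derive_776("Journal of biomolecular chemistry",
#            issn="2345-6789", lccn="2003067894", oclc_number="56789102")
```
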
Field: 856 Electronic Location and Access
Action: Delete any present in the source record.

Title field manipulation

The goal is to insert $h[electronic resource] into the right place in the title. The rules are as follows.

Placement of the subfield

Place after $p, if it exists

Else, place after $n, if it exists

Else, place after $a

Maintain punctuation integrity

If the subfield the $h is being inserted after ends in one of , ; : / = (comma, semicolon, colon, forward slash, equals sign), move this punctuation to the end of the $h subfield.

If the subfield the $h is being inserted after ends in "." (period), check whether the last word is an abbreviation. If so, keep the period where it is; if not, move the period to follow the $h subfield data just added.

Before: 245 00$aAccent on living.
After: 245 00$aAccent on living$h[electronic resource].

Before: 245 00$aAccess :$bthe newsmagazine of the American Dental Hygienists' Association.
After: 245 00$aAccess$h[electronic resource] :$bthe newsmagazine of the American Dental Hygienists' Association.

Before: 245 00$aAging /$cFederal Security Agency.
After: 245 00$aAging$h[electronic resource] /$cFederal Security Agency.
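
To make the placement and punctuation rules concrete, here is a rough Python sketch that operates on a 245 represented as a list of (code, value) subfield pairs. The abbreviation test is a deliberately small stand-in, since the report does not specify how the macro recognizes abbreviations; everything else follows the rules and examples above.

```python
import re

ELECTRONIC = "[electronic resource]"
MOVABLE_PUNCT = (",", ";", ":", "/", "=")

# A small stand-in abbreviation test; the report does not say how the
# macro decides that a trailing word is an abbreviation.
_ABBREVIATIONS = {"co.", "inc.", "dept.", "assn.", "soc.", "etc."}

def _ends_in_abbreviation(text):
    words = text.rstrip().split()
    if not words:
        return False
    last = words[-1].lower()
    return last in _ABBREVIATIONS or re.fullmatch(r"[a-z]\.", last) is not None

def insert_gmd(subfields):
    """subfields: list of (code, value) pairs for a 245 field."""
    # 1. Remove any pre-existing $h.
    subfields = [sf for sf in subfields if sf[0] != "h"]
    # 2. Find the insertion point: after $p, else $n, else $a.
    for target in ("p", "n", "a"):
        positions = [i for i, (code, _) in enumerate(subfields) if code == target]
        if positions:
            idx = positions[-1]
            break
    else:
        return subfields  # no $a, $n, or $p: nothing to do
    code, value = subfields[idx]
    stripped = value.rstrip()
    h_value = ELECTRONIC
    # 3. , ; : / = moves to the end of the $h (keeping its preceding space),
    #    as in "Access :" -> "Access" + "$h[electronic resource] :".
    if stripped.endswith(MOVABLE_PUNCT):
        subfields[idx] = (code, stripped[:-1].rstrip())
        h_value = ELECTRONIC + " " + stripped[-1]
    # 4. A final period moves too, unless the last word is an abbreviation.
    elif stripped.endswith(".") and not _ends_in_abbreviation(stripped):
        subfields[idx] = (code, stripped[:-1])
        h_value = ELECTRONIC + "."
    subfields.insert(idx + 1, ("h", h_value))
    return subfields

# [("a", "Accent on living.")] becomes
# [("a", "Accent on living"), ("h", "[electronic resource].")]
```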

Uniform Title Processing

The uniform title normally goes in the 130 field unless the record has a 100, 110, or 111 field, in which case the 240 field is used. If the record already has a 130 or 240, that field is used and adjusted. If the title has trailing parenthetical data, " : Online" is inserted before the closing parenthesis.

Creating a 130/240 field if none exists

Check to see if the record has either fields 100, 110 or 111

If not, create a 130 field; otherwise, create a 240 field

Set the indicators as follows:

o For 130: Ind 1 = 0; Ind 2 = blank

o For 240: Ind 1 = 1; Ind 2 = 0

Extract the title from the 245 field, taking subfields $a, $n, and $p, and place it in $a of the new field

Check for trailing parenthetical data

o If there is parenthetical data at the end

See if " : Print" is part of the string; if so, remove it

Add " : Online" to the end of the parenthetical data, before the closing parenthesis

o If there is no parenthetical data

Add "(Online)" to the end of the title

Using existing 130/240 field

Check for trailing parenthetical data

o If there is parenthetical data at the end

See if " : Print" is part of the string; if so, remove it

Add " : Online" to the end of the parenthetical data, before the closing parenthesis

o If there is no parenthetical data

Add "(Online)" to the end of the title

Before: 130 0 $aAging (Washington, D.C. : 1951)
After: 130 0 $aAging (Washington, D.C. : 1951 : Online)

Before: 245 00 $aAccent on living.
After: 130 0 $aAccent on living (Online)

Before: 245 00 $aAlcohol health and research world /$cNational Institute on Alcohol Abuse and Alcoholism.
After: 130 0 $aAlcohol health and research world (Online)

Before: 130 0 $aJournal of European public policy (Print)
After: 130 0 $aJournal of European public policy (Online)

Before: 130 0 $aNine (Edmonton, Alta. : Print)
After: 130 0 $aNine (Edmonton, Alta. : Online)
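
A rough sketch of the uniform title processing, following the rules and examples above. The handling of a bare "(Print)" qualifier is inferred from the Journal of European public policy example, stripping of final punctuation from an extracted 245 title is not shown, and the function names are illustrative only.

```python
# Illustrative sketch of the uniform title adjustment. An existing
# parenthetical qualifier gains " : Online" before the closing parenthesis
# (dropping "Print" when present); otherwise "(Online)" is appended.

def online_uniform_title(title):
    title = title.rstrip()
    if title.endswith(")") and "(" in title:
        base, _, qualifier = title[:-1].rpartition("(")
        qualifier = qualifier.strip()
        if qualifier == "Print":                 # "(Print)" -> "(Online)"
            qualifier = "Online"
        elif qualifier.endswith(" : Print"):     # "... : Print" -> "... : Online"
            qualifier = qualifier[:-len(" : Print")] + " : Online"
        else:                                    # "...)" -> "... : Online)"
            qualifier += " : Online"
        return base + "(" + qualifier + ")"
    return title + " (Online)"

def uniform_title_field(title, has_1xx):
    """Return (tag, indicators, subfields) for the new 130 or 240 field."""
    tag = "240" if has_1xx else "130"
    indicators = ("1", "0") if has_1xx else ("0", " ")
    return tag, indicators, [("a", online_uniform_title(title))]

# "Aging (Washington, D.C. : 1951)" -> "Aging (Washington, D.C. : 1951 : Online)"
# "Accent on living"                -> "Accent on living (Online)"
# "Nine (Edmonton, Alta. : Print)"  -> "Nine (Edmonton, Alta. : Online)"
```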
