The next step in automating the processing of illustrated book pages is to discriminate among those illustration types requiring different electronic treatments. By studying the statistical and morphological details of the various illustration types that must be discriminated, we sought characteristic signatures in the electronic images of the captured example pages that would allow a given illustration's type to be classified with some degree of accuracy. Although these methods have not been implemented in software during this project, they are designed with such a future implementation in mind.
As discussed in the conclusions section below, a surprising result has been that very few distinctions among process types are needed, making some of the following discussion of academic interest.
There is no need to distinguish between illustration processes that will have the same image processing treatment. The limiting case is that only halftones need to be discriminated from other illustration types. Then we need only classify a given illustration as being halftone or other. A case short of that would further distinguish "hard" and "soft" process illustrations. In our study, "hard" process illustrations included the engravings, etchings, and halftones; "soft" process illustrations included photogravures, mezzotints, lithographs, and collotypes.
At moderate resolutions, the detailed image structures are largely obscured, leaving less information to use in discriminating among illustration types, but also lowering the need to make distinctions, since the softening of features allows similar treatment for multiple illustration types.
As a general rule, at very high spatial resolutions many more distinctions can be made than at more moderate resolutions. But only the moderate resolution case is of current interest from the standpoint of economic viability and equipment availability. In fact, our investigation into the details of the structure seen in the various illustration process types was hampered somewhat by several related factors: (1) the so-called zoom images acquired with the Ektron camera were not at as high a resolution as desired owing to lens limitations; (2) the poor focus seen in those images (owing to having no live two-dimensional display during setup) made them of little use; and (3) no microscope was available with digital imaging capability.
Morphology refers to the shape of image structures. Much can be inferred from the subtle details of individual strokes or cuts in an illustration.
Scale and Texture
Different illustration processes have different dominant scales. A process's dominant scale is the distance over which its most significant structural feature exists.
A variety of techniques exist in the literature [19,20] for analyzing the scales over which given images have information content; much texture analysis involves comparison of the energy present at different scales.
The periodicity of the structures seen in halftones and machine gravure is very distinctive and amenable to automatic detection. The dimensionality (one- or two-dimensional nature) of the periodicity can assist in distinguishing between these two methods of illustration.
The granular nature of collotypes and etchings produces very distinctive textures. The scale or size of the grains gives clues to the process used. Statistical measures such as the ratio of perimeter to area of closed structures can be used to further characterize one process over another. Reticulating patterns can also be characterized statistically, with measures showing the scale of such a pattern's curvature or the circuitousness of its path.
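The scale-based texture measures described above can be sketched concretely. Below is a minimal illustration, assuming a difference-of-Gaussians decomposition as one simple stand-in for the multi-scale energy comparisons cited in the literature; the function name, scale choices, and synthetic test pattern are illustrative, not taken from the project's software.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_energy(img, sigmas=(1, 2, 4, 8)):
    """Energy (variance) in band-pass channels at several scales.

    Each band holds the detail lost when smoothing from one scale to
    the next; the band with the most energy approximates the image's
    dominant scale.
    """
    img = img.astype(float)
    energies = []
    prev = img
    for s in sigmas:
        smooth = gaussian_filter(img, sigma=s)
        band = prev - smooth          # detail removed at this scale
        energies.append(float(np.var(band)))
        prev = smooth
    return energies

# Synthetic texture: a fine checkerboard concentrates its energy at
# the finest scale, so the first band should dominate.
y, x = np.mgrid[0:128, 0:128]
fine = ((x // 2 + y // 2) % 2).astype(float)
e = band_energy(fine)
dominant = int(np.argmax(e))
```

A coarse-grained texture, such as a collotype's larger reticulation, would instead push the energy peak toward the larger sigmas.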
Tonality is inherently difficult to quantify; just how many distinguishable shades are present in a given image?
The extent (in terms of the number of distinguishable shades) of swings in tonality and the spatial distance across which these may occur represent very pointed characterizations of the texture and subtlety of a pattern in an image.
The steepness of edges (in terms of the number of distinct transitional tonalities in going from fully on to fully off a stroke) in an image pattern is also a distinguishing feature.
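One crude way to put a number on "how many distinguishable shades are present" is to count the gray levels that each cover more than a small fraction of the image. The sketch below assumes 8-bit data; the threshold fraction and function name are hypothetical choices, not measures from the study.

```python
import numpy as np

def distinguishable_shades(img, min_fraction=0.001):
    """Count 8-bit gray levels that each cover at least `min_fraction`
    of the pixels, ignoring levels touched only by noise."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return int(np.sum(hist >= min_fraction * img.size))

flat = np.full((100, 100), 128, dtype=np.uint8)           # one shade
ramp = np.tile(np.arange(256, dtype=np.uint8), (256, 1))  # all shades
```

On these two extremes the measure returns 1 and 256 respectively; real illustrations fall in between, and the spatial distance over which the count swings could be probed by applying the measure to local windows.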
Halftones have unique characteristics that are well suited to automated detection. Traditional halftones from the 19th and early 20th centuries (prior to the computer age, which opened a wide range of variations) have a periodic structure of dark regions of varying size. The orientation of this periodic grid is at an angle to the bottom edge of the book page; this angle is almost always 45 degrees, and only rarely are angles of 15, 30, or some other number of degrees used.
More about the method we used for locating halftone regions can be found in the user's manual for the utility found in Appendix 2.
Moderate capture resolutions (400 dpi, 8 bit) were recommended as a significant result of this study. This places illustrated book capture into an economically advantageous realm at the edge of today's commercial image capture abilities. While this permits excellent rendition of the essence and adequate rendition of the detail of most commercial book illustrations, it does not provide complete information about their structure.
Without the higher 1,000 or 2,000 dpi resolutions, insufficient information is present to enable most of the automated techniques for discriminating illustration process type discussed in this section to operate. Further, even if a special scanner arrangement could be constructed that permitted small sub-regions of the illustration to be sampled at the higher resolutions for the purpose of characterizing the illustration process type, the 400 dpi data available is insufficient to support the special processing envisioned in most automated techniques.
This sounds grim, so why did we recommend the 400 dpi? Because it works remarkably well. An interesting book by Becker portrays, side by side, many original artists' drawings with the new embodiment they took when they became book illustrations rendered via the methods available in book publishing. Sometimes the original artist, but more often a completely different artisan, rendered the essence and often the detail of the original work into a new medium with a completely different structure.
In an analogous way, the 400 dpi capture performs an optical averaging function that more or less completely obscures the original structure (while also making it unavailable for specialized processing). That averaging function, operating much as does the human eye when presented with excessive detail, does a surprisingly good job of distilling the hidden structural features of the illustration process down into an image having the intended net effect on the viewer. The alternative would be painstakingly capturing a 2,000 dpi image, at huge expense, then running it through a variety of sophisticated algorithms aimed at detecting its structure and inferring its process type, then specially processing it to convert those structures to the best moderate-resolution digital equivalent. Our suspicion is that the latter result would not be significantly better.
In conclusion, at moderate resolutions where no process structure is fully preserved, one approach is that only halftone illustrations need to be treated differently - and therefore reliably detected - with other illustration process types lumped into the "Other" category.
Another possible approach says that the "Other" category needs to be split further into "hard" and "soft" illustration processes. The "hard" category includes only those hard-edged, cutting processes that have feature sizes and dominant scales equal to or larger than the 400 dpi capture resolution limit allowing their edges to remain sharp and well-defined. These images would benefit from an edge-sharpening operation, at least for the 400 dpi master, if not for its derivatives. Very fine-scale hard process illustrations would have poorly defined edges, beyond the roll-off of the modulation transfer function (MTF) and would thus generate a wide range of tonalities or shades, much as the soft process illustration types do. These would be grouped with the soft processes, which would not generally benefit from a sharpening operation.
Techniques exist for distinguishing halftone illustrations from other illustrations, although these have not been implemented in the halftone utility developed under this project. While the current halftone utility runs on 600 dpi data, it is expected that 400 dpi data offers sufficient information both to detect the presence of halftone data in an illustration region and to descreen it. Although the utility's current method for measuring halftone frequency has difficulty at the lower resolutions, it is expected that another approach, perhaps using frequency-domain methods, should be workable with such images.
The most appropriate electronic treatment of a given illustration type has two components. The first is the set of parameters that describe the electronic image (such as spatial resolution and grayscale bits per pixel). The second component comprises any additional processing that must be applied to that electronic image to place it into final form for either viewing on a computer monitor or for printing. These two destinations will likely require different processing steps.
When considering how to proceed with this study, it was assumed that very high-resolution images would be available. The first step was to "map" each physical content type (seen embodied on a page) to an electronic content type, i.e., a spatial resolution and a bit depth. Given that the very high-resolution images were not available and that (in any case) it would not be wise to assume they would be available in economically-viable mass conversion projects, the focus shifted to moderate resolution images. At more moderate resolutions, bit depths below 8 bits do not give the payback in space savings to offset the complications they create, so all multitonal content was assumed to be 8 bit (color not being considered in the project).
Possible image processing steps include conservative brightening, deskewing, descreening, sharpening, scaling, and compression, each discussed below.
For each distinguishable class of illustration type (each member of which shares a set of attributes measurable in the originally captured electronic image), a set of appropriate processing steps was identified.
Some processing is applied to the master image. Additional steps are applied to create derivative screen access images and derivative print access images.
In the ultimately flexible system - a dynamic scanner - all images are captured at very high resolution and in full color and are analyzed to see how much detail is present, both spatially and tonally. Then a digital master image is developed that captures all that detail without any excess. If color data is present, the color representation is retained. If not, and grayscale content is present, a grayscale representation is retained. If no grayscale content is found, only a bitonal representation is stored. This could occur on a region-by-region basis.
In creating a derivative image, it may be desirable to convert one feature to another, representative one. An example is found in the case of preparing a grayscale image for bitonal printing. The grayscale representation is converted to a bitonal one through halftoning. Similarly, when taking a very high-resolution bitonal image down to a lower spatial resolution, conversion to a grayscale representation may be more appropriate.
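The bitonal-to-grayscale conversion mentioned above amounts to area averaging: each block of bitonal pixels becomes one gray value. A minimal sketch, assuming an integer reduction factor and 8-bit output (the function name is hypothetical):

```python
import numpy as np

def bitonal_to_gray_downsample(bits, factor):
    """Reduce a bitonal (0/1) image by an integer factor, averaging
    each factor x factor block into an 8-bit gray value."""
    h, w = bits.shape
    h2, w2 = h // factor, w // factor
    blocks = bits[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return (blocks.mean(axis=(1, 3)) * 255).astype(np.uint8)

# Alternating black/white rows average to a mid gray at half resolution.
stripes = np.zeros((8, 8), dtype=float)
stripes[::2] = 1.0
gray = bitonal_to_gray_downsample(stripes, 2)
```

Fine bitonal strokes that would alias or vanish under naive subsampling survive as intermediate gray values, which is exactly why the text suggests the grayscale representation at lower resolutions.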
Different illustration processes have different dominant scales. The dominant scale is the distance over which its most significant structural feature exists. An appropriate choice of span for a sharpening kernel could depend on knowing this scale.
Conservative brightening, which stretches the tonal range while clipping no values, is an appropriate first step for the master page image containing any of the illustration types.
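A clip-free tonal stretch of this kind can be sketched as a linear remapping of the occupied range onto the full 8-bit range; the function below is an illustrative implementation, not the project's code.

```python
import numpy as np

def conservative_brighten(img):
    """Linearly stretch the occupied tonal range to [0, 255] without
    clipping any values: the minimum maps to 0, the maximum to 255."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.astype(np.uint8)   # flat image: nothing to stretch
    return ((img - lo) * 255.0 / (hi - lo)).round().astype(np.uint8)

dull = np.array([[60, 100], [140, 180]], dtype=np.uint8)
bright = conservative_brighten(dull)
```

Because every input level maps to a distinct output level and none saturate, no tonal information is discarded, which is what makes the step safe for a master image.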
Deskewing using a bilinear or higher-order interpolation is an appropriate next step, particularly if the efficiency of subsequent steps can be improved (document understanding, etc.). It is worth noting that a mild low-pass filtering effect results from this step, so extremely small angle skews are best left untouched.
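A deskew step with the small-angle guard described above might look like the following sketch, using scipy's bilinear rotation (order=1); the 0.1-degree cutoff is an assumed threshold, not a figure from the study.

```python
import numpy as np
from scipy.ndimage import rotate

def deskew(img, skew_deg, threshold=0.1):
    """Rotate by the measured skew using bilinear interpolation.

    Below `threshold` degrees the mild low-pass effect of the
    interpolation outweighs the benefit, so the image is returned
    untouched.
    """
    if abs(skew_deg) < threshold:
        return img
    return rotate(img.astype(float), -skew_deg, reshape=False,
                  order=1, mode='nearest')

# Skew a synthetic 'text line' by 3 degrees, then correct it.
img = np.zeros((64, 64))
img[30:34, 8:56] = 255
fixed = deskew(rotate(img, 3, reshape=False, order=1), 3)
```

Higher-order interpolation (order=3) would reduce the low-pass effect further at some computational cost.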
Halftones have their screen frequency measured and are then descreened, which involves two filtering steps. For more information, see the software user's manual in Appendix 2.
A digital master which is a fully grayscale image is most appropriate. The text regions should stay at the capture resolution or be scaled up, while the descreened halftone region could remain at the capture resolution or could be scaled down. Sharpening should be applied to the textual regions but not to the descreened halftone (which has already experienced a mild sharpening).
Screen access images would probably be prepared by recompositing the page, while scaling the two pieces of content down to the screen resolution, then compressing with a progressive method like progressive JPEG or interlaced GIF.
Print access images could be delivered as PDF files, where the text regions have been scaled up to the native resolution of the printer (600 or 1200 dpi), thresholded by a high-quality process and ITU Group 4 compressed and where the halftone regions have been left as grayscale and compressed using moderate compression JPEG (8:1 ratio). This allows the print driver or printer controller to decide how best to re-halftone the photo region in light of its knowledge of the print engine's brightening or darkening characteristics. If re-halftoning is performed before sending to the printer, a calibration method should be used to determine the appropriate brightening operation to perform prior to halftoning.
These processes have such small-scale or subtly tonal features that a softened representation is key to their fidelity.
A digital master which is a fully grayscale image is most appropriate. The text regions should stay at the capture resolution or be scaled up, while the illustration region could remain at the capture resolution or could be scaled down. Sharpening should be applied to the textual regions but perhaps not to the illustration region, where the moderate sampling resolution itself creates the desired softened rendering. It is possible that a different brightening operation could be applied to the illustration region than to the text regions.
Screen access images would probably be prepared by recompositing the page, while scaling the two pieces of content down to the screen resolution, then compressing with a progressive method like progressive JPEG or interlaced GIF.
Print access images could be delivered as PDF files, where the text regions have been scaled up to the native resolution of the printer (600 or 1,200 dpi), thresholded by a high-quality process and ITU Group 4 compressed and where the illustration regions have been left as grayscale and compressed using moderate compression JPEG (8:1 ratio). This allows the print driver or printer controller to decide how best to halftone the illustration region in light of its knowledge of the print engine's brightening or darkening characteristics. If halftoning is performed before sending to the printer, a calibration method should be used to determine the appropriate brightening operation to perform on the illustration region prior to halftoning.
This type of illustration is handled just like the soft process illustrations, since the moderate resolution has converted the illustration's hard, fine features to an impressionistic, softer appearance.
These illustrations have bold-lined, primarily bitonal content with features large enough not to be washed out by the moderate resolution capture process.
A digital master that is a fully grayscale image is most appropriate. The text regions should stay at the capture resolution or be scaled up, while the illustration region could remain at the capture resolution or could be scaled down. Sharpening should be applied to the textual regions and to the illustration region although the span of the sharpening filters may be different for the two regions. It is possible that a different brightening operation could be applied to the illustration region than to the text regions.
Screen access images would probably be prepared by recompositing the page, while scaling the two pieces of content down to the screen resolution, then compressing with a progressive method like progressive JPEG or interlaced GIF.
Print access images could be delivered as PDF files, where the text regions have been scaled up to the native resolution of the printer (600 or 1,200 dpi), thresholded by a high-quality process and Group 4 compressed and where the illustration regions have been left as grayscale and compressed using moderate compression JPEG (8:1 ratio). This allows the print driver or printer controller to decide how best to halftone the illustration region in light of its knowledge of the print engine's brightening or darkening characteristics. If halftoning is performed before sending to the printer, a calibration method should be used to determine the appropriate brightening operation to perform on the illustration region prior to halftoning.
Halftones are particularly difficult to capture in digital form, as the screen of the halftone and the grid comprising the digital image will often conflict with one another, resulting in distorted digital image files exhibiting moiré patterns at various scales on computer screens or printers. The need for a method to satisfactorily convert halftones has been most pressing, as the halftone letterpress process became one of the most dominant illustration types used in commercial book runs beginning in the 1880s.
This project has resulted in the development of a practical, working utility to detect the location and characteristics of a halftone region on a page (known to contain a halftone) and appropriately process that halftone region independently from its surrounding text.
Since this utility is not embedded inside a specific scanner, but runs externally on a UNIX server, it may be used on data from any scanner that can supply the appropriate raw bit stream (e.g., unprocessed grayscale of a sufficient spatial resolution).
Below is an example of the utility locating the bounding rectangles of six different halftone regions on the same page, followed by an enlarged comparison of the unprocessed and processed halftone.
Detecting Halftone Regions
Processing Halftone Regions
Raw Grayscale Capture
Processed Halftone Information
Some Portable Document Format (PDF) files have also been prepared, which show raw and processed halftone images next to one another, allowing easy experimentation with zoom levels and printing results. These PDF files are found at: http://www.picturel.com/halftone.
The left-hand page shows the automatically detected halftone region removed from a 600 dpi page, but without any descreening process applied. The right-hand page of each PDF shows the same area with the descreening process applied. For neither image has any scaling or compression been performed.
The Cornell University Library Department of Preservation tested and evaluated the prototype utility for halftone processing. Observations from that testing are listed in this section. Italicized paragraphs in this section represent comments by Picture Elements (the designers of the utility) on some of those observations.
The evaluation was performed using serial and book publications dating from the 1880s to the 1940s that contained a range of halftones, from 110 line to 175 line screens. With very few exceptions, these halftones represented screens rotated to a 45 degree angle. Some examples represented separate plates, and others were presented within a page of text.
As the example in the previous section illustrates, the halftone processing utility enables one to "see through" the screen of the halftone to the pictorial content beneath. This process worked equally well on a range of screen rulings, suppressing or smoothing the halftone screen but never entirely eliminating its presence. At higher screen rulings, there is a denser information base for the utility to interpret, and the result is less evident halftone "shadow" in the processed image. The following illustration demonstrates the halftone processing for a 120 halftone screen ruling, placed at 45 degrees. The utility worked equally well on halftones at other screen rulings placed at 45 degrees.
120 lpi Halftone
The suppression process was most successful on halftones placed at a 45 degree angle. Other angles proved more troublesome, but fortunately these occurred very rarely in 19th century commercial printing.
The utility currently assumes that all the halftones to be processed have a screen ruling at a 45 degree angle, owing to the overwhelming prevalence of this choice. While the designers have methods for detecting the screen angle, these are not included in the current version of the utility. Some residual diagonal texture is seen in the processed images. While it is not enough to cause moiré patterns, allowing the design objective to be met, it seems possible that additional work on the filter may reduce it still further.
Resampling halftone images introduces the likelihood of moiré patterning from screen frequency interference. This was evident when some of the full resolution images were scaled to 100 dpi to create derivatives for Web access, as illustrated in the following examples. Obviously, image processing routines can be used to minimize the introduction of moiré patterns as derivative images are prepared, but typical processes use blurring filters, which do not discriminate between screen rulings, and the results can vary dramatically. Further, this blurring process degrades resolution, which is already compromised in the scaling effort. Note the comparison of 100 dpi scaled images - the one on the left was resampled without using a blur filter; the one in the middle was created using the standard blur filter; and the one on the right was created using the halftone utility.
The utility is well suited to the creation of derivative images. By descreening first, moiré patterns are not created during scaling. Rather than simply blurring, the descreen process attempts to filter out the dominant frequency of the halftone screen. This is done by cascading a low-pass filter with a high-frequency emphasis filter, lessening the blurring effect. For most of the halftones, the descreening algorithm produced images that can be sub-sampled at any frequency without moiré patterns. For others, including those placed at angles other than 45 degrees, only about 95% success can be claimed, since faint patterns still appear at a few frequencies, but they are much less pronounced than in the original images, and they do not occur at most frequencies.
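The cascade just described, a low-pass filter sized to the measured screen frequency followed by a high-frequency emphasis to win back edge sharpness, can be sketched as follows. This is an illustrative approximation in the spirit of the description, not the project's actual filter; the sigma and emphasis values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def descreen(img, screen_period_px, emphasis=0.5):
    """Low-pass sized to the screen period, cascaded with a mild
    high-frequency emphasis (an unsharp-style boost) that restores
    some edge contrast without reintroducing the screen."""
    img = img.astype(float)
    lowpass = gaussian_filter(img, sigma=screen_period_px / 2.0)
    detail = lowpass - gaussian_filter(lowpass, sigma=1.0)
    return lowpass + emphasis * detail

# A pure screen pattern (period 6 px, 45 degrees) should be strongly
# attenuated, leaving a nearly flat field.
y, x = np.mgrid[0:96, 0:96]
screen = 128 + 100 * np.cos(2 * np.pi * (x + y) / 6)
out = descreen(screen, 6)
```

Because the low-pass cutoff tracks the measured screen period rather than being a fixed blur, higher screen rulings lose less pictorial detail, consistent with the observation above that finer screens yield less visible halftone "shadow."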
Project staff printed the original image files and the processed image files to determine the effect on print quality. Black and white printers print grayscale images by using a halftone dithering pattern to simulate the gray. The combined influence of the original halftone pattern and the printer's halftone pattern will increase the likelihood of moiré patterns. Removing (or greatly reducing) the screen pattern of the original halftone virtually eliminates this problem. The converted grayscale file is still at the mercy of the printer's resolution and halftoning algorithm, but the additional challenges posed by interpreting the original halftone screen have been nearly eliminated. Results from the HP 4MV laser printer, which imparts a 106 lpi screen, produced prominent moiré on the unprocessed halftones screened at 120 and 133, and noticeable moiré on the 150; the processed images printed cleanly, with little to no moiré. Prints were also created on an HP 4500 Color LaserJet, which imparts a 150 lpi screen. Similar results were produced, although the moiré patterning was less prominent on the unprocessed images and the processed images came out beautifully.
Although the utility was designed specifically for halftones, it was tested on an engraving, an illustration type that is subject to similar pattern interference issues as halftones when printed or scaled. The halftone utility detected the engraving and treated it, but the end results were predictably disappointing, in part because the engraving really has no single, constant frequency. Additionally, the utility is designed to smooth out halftone dot patterns, enabling one to simulate the effect of viewing detail from the original illustration, rather than merely a grid of dots. In the case of engravings, the close-up view reveals the essential attributes of the process. Because the halftone utility softened the edges of lines and hatch markings, the process resulted in an obvious degrading of the detail view.
Proper behavior on engravings is not to be expected. The approach used for halftone detection and frequency measurement would likely never yield an accurate result when applied on an engraving. One stated assumption for the halftone utility is that the page image presented to it must be known to contain a halftone. Future versions could be modified to reject the processing of pages that seem to contain no halftones.
The halftone conversion utility is also constrained to rectangular shapes. When confronted with irregularly shaped halftones, the utility processes a rectangular area determined to be part of the halftone region. If that region contains text, the text is treated as halftone information, which results in a blurring of the characters.
The use of rectangular regions for halftone processing represents a simplified approach that is reasonably well justified for older materials. Typographic innovations in the twentieth century have introduced more non-rectangular halftone regions, but rectangular regions are still much more prevalent. The high-frequency emphasis pass in the algorithm helps sharpen up any text that intrudes into the rectangular area.
While one point of view contends that the halftoning is an essential part of the illustrated page artifact, another holds that the original illustration or photograph that preceded the halftone is of more direct interest. Putting aside these philosophical questions, the practical problems inherent in digitizing and digitally manipulating halftones argue strongly for application of the processing utility to scanned halftones. This leads, however, to a new technical problem - how best to re-aggregate this distinct grayscale image with the balance of the content from the enclosing page.
A possible, but disappointing, approach is to take the descreened halftone image and "rescreen" it for some target output device and then merge this back with the rest of the page which is then entirely bitonal. This still has moiré problems and does not allow any given printer controller to decide how to optimize the re-halftoning for its print engine. Grayscale data gives that flexibility.
A variety of emerging standards are up to the task of linking a grayscale subregion to an enclosing bitonal page, including ITU-T Recommendation T.44 for color internet facsimile, and the TIFF-FX standard for internet color fax (RFC 2301). We consider yet another one here: Adobe's Portable Document Format, PDF.
PDF permits multiple pieces of image content of varying types to be placed accurately onto an enclosing page. This allows the descreened halftone to remain a grayscale image (optionally at a lower resolution and JPEG compressed - often good choices) while the textual portions of the page are thresholded to bitonal and compressed using Group 4 (ITU-T Recommendation T.6) compression. Substantial space savings are achieved in this way.
Picture Elements has hardware that implements a thresholding operation using the Multiple Scale Thresholding algorithm of the VST-1000 integrated circuit. By this means, the original raw 600 dpi page is thresholded, producing a high-quality bitonal image. The rectangular region(s) found to contain halftones are then "whited-out" or overwritten as all zero values. This bitonal image is then ITU Group 4 compressed.
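The whiting-out step is a simple rectangular overwrite; following the source's convention that zero values represent white, a sketch might look like this (the function name and tuple layout are illustrative):

```python
import numpy as np

def white_out(bitonal, rects):
    """Overwrite detected halftone rectangles with zeros (white, in
    this convention) so the bitonal text layer carries no screen
    data. `rects` holds (top, left, bottom, right) tuples."""
    out = bitonal.copy()
    for top, left, bottom, right in rects:
        out[top:bottom, left:right] = 0
    return out

page = np.ones((20, 20), dtype=np.uint8)   # all-black toy "page"
clean = white_out(page, [(5, 5, 10, 15)])
```

The cleaned bitonal layer then compresses very efficiently under Group 4, since the blanked rectangles are long runs of a single value.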
Using another utility Picture Elements has created, the original page then can be recomposed by laying the grayscale of the descreened halftone region on top of the bitonal text and white background and storing the result as a page in a PDF file.
Some example compound PDF files are found at: http://www.picturel.com/halftone.
As in any software project, the designers kept a wish list of ways in which the halftone processing utility could be improved. Since it is offered as public domain source code under the BookTools Project (http://www.picturel.com/booktools), others may undertake these enhancements and contribute the resulting improvements back to the community.
This study has produced a number of important results. The means for characterizing the key features of book illustrations as they pertain to digital imaging have been developed, and guidelines for assessing conversion requirements recommended. This is especially critical for publications from the mid-19th to the mid-20th century, which were printed on paper that has become brittle. These volumes must be copied to preserve their informational content, and by defining quality requirements for electronic conversion, digital imaging can become an attractive alternative to conventional means of reformatting, such as microfilming and photocopying.
The basic groundwork for preparing an automated method for detecting and processing different illustration types has been prepared, and an example utility for processing halftones developed and tested. The halftone processing utility in particular will be a most welcome addition to the preservation tool kit. One of the major difficulties encountered by institutions converting text-based material has been in the capture of halftone information. With its introduction in the late 1880s, halftone printing revolutionized the way illustrations were created in mass publications. Within 20 years, it had virtually replaced the use of wood engraving in relief printing, and resulted in an increase in the graphical content of many books and journals.
Library of Congress reports have noted the special problem of printed halftone illustrations, which are prone to distortion and moiré in both the capture and presentation stages. The Library has identified four suggested means for treating halftones, all of which present their own problems. Obviously the ability to automate their treatment in a manner to ensure good capture that is free of distortion would be of tremendous benefit to cultural repositories that are converting late 19th century and early 20th century materials.
Since the halftone utility addresses a vertical slice of the more general problem of distinguishing and appropriately processing a wide range of illustrations, it will likely not perform properly when presented with other illustration types. Nonetheless, this work prepares the ground for characterizing and processing other graphic illustration types. Beyond the scope of this present project, the intent is to later develop additional utilities for processing the remaining illustration types.
This project also facilitates a shift in thinking about how to create the highest possible image quality for a given collection. This new capture architecture has the appropriate raw grayscale or color data collected from any scanner whose document handling capabilities suit the peculiarities of a particular item, such as a bound volume, a 35mm slide, or a 40 inch wide architectural drawing. The scanner choice can be made on the basis of its physical suitability and the quality of its raw image data, without regard to any special processing needs associated with the source document itself. All special processing and manipulation of raw data from these various sources is then performed in an off-line, largely scanner-independent manner by a centralized server we might call a post-processing server. In this way we are not constrained by the variable and inconsistent processing offered within the many different scanners which are needed to overcome the physical peculiarities of each item in a collection. This work will be particularly important in developing the means for capturing bound volumes without having to resort to disbinding or to the creation of photo-intermediates.
Mark Dimunation, Chief, Rare Book and Special Collections Division
Carl Fleischhauer, Digital Conversion Technical Coordinator, National Digital Library
Basil Manns, Senior Physical Scientist
Irene Schubert, Chief, Preservation Reformatting Division
John Dean, Director, Department Preservation & Conservation
Nancy Green, Curator of Prints and Drawings, Johnson Museum of Art
Michele Hamill, Photographic Conservator, Department of Preservation & Conservation
Gregory Page, Associate Professor, Art Department
Tatyana Petukhova, Paper Conservator, Department of Preservation & Conservation
John Reps, Professor Emeritus, Department of City & Regional Planning
The Halftone Utility User's Manual is included with the source code distribution of the halftone processing utility software at http://www.picturel.com/halftone as the file peiHalfTone.pdf.