Sustainability of Digital Formats: Planning for Library of Congress Collections


Still Images >> Quality and Functionality Factors

Scope
The factors discussed here apply to still images that convey their meaning in visual terms, e.g. pictorial images, photographs, posters, graphs, diagrams, documentary architectural drawings.  Formats for such images may be bitmapped (sometimes called raster), vector, or some combination of the two.  A bitmapped image is an array of dots (usually called pixels, from picture elements, when referring to screen display), the type of image produced by a digital camera or a scanner.  Vector images are made up of scalable objects—lines, curves, and shapes—defined in mathematical terms, often with typographic insertions.  Graphic design software may be primarily of the paint variety and intended to produce bitmapped images or primarily of the draw variety and intended to produce vector images.  Formats may combine vector and bitmapped data, either in layers that will be combined at display or output, or in a single flat representation (for example, allowing a vector-defined shape to be "filled" with bitmapped image data). Some combination formats have been termed metafile formats.

This discussion concerns a variety of media-independent still image formats—bitmapped and vector—and their implementations. Some formats, e.g., JPEG 2000, allow for many different implementations compared to, say, GIF_89a, a format whose uses are relatively more constrained. Within the overall range, many implementations are for end-user applications, intended for home or classroom use, Web sites, and the like. At the other end of the range are implementations for specialized professional applications,1 some of which are relevant to preservation undertakings carried out by archives.

Not covered here are factors that might be significant for special classes of images in a particular context, such as geospatial representations, medical imagery, or images of pages that are primarily of text.  One omitted category, for example, is that of "drawings" produced by the design systems that support a building or manufacturing process, e.g., computer-aided design (CAD) or manufacturing (CAM). In some cases, the Library will prefer to collect examples of CAD-CAM drawings in "flattened" form, e.g., rendered as a bitmap. In other cases, however, the Library will wish to possess copies that retain process-support functionality, e.g., the ability to move or change lines. This desire, or an analogous desire regarding other special categories, may lead us to identify quality and functionality factors beyond those listed here at a later time or in other parts of this Web site. 

Albeit indirectly, this discussion does provide information pertaining to compound documents, which integrate still images with other forms of expression. For example, many PDF (Portable Document Format) files combine images with character-based (non-bitmapped) text.  The factors outlined here regarding still images also pertain to the pictorial or illustrative elements in a PDF file. Meanwhile, the factors that pertain to text in a compound document are described in Text: Quality and Functionality Factors and in individual format descriptions like the one for PDF. Most aspects of this discussion of still images also apply to images embedded in multi-page formats; see Functionality Beyond Normal Rendering below.

Normal rendering
Normal rendering for still images is associated with end-user implementations, and consists of on-screen viewing and printing to paper for personal or classroom use.  For many classes of images, user communities will demand the ability to zoom in to study detail, to scale images to display at different sizes or on devices of different resolution, and/or the ability to produce publication quality output.  Freezing of vector images as bitmaps of adequate resolution can still support normal rendering, as can flattening a layered image by combining the layers.

For formats implemented in specialized professional applications, the same type of normal rendering may not obtain, although most professional image editing systems facilitate display or printing in a manner comparable to that described in the preceding paragraph. The underlying images in these professional applications, however, may be in a format, e.g., camera raw, for which specialized output or display devices are required. For many professional-image-format implementations, normal rendering will be afforded by a service file that has been produced from a master image in a specialized format.

Clarity (support for high image resolution)
Clarity refers to the degree to which "high resolution" content may be represented within this format.  In this context, the term clarity is meant broadly, referring to the factors that will influence a careful (even expert) viewing experience.  Generally speaking, this factor pertains to bitmapped representations and not to vector-based images like Scalable Vector Graphics (SVG_1_1) files, which are inherently scalable and often employ color in a fully managed way. (However, see comment about "clean edges," below.) Clarity is tested in a practical sense when the reproduction is repurposed, e.g., used as the basis for a new printed publication, or when an online user enlarges an image to see the fine detail.

For bitmapped images, the two characteristics most often associated with clarity are pixels per linear unit (often colloquially expressed as "dots per inch") and bit depth ("bits per pixel" or "bits per color channel").  As an example of the latter, an 8-bit indexed-color file (the GIF format limit) offers less clarity than a 24-bit "true-color" image (supported by many other formats), at the same spatial resolution (dots per inch).  This is due to the contouring or posterization that often occurs when bitmapped images are represented with low bit depth.
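As a rough illustration (not drawn from the source), the contouring effect of low bit depth can be simulated in a few lines of standard-library Python by quantizing a smooth tonal ramp to fewer levels and counting the distinct tones that survive. The helper name is invented for this sketch:

```python
def quantize(value, bits):
    """Reduce an 8-bit channel value (0-255) to `bits` of precision."""
    levels = (1 << bits) - 1              # e.g., 3 bits -> indices 0..7
    index = round(value * levels / 255)   # pick the nearest coarse level
    return round(index * 255 / levels)    # scale back to the 0-255 range

gradient = list(range(256))               # a smooth 256-step tonal ramp
coarse = [quantize(v, 3) for v in gradient]

print(len(set(gradient)))                 # prints 256: smooth before
print(len(set(coarse)))                   # prints 8: visible banding after
```

The same arithmetic explains the format limits cited above: an 8-bit indexed-color file can hold at most 2^8 = 256 distinct colors, while a 24-bit "true-color" image can represent 2^24, roughly 16.7 million.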

For specialized professional applications, clarity is often of great importance. In some cases, clarity is—to stretch a photographic analogy—developed and visible: an image in a given format successfully represents an extended dynamic range or color gamut. In other cases, clarity is latent: a file contains data that can be processed to yield one or more image outputs that provide desired values. Examples of the latter include camera raw images, extended-data-range images, images with "linear gamma," and so on. Meanwhile, professional image applications may exploit specialized color encodings like CIELAB, CIE XYZ, sRGB, RIMM and ROMM RGB, Adobe RGB, and what is variously called multispectral or hyperspectral imagery. (See the section below, support for multispectral bands.)

Clarity in a broad sense will also be adversely affected by the presence of the visible artifacts that may result from the application of lossy compression or watermarking.  (Note that compression as it detracts from the sustainability evaluation factor transparency is cited in the definition of that factor in Sustainability Factors.)  If lossy compression is applied, the quality of the particular codec (compression/decompression algorithm and software) becomes significant to clarity.

The scalability of vector images means that clarity is generally not an issue; they are designed to be enlarged, although the enlargement of combination images or metafiles may be limited by the resolution of their bitmapped elements.  In some vector formats, the creator can influence the look of a rescaled on-screen presentation, for example, by specifying whether display software should favor "clean edges" (which may entail the slight repositioning of lines or the edges of shapes) or "geometric precision." 
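In SVG, for example, this creator choice is exposed through the `shape-rendering` presentation attribute, whose defined values include `crispEdges` and `geometricPrecision`. A minimal sketch (the helper name is invented for illustration) generating the two variants:

```python
def hairline_svg(shape_rendering):
    """Return a tiny SVG whose 1px stroke is rendered per the given hint."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
        '<rect x="10.5" y="10.5" width="80" height="80" fill="none" '
        'stroke="black" stroke-width="1" '
        f'shape-rendering="{shape_rendering}"/></svg>'
    )

crisp = hairline_svg("crispEdges")            # renderer may snap edges to pixels
precise = hairline_svg("geometricPrecision")  # renderer keeps exact geometry
```

With `crispEdges`, a viewer may reposition the half-pixel-offset stroke so it lands cleanly on device pixels; with `geometricPrecision`, it renders the stated coordinates exactly, accepting some softness at certain scales.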

Some image formats allow a single image to be stored as a set of independent layers that are superimposed to create the final image.  Image-element layering is particularly important to creators, and support for layering is mentioned below in Functionality Beyond Normal Rendering. In many cases, images are "flattened" into a single layer at the end of the creative process.  In other cases, however, the creator will transmit a layered image for use.  For example, prepress images used to create printing plates may employ layers tailored to characteristics of the printing process, such as higher spatial resolution for line art layers than for continuous tone images or the representation of monochrome color layers, e.g., what printers call "spot color," independent of four-color layers.  Retention of the layered structure in such cases could contribute to clarity.
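Flattening is, at bottom, per-pixel compositing: each layer is blended onto the one below it, most commonly with the Porter-Duff "over" operator. A one-channel sketch (illustrative only, with a hypothetical helper name):

```python
def over(top, bottom, alpha):
    """Porter-Duff 'over': blend one channel of two layers into one value."""
    return round(top * alpha + bottom * (1.0 - alpha))

# A 50%-opaque white layer over a black background flattens to mid gray.
# After flattening, only the single blended value remains; the two source
# layers can no longer be edited independently.
print(over(255, 0, 0.5))   # prints 128
```

This is why flattening is irreversible: the blended result discards exactly the layer-by-layer information (separate resolutions, spot-color planes) whose retention the paragraph above says could contribute to clarity.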

The importance of clarity for a particular subclass of images should be considered in relation to the creator's intent or the context of creation or original use.  The subtle color differentiation offered by, say, 24-bit-deep color is unlikely to be important in a graph in which the intention is to distinguish a red from a blue column.  Images intended only for online display are likely to have lower spatial resolution than those intended for printing.

Color maintenance
Color maintenance refers to a format's support for color management, i.e., processes that monitor and transform image data in order to maximize the retention of color rendering in terms of human perception. Color in this context concerns the gamut expressed via tristimulus values, for example, in terms of the Red-Green-Blue (RGB) color model. Human perception of color was the subject of careful study by the International Commission on Illumination (CIE), yielding in 1931 one of the first mathematically defined color spaces: the CIE 1931 XYZ color space.2
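The relationship between an RGB encoding and the CIE 1931 XYZ space can be made concrete with a short sketch. Assuming the standard sRGB transfer curve and the commonly published sRGB-to-XYZ matrix for the D65 white point (both from the sRGB specification, not from this page):

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer curve (input and output in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Map a non-linear sRGB triple (0..1) to CIE 1931 XYZ tristimulus values."""
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# sRGB white (1, 1, 1) maps approximately to the D65 white point:
white = srgb_to_xyz(1.0, 1.0, 1.0)
```

A color management system performs transformations of this kind in both directions, using the ICC profiles mentioned below to characterize each device's particular matrix and curve.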

Color maintenance depends upon a format's ability to support color encoding in different color spaces3 and to store the metadata needed by color management systems, e.g., the inclusion of a color map for indexed-color files or an ICC profile for the capture device, e.g., the digital camera or scanner, or for the artist's decisions about color, dating from the time of image creation or final edit. (For information on the retention of the data represented in the multispectral and hyperspectral imaging generally associated with scientific research, see Support for multispectral bands, below.)

During the last few years, imaging specialists have refined the concept of image states and, in 2004, this concept was described and defined in the international standard Photography and graphic technology—Extended Colour encodings for digital image storage, manipulation, and interchange (ISO 22028-1). At the highest level, there are two states: scene-referred, which "represents estimates of the color-space coordinates of the elements of a scene," and picture-referred, which "represents the color-space coordinates of the elements of a hardcopy or softcopy image." The latter state has two subcategories: original-referred, which is "typically produced by scanning artwork," and output-referred, an image "that has undergone color-rendering appropriate for a specified real or virtual output device and viewing conditions." For some photographers and users of photography, image-state information will play a role in a production workflow, e.g., when a fresh image moves from the field to the printed page. There do not appear to be well-established metadata conventions for expressing these states, nor do there appear to be file headers (or other structures) in which to inscribe such data. Comments welcome.

The image states described in the preceding paragraph are defined in terms of color. It is worth noting, however, that one can extend the concept of image state to grayscale images and perhaps to such things as the "native" pixel count of an image when it is initially demosaiced,4 e.g., in a camera. Thus it may be that we will come to consider more than color maintenance when we examine a given format's accommodation of image-state metadata.
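The demosaicing step mentioned above (and in footnote 4) can be sketched briefly. This is an illustrative toy, not any camera's actual pipeline: it shows the RGGB Bayer layout and one bilinear-interpolation step, with invented helper names:

```python
def bayer_color(row, col):
    """Color sampled at a sensor site in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Half the sites are green (matching human sensitivity), a quarter each
# red and blue:
pattern = [bayer_color(r, c) for r in range(2) for c in range(2)]

def green_at(mosaic, r, c):
    """One bilinear-demosaicing step: estimate green at a non-green site
    by averaging its four green neighbors."""
    return (mosaic[r-1][c] + mosaic[r+1][c]
            + mosaic[r][c-1] + mosaic[r][c+1]) / 4
```

The "native" pixel count referred to above is fixed at this step: every output pixel receives three channel values, two of which are estimates interpolated from neighboring sites rather than direct measurements.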

Support for graphic effects and typography
This factor applies to still image formats that support vector graphics. It refers to the support within the format for scalable shapes, labels, legends, and other vector-graphic features. It also refers to the degree to which the format supports the use of shadows, filters, or other effects as applied to fill areas and text, offers levels of transparency, and manages the specification of fonts and patterns.

Support for multispectral bands
Refers to support for the inclusion and documentation of multiple spectral bands in an image, generally employed to support scientific analysis, in contrast to the widely adopted color models oriented toward human perception, e.g., RGB or CMYK.3 Multispectral and hyperspectral images capture image data at specific frequencies across the electromagnetic spectrum. Each band is separated from the overall spectrum by means of filters or by the use of instruments that are sensitive to particular wavelengths, including infra-red and ultraviolet, not visible to the eye.

Multispectral sensors usually have between 3 and 10 different band measurements in each pixel of the images they produce, e.g., visible green, visible red, near infrared, etc. Hyperspectral sensors measure energy in narrower and more numerous bands (as many as 200), providing a continuous measurement across the spectrum and providing data that is more sensitive to subtle variations in reflected energy. For example, multispectral imagery can be used to map forested areas, while hyperspectral imagery can be used to map tree species within the forest. Multispectral and hyperspectral technologies were originally developed for space-based imaging and are used in scientific and geospatial work. In the realm of cultural history, multispectral technology has also been employed for activities like the interpretation of ancient papyri and other documents; for example, see the Archimedes Palimpsest project.
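The vegetation-mapping use mentioned above typically rests on band arithmetic such as the widely used Normalized Difference Vegetation Index (NDVI), computed from the red and near-infrared bands. A minimal sketch, with illustrative reflectance values chosen for the example rather than taken from this page:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two band reflectances."""
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in near infrared and absorbs red,
# so it scores high; sparse cover or bare soil scores near zero:
print(round(ndvi(0.50, 0.08), 2))   # prints 0.72
print(round(ndvi(0.30, 0.25), 2))   # prints 0.09
```

Computations like this are only meaningful if the file's metadata records which wavelength each band captured, which is the documentation requirement discussed in the following paragraph.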

Does color maintenance apply to multispectral and hyperspectral images, where color generally does not relate to tristimulus values and human perception as defined in the 1931 CIE publication? In multispectral and hyperspectral imaging, the distance from human perception is emphasized by the occasional use of words like "false color." Nevertheless, there is a sense in which something like color maintenance applies. In order to support scientific or technological inquiry, the metadata embedded in, or associated with, multispectral and hyperspectral images must document the wavelength used to "expose" each band, and it may also document the intention. For example, mid-infrared radiation at 1550-1750 nm is often placed in one band in order to image vegetation and soil moisture content and some forest fires. In the format description pages at this Web site, a given format's ability to record such metadata is noted under the headings Self-documentation and/or Support for GIS metadata.

Functionality beyond normal rendering
Various still image formats support functions beyond those mentioned above in Normal Rendering, e.g., the manipulations and outputs possible with extended data like that found in camera raw or layered images. Some formats provide multi-resolution functionality by storing the image as a series of independent arrays, each representing the image at a different spatial resolution (e.g., the Kodak Image Pac associated with the Photo CD, or the FlashPix fpx format). More recently, multi-resolution capability has been supported much more flexibly by formats that employ wavelet compression, as in MrSID_MG3 or JP2_FF (JPEG 2000) files, as described in the following paragraph. Some formats display images progressively, revealing full clarity in stages. Another beyond-normal functionality concerns multipage images (e.g., multipage TIFF files) or simple animations (e.g., animated GIFs). Animations of only a few frames are included in this web site's still image category; longer animated works are covered in the moving image category. In some cases, the Library of Congress may select beyond-normal-rendering images for its collections, while in other cases, such images may be flattened into more typical bit-mapped form for permanent retention.
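The independent-array approach to multi-resolution storage can be sketched as a simple image pyramid: each level is the previous one downsampled by two. This is a toy illustration in the spirit of that design, not the format-specific encoding of any product named above:

```python
def halve(image):
    """Downsample a grayscale image 2x by averaging each 2x2 block."""
    return [
        [(image[r][c] + image[r][c + 1]
          + image[r + 1][c] + image[r + 1][c + 1]) // 4
         for c in range(0, len(image[0]) - 1, 2)]
        for r in range(0, len(image) - 1, 2)
    ]

def pyramid(image, levels):
    """Store the image as a series of arrays at decreasing resolutions."""
    series = [image]
    for _ in range(levels - 1):
        image = halve(image)
        series.append(image)
    return series

full = [[128] * 8 for _ in range(8)]
series = pyramid(full, 3)
print([len(level) for level in series])   # prints [8, 4, 2]
```

A viewer can then serve the smallest array that satisfies the requested display size; wavelet-based formats achieve the same effect without storing each level redundantly.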

Additional features are associated with emerging formats like JP2_FF (JPEG 2000), and it is not clear how widely these features will be implemented in software, how broadly they will be adopted, and how significant the aspects of the viewing experience that they support will be to users.  Examples include the ability to select regions of interest (ROI) to receive higher-resolution treatment during compression and the ability to attach metadata to a point or region of an image. JPEG 2000 allows those compressing images to control the progression in which aspects of a large image file can be displayed most efficiently to a user, for example, starting with the whole image at full size but low quality or expecting the user to pick a relevant area using a small-size image and zoom in to high spatial resolution and quality for that area.  In JPEG 2000 and some other formats, ROI, quality layers, and progression order are part of a family of features that can be used for sophisticated interactive access in a client-server mode.  This interactivity is reinforced by the continuing development of associated functional specifications like the JPEG 2000 Interactive Protocol standard (JPIP).  This web site will continue to monitor these emerging features and will discuss their applicability with Library staff, updating this document as needed.

1 Specialized professional applications include the following, and more:

— Professional photography, where a family of camera raw formats is finding favor in some contexts. Some commentators view raw images as more of a data set than a picture; pictures result when raw images are processed in various ways.
— Workflows that make use of extended tonal data, especially for color including extended color gamuts, or that depend upon linear representation of intensity, sometimes nicknamed linear gamma to contrast with the proper term gamma corrected.
— Image-state-based workflows, where images are categorized as scene-referred or picture-referred, which may in turn be original-referred or output-referred.
— Prepress activities, e.g., the preparation of images for reproduction on paper, as in books or magazines. For an example of the intricacies in this field, see David McDowell's May-June 2006 update for the IPA (Association of Graphics Solutions Providers, formerly the International Prepress Association).
— Applications—some of which are prepress—that employ layered images, including images that mix bitmaps with other types of graphics.

2 Commission Internationale de l'Eclairage (CIE) Proceedings, 1931. Cambridge University Press, Cambridge, U.K., 1932.

3 A color model is an abstract mathematical model describing the way colors can be represented numerically, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a color space.

4 Most digital camera color filter arrays employ the Bayer pattern (an array with twice the number of green elements than red or blue, to produce sufficient green information to satisfy the needs of human perception). The conversion of this data to the familiar red, blue, and green channels of an RGB image is called demosaicing.


Last Updated: Thursday, 05-Jan-2017 15:59:12 EST