Moving Images >> Video Streams as Sources for Files (last thorough update 2009)
The clarity of a digital moving image file is strongly influenced by the clarity of the source used to produce it, when the content is derived from a video signal sent via an appropriate interface from a camera, or from a pre-existing video recording on tape or disk. If the original source is motion picture film, and the context is an end-user application (as opposed to a specialized professional application), current production technology generally requires that the film be transferred to video before the digital file is made.
There are exceptions to the preceding, mentioned but not explored on this page. First, in some professional settings, film may be scanned frame by frame to produce a digital representation in a format like DPX, OpenEXR, or MXF, which may in turn encode the frames using JPEG 2000 or another still-frame compression algorithm. Such an approach is part of the digital cinema DCDM specification. The image content in these formats may then be output to end-user digital video files in a manner different from that described below. Second, certain kinds of Computer Aided Design or Computer Aided Manufacturing (CAD-CAM) renderings include motion capabilities, and CAD-CAM software often permits creators to "save as" video. This capability is especially likely to be employed for three-dimensional drawings that may be rotated to simulate viewing from various points of view. Third (and this is much like the preceding exception), animated content created in a computer authoring format like FLA (Macromedia Flash Project File Format) can be saved as SWF (Macromedia Flash SWF File Format) files and played in Macromedia's proprietary software, or saved in other digital video formats like QuickTime or the MPEG-4 file format. Quality for these types of content will have different dependencies than that of typical camera-generated video.
Table 1 lists a few examples of source video selected from United States standards. The composite example represents the National Television System Committee (NTSC) analog broadcast standard, the analog television system used in most of the Americas, Japan, South Korea, Taiwan, and some other nations and territories. It has a nominal frame rate of 30 frames (60 fields) per second. The "numbers" for the PAL (much of Europe, Africa, and Asia) and SECAM (France, the Russian Federation, and portions of Africa) standards are similar; they have a frame rate of 25 frames (50 fields) per second. In the course of producing video files from analog composite sources, the picture data is converted to a digital component format; if compressed bitstreams are being produced, this step precedes compression encoding.
The bottom four rows of digital component examples are based on the Advanced Television Systems Committee (ATSC) digital standard, which replaced NTSC in the United States in 2009. It is worth noting that the ATSC digital standard permits other frame rates, e.g., 24 fps for video content derived from American motion picture films. In coming years, the Library of Congress is almost certain to receive 24 fps material, and may receive video at other frame rates as well, e.g., 25 fps non-NTSC, non-ATSC content from European and other nations.
Generally speaking, "all other things being equal," larger picture sizes offer greater clarity. Video-stream picture sizes are often expressed as horizontal lines and samples per line. These measures are comparable to pixels, but the values will change if the digital-file production process (a) rescales the image, e.g., to quarter-screen size,1 (b) drops some lines, e.g., 483 becomes 480, in order to have a value divisible by 16, helpful in computation, or (c) produces square pixels, in which case 720x480 becomes 640x480.

Table 1. Simplified overview of major categories of source material
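The three adjustments just listed can be expressed as simple arithmetic. The helper names below are invented for illustration, and the 4:3 display aspect ratio is an assumption that holds for the standard-definition examples cited on this page:

```python
# Illustrative sketch (assumed helpers, not from this page) of the
# three picture-size adjustments described above.

def quarter_screen(samples, lines):
    """(a) Rescale to quarter-screen size by halving each dimension."""
    return samples // 2, lines // 2

def crop_to_macroblock(lines, block=16):
    """(b) Drop lines so the count is divisible by 16: 483 -> 480."""
    return lines - (lines % block)

def square_pixel_width(lines, display_aspect=4/3):
    """(c) Square-pixel width implied by a 4:3 display aspect ratio:
    720x480 non-square pixels render as 640x480 square pixels."""
    return round(lines * display_aspect)

print(quarter_screen(640, 480))    # (320, 240)
print(crop_to_macroblock(483))     # 480
print(square_pixel_width(480))     # 640
```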
Beyond picture size, the clarity of a source signal is influenced by other factors, e.g., the type of sampling (4:4:4, 4:2:2, or 4:2:0, representing the relative representation of luminance [first number] and chrominance information [second and third numbers]) and the bit depth per sampled channel (e.g., 10 or 8 bits). Older material reformatted from composite video (the first row in Table 1) was often produced using an older generation of camera, and image artifacts may result from the processes used to transcode from composite to component. Source video streams may also be derived from compressed recordings, and the type and degree of previously applied compression will also affect clarity.
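The impact of sampling scheme and bit depth on raw data volume can be computed directly. The sketch below is illustrative (the helper and the frame sizes are assumptions for NTSC-like video, not figures from this page):

```python
# Hypothetical helper: bits per uncompressed frame for common chroma
# subsampling schemes. The scheme determines how much chroma is kept
# relative to luma: 4:2:2 halves horizontal chroma resolution, and
# 4:2:0 halves it both horizontally and vertically.
CHROMA_FRACTION = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}

def bits_per_frame(width, height, scheme, bit_depth):
    luma = width * height
    chroma = 2 * luma * CHROMA_FRACTION[scheme]  # two chroma channels
    return int((luma + chroma) * bit_depth)

# 720x480 at 4:2:0, 8 bits: an average of 1.5 bytes per pixel
print(bits_per_frame(720, 480, "4:2:0", 8) // 8)   # 518400 bytes
# 720x486 at 4:2:2, 10 bits, 30 fps: roughly 210 Mbit/s uncompressed
print(bits_per_frame(720, 486, "4:2:2", 10) * 30)  # 209952000 bits/s
```

The arithmetic makes clear why 4:2:2 10-bit sources carry noticeably more picture information than 4:2:0 8-bit ones, even at the same picture size.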
Clarity is also adversely affected by interlacing and enhanced by progressive image display. An interlaced image consists of two video fields, captured a few milliseconds apart. When the subject of a scene is moving, the time difference between the two fields means that interlaced images will contain a small amount of blur within a single video frame. The particular methods used to transfer film to video will also influence clarity. New footage will have improved clarity if progressive rather than interlaced frames are produced by new camera models or new datacine or film-scanning devices.
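A sketch (assumed, not from this page) of how the two fields interleave within one frame: even-numbered scan lines belong to one field and odd-numbered lines to the other, which is why motion between the two field captures appears as blur or "combing" inside a single frame.

```python
def split_fields(frame):
    """Separate an interlaced frame (a list of scan lines) into its
    two fields: top field = even-numbered lines, bottom field = odd."""
    return frame[0::2], frame[1::2]

def bob_deinterlace(field):
    """Naive 'bob' deinterlace: double each field line to rebuild a
    full-height (but half-vertical-resolution) progressive frame."""
    out = []
    for line in field:
        out.extend([line, line])
    return out

frame = ["t0", "b0", "t1", "b1", "t2", "b2"]  # 6-line toy frame
top, bottom = split_fields(frame)
print(top)                   # ['t0', 't1', 't2']
print(bob_deinterlace(top))  # ['t0', 't0', 't1', 't1', 't2', 't2']
```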
Although experts agree that, at the same picture size, the clarity of a progressive image surpasses that of an interlaced image, they are divided when asked to choose between progressive scan and more scan lines (greater picture size). Supporters of 1080i video argue that greater clarity results from having more pixels, even if interlaced, while supporters of 720p argue that progressive scan produces superior results.
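The arithmetic behind the debate (an illustrative calculation, not from this page): each 1080i field carries more luma samples than a full 720p frame, but only half of the 1080-line raster is refreshed at each capture instant.

```python
# Luma samples per captured image, and per second at a nominal
# 60 fields/s (1080i) versus 60 frames/s (720p).
field_1080i = 1920 * (1080 // 2)   # one interlaced field
frame_720p  = 1280 * 720           # one progressive frame

print(field_1080i)        # 1036800 samples per field
print(frame_720p)         # 921600 samples per frame
print(field_1080i * 60)   # 62208000 samples/s for 1080i
print(frame_720p * 60)    # 55296000 samples/s for 720p
```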
Variation in source content for compression is partly represented in the MPEG-2 and -4 standards (and possibly others) via what are called levels. Table 2 illustrates the relation between MPEG-2 levels and quality in terms of the standard's conformance points. The standard permits the placement of signals at intermediate points, e.g., picture sizes like 1280x720, and at data rates below the maxima listed here.

Table 2. MPEG-2 levels and their characteristics
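The conformance-point idea can be sketched as a lookup: each level caps picture size and bit rate, and a signal conforms to the least demanding level whose maxima accommodate it. The figures below are the commonly cited Main Profile maxima (an assumption here; consult the MPEG-2 standard itself for authoritative values):

```python
# Sketch of MPEG-2 level conformance points (Main Profile maxima,
# stated here as an assumption, not taken from this page's Table 2).
LEVELS = {
    "Low":       {"w": 352,  "h": 288,  "mbps": 4},
    "Main":      {"w": 720,  "h": 576,  "mbps": 15},
    "High 1440": {"w": 1440, "h": 1152, "mbps": 60},
    "High":      {"w": 1920, "h": 1152, "mbps": 80},
}

def smallest_level(width, height, mbps):
    """Return the least demanding level whose maxima accommodate the
    signal; intermediate points like 1280x720 simply fall below a
    level's maxima rather than defining a level of their own."""
    for name, cap in LEVELS.items():
        if width <= cap["w"] and height <= cap["h"] and mbps <= cap["mbps"]:
            return name
    return None

print(smallest_level(1280, 720, 20))  # High 1440
print(smallest_level(720, 480, 6))    # Main
```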
1 Some industry jargon is derived from teleconferencing standards: QCIF for Quarter Common Intermediate Format (176 non-square pixels by 120 or 144 lines), CIF for Common Intermediate Format (352 non-square pixels by 240 or 288 lines). Full screen in a 4:3 aspect ratio is sometimes referred to as CCIR 601 (525/60) (720 non-square pixels by 480 lines), while HDTV is High Definition Television (various sizes, including 1920 pixels by 1080 lines), typically with a 16:9 aspect ratio. In a square-pixel environment, quarter-screen picture size is 320x240 pixels, while full screen is 640x480 pixels. These sizes are for the United States NTSC and ATSC standards. In European nations as well as some others, standard definition will yield a quarter-screen image of 352x288 non-square pixels and a full-screen image of 720x576. See http://www.iki.fi/znark/video/conversion/ for an interesting discussion of this and related topics.