A Plea For Film Digitization (was: Digital restoration of silent films)

From: David P. Hayes
Newsgroups: alt.movies.silent
Date: Tuesday, January 13, 1998 12:27 AM

Bob Birchard addressed the subject of how many gigabytes/terabytes are said to be necessary to store a feature film on a digital storage medium:
>As promised, here are some real world figures on the current Fox
>release "Anastasia," which was digitally inked and painted and
>transferred to film. These figures come from Mark Weathers at the Fox
>Animation Studio in Phoenix and are publi[s]hed in the latest issue of the
>Fox Animation newsletter, "FASTimes"
>Each Frame of Film Data: 12 MB
>Each second of film: 288 MB
>Completed Film Data: 1,492,584 MB (1.49 Terabytes)
> That's about 1,068,988 3.5" floppy disks i[f] anybody's
>counting!!!!!! [or 106 DVD-ROMs--DH]
> The memory required for any single frame isn't all that much (and
>even with JPeg compression you could get a more than acceptable film
>output) [an image 10 inches wide scanned at 400 dpi optical resolution
>will yield a perfectly fine film negative--even with JPeg]. The problem
>is with the sheer volume of frames involved in a feature film. We're
>talking hundreds of Syquest tapes, which makes it an impractical long
>term storage solution at the moment. This is the equivalent of short
>tinting rolls in a silent negative--and makes it more likely that some
>of the pieces could be misplaced or lost.
> While the cost is currently prohibitive to do digital restoration,
>these costs will come down and we'll see more digital clean-ups (whole
>films or parts thereof) in the future--but the results will be
>transferred back to film for long-term storage for many years to come.

I'd like to know if Fox with "Anastasia" deliberately digitized the image at far more resolution than 35mm film would capture, and whether this was dictated by the lack of control over the film grain when 35mm receives an image from a pre-existing source.

An analogy may help to clarify this point:

A screen door can be seen to resemble a video display inasmuch as both have a pattern of squares lined up with precision. If you were to place one screen door against another (or against a window screen) of the same matrix, it is possible to line up the metal wiring so that light traversing the square openings of each of the two screens is not obstructed. This is likened to the manner in which a digital video image is reproduced as a copied digital video image: the color of each pixel's square is duplicated in the same position on the receiving medium. This analogy can't be applied to film stock.

The individual grains of light-sensitive "dots" on film stock are invisible to the operator engaged in copying from another piece of film or from a video source; the grain positions remain unknown until the film stock has left its dark enclave and been processed, and by then it's too late. If an operator can't manipulate film stock so that there's 1:1 correspondence of digital pixel to film grain, then it becomes compelling to increase the density of digital pixels so that each film grain receives such a finely detailed image that the average hue of the pixel squares blurred onto that grain will better represent the video source.

(It doesn't help that the film grains are not in the same position from one film frame to the next. The manufacturing process can't guarantee that. Digital video gives each pixel an "address" that places it at a screen location that remains fixed from one frame to the next.)

If the concept of the "average hue" needs clarification:

Think back to the screen door analogy, and recall that if the two screens have squares of the same size, then when the wires don't line up, each square in the forefront screen will display beneath it parts of four squares of the rear screen. If the squares are of different colors or shades, the resulting blurring of these four on the film copy will make the film image appear less sharp than film is capable of delivering; the blur appears adjacent to pixels of a solid color (presuming the object is of sufficient size to cover several pixels at the object's best-possible clarity), so the width of the borderline blur becomes a factor in the sharpness of the image. However, if there are three times as many wires on the rear screen as on the forefront screen, there will be nine times as many squares. With that many squares, when the lines can't be calibrated, each forefront-screen square will contain four full squares (the ones that fall into the center of the forefront square) and parts of twelve others (the ones falling onto the edge-most area within the square).

The schema below illustrates this, with angular characters representing the pixels partially captured by the film grain, and with "X" representing the pixels fully captured within a single grain:

/AA\
>XX<
>XX<
\AA/

(The 16 characters--arranged in four groups of four--may be easier to understand if you display this in a non-proportionally-spaced font.)

If the sharpness resulting from this pattern proves unsatisfactory, the number of pixels can be increased again. There will always be a one-row frame of outside pixels which can be expected to be merely partially captured. The inside pixels--no matter how many rows and columns--will be captured in full.
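The counts in the screen-door example generalize, and a short sketch can verify them (Python here, purely illustrative; the general formulas are my extrapolation from the two cases given in the text): with n times as many rear wires per side, a misaligned forefront square touches (n+1) squared rear squares, of which (n-1) squared fall entirely inside it and the one-square-wide border is only partially covered.

```python
def overlap_counts(n):
    """For a forefront square overlaid, misaligned, on a rear grid with
    n times as many wires per side, count the rear squares captured in
    full versus those captured only in part."""
    full = (n - 1) ** 2           # inner squares, captured whole
    total = (n + 1) ** 2          # every rear square the forefront square touches
    partial = total - full        # the one-square-wide border, captured in part
    return full, partial

# Same-size grids: no full squares, parts of four (the first case above).
print(overlap_counts(1))   # (0, 4)
# Three times as many wires: four full squares plus parts of twelve.
print(overlap_counts(3))   # (4, 12)
```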

And if you want an example closer to home for most film buffs:

* A super 8 print made from 16mm source material didn't look as good as a print of the same gauge made from 35mm source material, even though super 8 has less resolution than both 16mm and 35mm and thus couldn't do justice to the detail present in a 16mm source, let alone the 35mm.

* 70mm prints made from matted 35mm negatives look much better than the 35mm prints made of the same movies for the same opening dates.

* Laserdiscs render lesser results from 16mm than from 35mm (even when the transfer is from the 16mm original of a film shot on that gauge), yet 35mm-sourced laserdiscs are of about the same sharpness as laserdiscs made from IMAX sources. Thus it does happen that additional detail in a source can fail to be communicated to the intended copy.

The figure I've often seen given as the horizontal resolution of 35mm film is 1100 lines. Given the 4:3 width-to-height ratio of the physical film frame itself (the anamorphic lens spreads out the grain; it doesn't change the physical dimensions of the frame as it exists on the emulsion), that results in about 800 lines of vertical resolution (figuring that the grains are on average the same height as width, as they would be given that the nature of manufacturing doesn't conventionally permit oblong grains to be placed in a desired direction). 800 vertical x 1100 horizontal equals 880,000 pixels per frame.

The number of bytes per frame would be that number times the number of bytes per pixel. This latter is based on the number of colors and shades achievable. (256 colors can be had with a single eight-bit byte, 65,536 colors can be achieved with 16 bits [two 8-bit bytes], and 4,294,967,296 colors (yes, over four billion) from 32 bits [four 8-bit bytes, or two 16-bit words]. Something between the latter two should be best; 20 bits allows 1,048,576 colors.)

So 800 x 1100 x 24 [the number of bits permitting 16 million colors, which should be sufficiently exact for this example] equals 21,120,000 bits per frame, or 2,640,000 (21,120,000 divided by 8) eight-bit bytes--about 2.6MB.
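The frame-size arithmetic, as a sketch (Python, purely illustrative; note that 24 bits per pixel is three 8-bit bytes, so the bit count divides by eight to give bytes):

```python
# Back-of-the-envelope frame size for 35mm at the resolution figures above.
h_res, v_res = 1100, 800            # horizontal and vertical lines of resolution
bits_per_pixel = 24                 # 2**24 = 16,777,216 colors; three 8-bit bytes

pixels = h_res * v_res              # 880,000 pixels per frame
bits = pixels * bits_per_pixel      # 21,120,000 bits per frame
bytes_per_frame = bits // 8         # 2,640,000 bytes, about 2.6 MB

print(pixels, bits, bytes_per_frame)
```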

2,640,000 bytes is well under the 12MB that Bob Birchard cited, but the difference is understood through information that he provided with his statistics: Fox/"Anastasia" scanned at 400 dots-per-inch for 10 inches, or 4000 lines of horizontal resolution per frame, which is nearly four times the 1100-line figure that is invariably given as the horizontal resolution of 35mm. I submit that the extra resolution was scanned to allow for the misalignment of grain with pixels that I discussed earlier in this article.

This fourfold excess in horizontal resolution, multiplied by a fourfold excess in vertical resolution, computes to a 16x redundancy figure, which would bring the 12MB figure down to 750KB per frame, or about a byte per pixel on average (suggesting that clever means are at work to combine pixels based on demonstrable shared characteristics, so that the color range is not sacrificed).

750KB x 24 (frames per second) x 60 (seconds per minute) x 90 (minutes for a feature film) equals 97,200,000,000 bytes, or about 97 billion, which, if a DVD disc holds 14 gigabytes (double-layered, double-sided), would be about seven such discs--about as many discs as reels of 35mm film. Seven discs take up far less room than seven reels of 35mm film. Seven discs don't pose a problem of lost sections (no more than film anyway, and probably less, because backup copies will be cheaper to make). With recordable CD-ROMs being sold at the retail level for $2-$3 each, and with the likelihood that recordable DVD-ROMs will cost in the same neighborhood once/if the technology takes off, we're talking about a low cost of storage--low enough to permit quick approval of redundant copies.
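The whole-feature storage calculation, as a sketch (Python, purely illustrative, using the per-frame and disc-capacity figures from the text):

```python
import math

# Storage for a 90-minute feature at 750KB per frame.
bytes_per_frame = 750 * 1000        # 750KB per frame
fps = 24                            # frames per second
minutes = 90                        # typical feature length

total_bytes = bytes_per_frame * fps * 60 * minutes
print(total_bytes)                  # 97,200,000,000 -- about 97 billion bytes

dvd_capacity = 14 * 10**9           # double-layered, double-sided DVD: 14 GB
print(math.ceil(total_bytes / dvd_capacity))   # 7 discs
```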

(Don't confuse DVD-ROM and DVD storage levels with the DVD currently available to consumers. The technology as I'm discussing it is strictly limited to its storage use, which is not tied to any particular level of resolution or color-scale reproduction.)

Nonetheless, the explanation I've provided for Bob's facts does point to a problem implicit in the above discussion of pixel-grain alignment. When 1100-line 35mm film is copied to 1100-line digital video, full, accurate reproduction would require 1:1 correspondence of grain to pixel. This could be done with computer software that seeks out each grain and transfers it individually to a pixel. In this respect, digital video can take advantage of the superior automation possible with computer-based technology. By contrast, the darkroom-confined film medium can't be maneuvered to do this; aptly-named "fine-grain prints" are made on 35mm at higher expense when the best resolution possible from 35mm negatives is specifically desired--indicating that prints which are not fine grains are by definition not as sharp as the film medium is capable of delivering.

(Sometimes the argument is raised that projection copies will still be needed on 35mm. Such a view doesn't consider the development of video-projection devices that have or will have the resolution and color/hue range of film; such machines might be expensive, but could be paid for relatively quickly with the savings from not making new film prints. One should also remember that a typical obscure film, once preserved, may be seen on film by only a few hundred people, excluding people who'll see it only on video; the cost of a film print divided by the number of people who would see the print results in a substantial subsidy per person. Such dollar amounts may not be discussed openly, but they will be a consideration for any preservation undertaking, private or public.)

Must film digitization be expensive?

No. Recall how most of us first saw films: television presentations rendered from a 16mm print being chained through a telecine converter at the moment of broadcast. Many of us didn't learn to be critical of these showings until we saw the better quality of 35mm-mastered, color-corrected major-label VHS tapes. (Let's forget for this discussion the scratches, splices and blighted contrast caused by the overuse of film prints and by sloppy technicians. These represent misuse of the media.) No, I'm not suggesting going back to this, but rather using it to introduce an advancement of that technology.

With 35mm elements, with a digital medium that can faithfully reproduce the full range of colors and shades, and with machinery that can scan one frame after another with little human intervention, the upgraded telecine method (if necessary, the computerized telecine can be slowed down, inasmuch as a normal audience won't be watching the film during the transfer to digital video) could be operated almost as economically as a television-station film room.

With the specifications outlined herein, it should be possible to create digitized copies of non-fragile, non-decomposed film with no labor but that of a high-school student working in the afternoon or of a Korean working for 22 an hour with prints shipped to Seoul. (I don't actually recommend this; this is hyperbole to overstate the point.)

It might be asked:

* Hasn't restoration on digital video cost more than such jobs done on film?

* Might digital techniques be abused to alter the film image? (Bob Birchard cautions us about this threat when he writes:
> Of course this valuable tool also has great potential for abuse.
>Manipulating images digitally is so easy that many future "restorers"
>may seek to "improve" the films as they go about their work. We're
>already seeing this sort of med[d]ling with soundtracks, i.e., "Vertigo,"
>the digital track "City Lights" put out by the Chaplin estate, and the
>Brownlow/Gill recreation of the original Movietone score for "Sunrise."
>There really needs to be a convention of archivists and historians to
>establish guidelines for future digital restorations.)

The answers to these two questions are intertwined. The "quick and dirty" (but faithful, accurate) transfers I'm discussing wouldn't be budgeted for "fixes." Sure, Sony's restoration on digital video of "The Matinee Idol" was expensive (see my earlier post). However, the job on "The Matinee Idol" was intended as a demonstration of what digital video was capable of accomplishing, from a technical standpoint. Scratches were removed, missing emulsion was replaced from adjacent frames, deterioration was covered up. A tremendous amount of human labor went into these tasks.

Digital video allows for an exact copy to be made from the film elements and for subsequent repairs to be made to duplicate copies of the video. Those "subsequent repairs" could be put off for a long time--which can't be said about repairs done on film unless an expensive intermediate print is made from the decaying nitrate, with said intermediate print planned for obsolescence once the clean-up is accomplished. Digits do not degrade in quality when copies are made. Nonetheless, technicians should save the first transfer (preferably on a read-only medium), and wait to remove scratches, hairs, blots, and jittering on later copies. The original will thus always be available for scholars' comparisons and for use in later restorations should technological advances warrant new work or should an eager and dedicated fan seek and volunteer to perform the definitive restoration.
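That digits do not degrade across copies is something a technician can prove for any given transfer: a checksum of the copy matching a checksum of the read-only master shows the two are bit-for-bit identical. A minimal sketch (Python; the file names are hypothetical stand-ins for real transfer files):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Checksum a file in chunks, so even very large transfers fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demonstrate with a stand-in for the master transfer.
workdir = tempfile.mkdtemp()
master = os.path.join(workdir, "master_transfer.bin")
backup = os.path.join(workdir, "backup_copy.bin")
with open(master, "wb") as f:
    f.write(os.urandom(4096))          # stand-in for scanned frame data

shutil.copyfile(master, backup)
assert sha256_of(master) == sha256_of(backup)   # the copy is exact
print("copy verified bit-for-bit")
```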

Digitized video makes possible the transfer of films now--before the nitrate deteriorates further--and fixing them later. There may be an intermediate copy (the digital transfer without any repairs), but once it has been made, the expense of digitizing from film to video no longer needs to be incurred for that film, so the repair copy will cost less than if a transfer were made from scratch for that edition.

Before I leave the subject of operator distortions in the guise of fixing the image, let's recall that restorations done on film are also subject to such subjective decisions. Film, once changed and reprinted, is not so easily changed back to what was on the original. When damaged perforations or printer-jump on duplicated elements make it advisable to restore steadiness to the moving image, the technician using film as his preservation medium must visually determine the position of the image on each frame--while that operator is attending to many other details. On digital video, the operator need only specify which object within the shot should be at a specified location in each frame; the computer will adjust the position of everything else in the images, frame after frame--and probably do so more accurately than a human being.
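The stabilization step just described can be sketched in a few lines (Python, purely illustrative; real restoration software would also interpolate sub-pixel motion, which this whole-pixel version ignores): given the tracked object's position in each frame, shift every frame so the object lands at one fixed anchor.

```python
def stabilize(frames, tracked_positions, anchor):
    """Shift each frame so a tracked object lands at a fixed anchor point.
    frames: list of 2-D pixel grids (lists of rows);
    tracked_positions: the object's (row, col) in each frame, as identified
    by the operator or an automatic tracker."""
    out = []
    for frame, (r, c) in zip(frames, tracked_positions):
        dr, dc = anchor[0] - r, anchor[1] - c       # how far to move the frame
        rows, cols = len(frame), len(frame[0])
        shifted = [[0] * cols for _ in range(rows)]  # 0 = black fill at edges
        for i in range(rows):
            for j in range(cols):
                si, sj = i - dr, j - dc              # source pixel for (i, j)
                if 0 <= si < rows and 0 <= sj < cols:
                    shifted[i][j] = frame[si][sj]
        out.append(shifted)
    return out

# A 3x3 frame whose bright pixel (value 9) jittered to (2, 2);
# stabilizing to anchor (1, 1) moves it back to the center.
frame = [[0, 0, 0], [0, 0, 0], [0, 0, 9]]
result = stabilize([frame], [(2, 2)], anchor=(1, 1))
print(result[0][1][1])   # 9
```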

Bob Birchard wrote:
> As for re-transferring the digital data every five years--they don't
>do it now with film, even though its much cheaper (as Jim Harwood points
>out), what makes you think they'll do it in a more expensive digital

To emphasize a point: the initial transfer may be more expensive right now (although it needn't be--it's a matter of how much work is performed), BUT copying is cheap. It's as simple as copying a floppy diskette to another floppy diskette.

Yes, film prints are wonderful, but I offer these arguments in support of my plea for digitized video:

* Quick, inexpensive, exact copies after the initial transfer has been made

* Human intervention and labor can be minimized

* Normal playback and use doesn't put wear onto the images

* The physical medium (magnetic surface or optical disc) is cheap, whereas film emulsion must be wedded to a sturdy and expensive stock, a couple of miles of which are needed for a typical feature film, in contrast to the few ounces of plastic which would do the job on optical media.

David Hayes

