A final quality concern, and something that often results in poor video quality, is caused by subject matter that exceeds the brightness range capabilities of a video camera.
A video camera is only capable of reproducing a limited range of brightness -- something you have to constantly keep in mind when bright lights, windows, white walls, etc., appear in a scene.
A brightness range that exceeds about 30:1 (with some major picture elements 30 times brighter than others) will cause problems.
Rather than "clip off" the offending areas with a resulting loss of detail in the light areas of the picture (as shown earlier), many video circuits will automatically bring down the entire video level so that it will all fit into the standard (limited) range.
Note in the waveform above that all the video is within the 7.5 to 100 range, but that "spikes" (caused by light reflections from the waterfall) take up more than half of the range. As a result, the rest of the video ends up in a small (and rather compressed) area.
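The squeeze described above can be sketched numerically. This is a simplified illustration, not any actual camera circuit; the `auto_level` function and the sample IRE values are assumptions made up for the example:

```python
# Simplified illustration: when a scene's peaks exceed the 100 IRE ceiling,
# an automatic level circuit may pull the whole signal down so the peaks
# fit -- compressing everything else into the lower part of the scale.

def auto_level(samples_ire, ceiling=100.0):
    """Scale the signal so its brightest sample sits at the ceiling."""
    peak = max(samples_ire)
    if peak <= ceiling:
        return list(samples_ire)
    factor = ceiling / peak
    return [s * factor for s in samples_ire]

# A scene with a 180 IRE "spike" (say, sunlight glinting off a waterfall):
scene = [20, 35, 50, 180]
print([round(s, 1) for s in auto_level(scene)])  # -> [11.1, 19.4, 27.8, 100.0]
```

Note how the midtones (20, 35, 50) all end up crowded below 28 IRE once the spike is brought within range, which is exactly why the rest of the picture goes dark.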
In the photo on the right above, the middle-to-dark range of the video is compressed into a small area. The result is a dark picture.
If a person were standing in this picture, their skin tones would be much darker than normal.
Now let's compare the resulting gray scales. Above is a gray scale with a normal range; below is one that illustrates the problem discussed above.
Note in the photo on the left that the brightness range of this scene greatly exceeds the capability of the video system. This is caused primarily by the bright sky in the background.
Relying on the camera's automatic exposure setting results in a complete loss of detail in the horse.
Although this example represents extremely difficult subject matter -- a situation that would be best avoided, if at all possible -- note on the right how the picture can be significantly improved if the camera's iris is manually opened up three or more f-stops. (Of course detail in the sky disappears, but we'll assume you are more interested in the primary object in the scene, the horse.)
Can you have it both ways at the same time? Possibly, at least with some professional cameras.
A knowledgeable engineer may be able to adjust the brightness response curve of the camera to bring the bright areas into the basic picture. However, doing so will distort the gray scale, which may objectionably distort the look of other subject matter.
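One common form such a response-curve adjustment takes is a "knee" that compresses only the highlights. The sketch below is an assumption about the general technique, not any particular camera's circuit; the `knee_point` and `slope` values are made up for illustration:

```python
# Illustrative "knee" transfer curve: levels below the knee point pass
# through unchanged; levels above it are compressed by a fixed slope so
# bright areas fit under 100 IRE -- at the cost of a distorted gray scale
# in the highlights, as the text notes.

def knee(level_ire, knee_point=80.0, slope=0.25):
    """Compress levels above knee_point by the given slope."""
    if level_ire <= knee_point:
        return level_ire
    return knee_point + (level_ire - knee_point) * slope

for ire in (50, 80, 120, 160):
    print(ire, "->", knee(ire))
# 50 and 80 pass through unchanged; 120 compresses to 90.0, 160 to 100.0
```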
As we will see when we look at the subject of lighting, adding light to the dark areas or darkening the bright areas is a better way to solve this problem.
Some automatic cameras, like the ones that gave us the "black" horse above with no detail, give you the option of turning off automatic exposure and adjusting the iris manually.
If you can't do that, remember that the camera's backlight control (if the camera has one) will provide you with some control in scenes that have bright subject matter, such as windows or bright backgrounds.
Keep in mind that even someone wearing a white or yellow shirt will often cause problems.
Before we leave the discussion of the waveform monitor, we need to mention a few other things.
First is the information displayed below the black level (the 7.5 IEEE, or IRE, point) on the waveform monitor.
In this "blacker-than-black" area there are some important timing signals referred to as sync, a term that is short for synchronizing pulses. These are the high-speed timing pulses that keep all video equipment "in lock step."
These pulses dictate the precise point where the electronic beam starts and stops while scanning each line, field, and frame. In fact, without these timing pulses, electronic chaos would instantly break out between pieces of video equipment and you would have no picture at all.
A sync generator is used to supply a common timing pulse for all equipment that must work in unity within a production facility.
On a waveform monitor the bottom of the sync signal should sit at -40 (the very bottom of the waveform scale), and its top should reach the baseline, or the 0 point on the scale.
Too much sync and the black level of the video will be pushed too high (graying out the picture); too little and the black level will cut into the sync, and the picture will roll and break up.
In monitoring video levels we are primarily interested in the range of luminance (visible picture information) that extends from 7.5 (the darkest black) to 100 (maximum white) on a waveform monitor.
If the video (white level) significantly exceeds 100, there will be a loss of detail in the lighter areas of the picture. Faces in particular will look washed out. A signal well beyond 100 will also cause technical problems.
Conversely, skin tones that are in the lowest part of the waveform range will be so dark as to have no detail.
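The range check you make by eye on a waveform monitor can be sketched as a simple test against the 7.5-100 IRE limits. The `out_of_range` helper is hypothetical, written only for this example:

```python
# Minimal sketch of the check a waveform monitor lets you make by eye:
# flag luminance samples that fall outside the legal 7.5-100 IRE range.
# Samples below black are "crushed" (no shadow detail); samples above
# white are "clipped" (no highlight detail).

def out_of_range(samples_ire, black=7.5, white=100.0):
    """Return (crushed, clipped): samples below black and above white."""
    crushed = [s for s in samples_ire if s < black]
    clipped = [s for s in samples_ire if s > white]
    return crushed, clipped

crushed, clipped = out_of_range([3, 10, 55, 104, 98])
print(crushed, clipped)  # -> [3] [104]
```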
To keep this from getting too technical, we've sidestepped an issue here that has implications for TV graphics. This information fills in a bit of that gap.
Now we get to the second quality concern: color.
The eye sees color very subjectively, so when it comes to making accurate judgments about color our eyes can be easily fooled.
Thus, we need a reliable way of judging the accuracy of color, as well as for setting up our equipment to accurately reproduce colors.
The device that does this is called a vectorscope and it's commonly seen in TV control rooms and as part of computer editing systems.
We'll skip the technical things involved and just concentrate on six little boxes marked R, G, B, Mg, Cy and Yl on the face of the vectorscope.
As you might suspect, these stand for red, green, blue, magenta, cyan and yellow, the primary and secondary colors used in color TV.
When a camera or any piece of video equipment is reproducing color bars (shown below on the right), the primary (red, green and blue) and secondary (magenta, cyan and yellow) colors should appear in their marked boxes on a vectorscope.
Without a vectorscope you can often balance the colors fairly accurately by simply making sure the yellow bar is really yellow. In fact, by adjusting yellow correctly, the other colors will often move into place.
But "often" isn't "always."
If primary or secondary color bars wander significantly out of their assigned vectorscope areas, there are problems.
Sometimes things are easy to fix (like a simple turn of the phase-adjustment or hue knob); sometimes they're not, and you will have to call in an engineer.
In addition to hues (colors), the vectorscope also shows the amplitude or saturation (purity) of each color.
Color saturation, which is measured in percentages, is indicated by how far out from the center of the vectorscope circle the color is displayed. The further out, the more saturated (pure) the color is.
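The hue-as-angle, saturation-as-distance idea can be sketched from the color-difference signals a vectorscope plots. This is simplified: real scopes apply additional scaling to the two axes, so the numbers here won't line up with the graticule boxes exactly, and the `vector_point` helper is hypothetical. The luminance weights are the standard Rec. 601 values:

```python
import math

# Sketch of how a vectorscope maps a color to a point: the two
# color-difference signals (B-Y and R-Y) form the x and y axes,
# so hue becomes an angle and saturation becomes distance from center.

def vector_point(r, g, b):
    """Return (angle_degrees, radius) for an RGB value in the 0-1 range."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Rec. 601 weights)
    b_y, r_y = b - y, r - y                 # color-difference signals
    angle = math.degrees(math.atan2(r_y, b_y)) % 360
    radius = math.hypot(b_y, r_y)
    return angle, radius

# Fully saturated yellow vs. a desaturated (pastel) yellow: the hue angle
# stays the same, but the pastel version sits closer to the center.
print(vector_point(1.0, 1.0, 0.0))  # larger radius
print(vector_point(1.0, 1.0, 0.5))  # same angle, about half the radius
```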
The standard SMPTE (Society of Motion Picture and Television Engineers) test pattern is for television in the 4:3 aspect ratio. The SMPTE test pattern for the 16:9 HDTV television system is shown below.
Since professional nonlinear editing systems have both vectorscope and waveform monitor displays, you can keep a constant eye on quality and make scene-to-scene adjustments as necessary. Failing to do this will create problems in matching scenes during editing.
Of course all of these quality measures have to be accurately displayed on a TV monitor in order to be verified, so it's important to be able to trust your video monitor.
This link describes the eight steps involved in setting up a video monitor to display accurate color and contrast.
The zone system that many professional photographers use to insure accurate tonal renditions can also be applied to video production. This is discussed here.
© 1996 - 2017, All Rights Reserved.
Use limited to direct, unmodified access from CyberCollege® or the InternetCampus®.