A legend of commonly used terms, phrases, and abbreviations used in our production environment and this web site.
Aliasing refers to the jagged appearance of diagonal lines, edges of circles, and similar shapes, due to the square nature of pixels, the building blocks of digital images. Aliasing also appears as a shimmer in the video image, and is fundamentally an artifact of interlaced video. Sharp lines and fine details in images are affected, and shimmer whenever there is movement. Because aliasing does not always show up until elements are in motion, it is typical to see a finely detailed image periodically shimmer whenever the camera pans, particularly in a diagonal direction.
Also known as a video artifact (or artefact): anything that appears in your video that was not in the original scene or source material.
The ratio of the width of the image to its height, expressed as two numbers separated by a colon (e.g., 16:9, 4:3, or 2.39:1).
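As a quick illustration (not part of the original entry), a pixel resolution can be reduced to its simplest aspect ratio by dividing both dimensions by their greatest common divisor:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a pixel resolution to its simplest aspect ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # -> 16:9
print(aspect_ratio(640, 480))    # -> 4:3
```

Note that cinema ratios such as 2.39:1 are conventionally written with a decimal width against a height of 1 rather than as reduced integers.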
The blur, or the aesthetic quality of the blur, in out-of-focus areas of an image, or the way the lens renders out-of-focus points of light.
Short for compressor/decompressor (aka: coder/decoder), a codec is any technology for compressing and decompressing data. Codecs can be implemented in software, hardware, or a combination of both. Codecs are used to compress and decompress both video and audio files into smaller file sizes, allowing for easier storage and transport. Some popular codecs for digital video include H.264, AVCHD, Apple ProRes 422, DV, and MPEG. Some popular codecs for digital audio include PCM, AAC, MP3, and FLAC. More about video and audio codecs - Codec.
The reduction of a file’s size by means of a compression program. The two types of compression are lossless compression and lossy compression. In lossless compression, the compression process allows for subsequent decompression of the data with no loss of the original data. In lossy compression, the compression process removes some of the data in a way that is not obvious to a person using the data. Most video compression is lossy – it operates on the premise that much of the data present before compression is not necessary for achieving good perceptual quality. Compression is a trade-off between file size, quality, and what is considered a reasonable amount of time to decompress. If video is over-compressed in a lossy manner, visible artifacts can appear (such as macroblocks). Video compression is necessary due to the current limitations of recording media (such as CF cards & SD cards). Uncompressed (raw) or lossless high-definition video would quickly take up gigabytes of storage space, thus limiting the total amount of recorded media.
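The lossless case can be demonstrated in a few lines (a generic sketch using Python's built-in zlib, not a video codec): the data survives a compress/decompress round trip bit-for-bit, and repetitive data shrinks dramatically.

```python
import zlib

# Highly repetitive data, standing in for redundancy in real media
data = b"frame " * 1000

compressed = zlib.compress(data)
restored = zlib.decompress(compressed)

assert restored == data  # lossless: the original is recovered exactly
print(len(data), "->", len(compressed), "bytes")
```

A lossy codec, by contrast, would never pass that equality check — it deliberately discards data it judges perceptually unimportant.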
The ratio of the dimensions of a camera’s imaging area (the image sensor in DSLRs) to a reference format; most often, this term is applied to digital cameras relative to the 35 mm film format as a reference. Admittedly this may require further explanation, which can be found here – Field of View Crop Factor (Focal Length Multiplier).
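In brief, the crop factor is the ratio of the 35 mm frame's diagonal (36 × 24 mm) to the sensor's diagonal; multiplying a lens's focal length by it gives the full-frame-equivalent field of view. A minimal sketch (the APS-C dimensions below are an illustrative example):

```python
from math import hypot

def crop_factor(sensor_w_mm: float, sensor_h_mm: float) -> float:
    """Crop factor = full-frame diagonal / sensor diagonal."""
    full_frame_diag = hypot(36.0, 24.0)  # 35 mm full frame: 36 x 24 mm
    return full_frame_diag / hypot(sensor_w_mm, sensor_h_mm)

# Canon APS-C sensor, roughly 22.3 x 14.9 mm: crop factor ~1.6
cf = crop_factor(22.3, 14.9)
# A 50 mm lens on this sensor frames like a ~80 mm lens on full frame
equivalent = 50 * cf
```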
A digital single-lens reflex camera (aka: digital SLR) is a digital camera that uses a mechanical mirror system and pentaprism to direct light from the lens to an optical viewfinder on the back of the camera. The anatomy of this type of camera is explained further here – What is a DSLR camera?
The limits of luminance range that a camera can capture.
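Dynamic range is commonly expressed in stops, where each stop is a doubling of luminance; the relationship is a base-2 logarithm of the brightest-to-darkest ratio (a small illustrative sketch, not part of the original entry):

```python
from math import log2

def dynamic_range_stops(max_luminance: float, min_luminance: float) -> float:
    """Dynamic range in stops: each stop doubles the luminance."""
    return log2(max_luminance / min_luminance)

# A sensor capturing a 4096:1 brightest-to-darkest ratio spans 12 stops
print(dynamic_range_stops(4096, 1))  # -> 12.0
```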
A popular standard for compressing video based on MPEG-4, especially for high-definition video. Taking advantage of today’s high-speed chips, H.264 delivers MPEG-4 quality with a frame size up to four times greater. Though generally used as a distribution/delivery codec, H.264 has now become a common recording codec used by DSLR and other consumer cameras. More about the H.264 standard - H.264/MPEG-4 AVC.
High dynamic range imaging (aka HDRI) is a set of techniques that allow a greater dynamic range of luminance between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods. This wider dynamic range allows HDR images to more accurately represent the wide range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight. The two main sources of HDR imagery are computer renderings and the merging of multiple photographs, the latter of which are known as low dynamic range (LDR) (also called standard dynamic range, or SDR) photographs.
(International Organization for Standardization) In traditional film photography, ISO sensitivity expresses the speed of photographic negative materials. This was formerly known as ASA (American Standards Association). In digital photography, an ISO equivalent expresses the speed setting of an image sensor.
(aka Live Preview) A feature that allows a digital camera’s electronic display to be used as a viewfinder, that is, as a means of previewing exposure and/or previewing framing before taking the photograph.
In block-based video compression, frames (such as stand-alone I-frames) are divided into non-overlapping 8×8 blocks of pixels. Four of these blocks arranged into a larger 16×16 block are called a macroblock.
A megapixel (MP) is 1 million pixels, and is a term used not only for the number of pixels in an image, but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays.
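The arithmetic is simply width × height divided by one million (a trivial sketch for illustration):

```python
def megapixels(width: int, height: int) -> float:
    """Megapixel count of a sensor or image: total pixels / 1,000,000."""
    return width * height / 1_000_000

print(megapixels(6000, 4000))  # -> 24.0 (a typical 24 MP sensor)
print(megapixels(1920, 1080))  # -> 2.0736 (Full HD is only ~2 MP)
```

This is why a still from even a high-megapixel camera far exceeds the resolution of an HD video frame.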
Generally thought of as the smallest single component of a digital image.