Fundamental Concepts in Video and Digital Audio
Types of video signals Video signals can be organized in three different ways: Component video, Composite video, and S-video. Component Video Higher-end video systems, such as for studios, make use of three separate video signals for the red, green, and blue image planes. This is referred to as component video. This kind of system has three wires and connectors connecting the camera or other devices to a TV or monitor. Color signals are not restricted to always being RGB separations.
Component video gives the best color reproduction, since there is no "crosstalk" between the three different channels, unlike composite video or S-video.
Component video, however, requires more bandwidth and good synchronization of the three components. Composite Video In composite video, the color ("chrominance") and intensity ("luminance") signals are mixed into a single carrier wave. Chrominance is a composite of two color components (I and Q, or U and V). This is the type of signal used by broadcast color TVs. In NTSC TV, I and Q are combined into a chroma signal, and a color subcarrier then puts the chroma signal at the higher-frequency end of the channel shared with the luminance signal.
The chrominance and luminance components can be separated at the receiver end, and the two color components can be further recovered. When connecting to TVs or VCRs, composite video uses only one wire, and the video color signals are mixed, not sent separately. The audio signal is added to this single signal as well. Since the color information is mixed, and both color and intensity are wrapped into the same signal, some interference between the luminance and chrominance signals is inevitable.
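The luminance/chrominance split described above can be made concrete with the standard NTSC RGB-to-YIQ conversion. The sketch below is illustrative only (the function name is mine); the matrix coefficients are the standard ones for R, G, B values in [0, 1]:

```python
def rgb_to_yiq(r, g, b):
    """Standard NTSC RGB -> YIQ conversion (inputs in [0, 1])."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance Y
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chrominance I
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chrominance Q
    return y, i, q

# Pure white carries no chrominance: I and Q come out (near) zero,
# which is why a black-and-white set can simply display Y.
y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)
```

Note that each row of coefficients for I and Q sums to zero, so any gray (r = g = b) produces zero chrominance.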
S-Video As a compromise, S-video (separated video, or super-video, e.g., in S-VHS) uses two wires: one for luminance and another for a composite chrominance signal. As a result, there is less crosstalk between the color information and the crucial gray-scale information. The reason for placing luminance into its own part of the signal is that black-and-white information is crucial for visual perception. As we know, humans are able to differentiate spatial resolution in gray-scale images much better than in the color part of color images. Therefore, the color information sent can be much less accurate than the intensity information.
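Digital video exploits this same perceptual fact by chroma subsampling: the color planes are stored at lower spatial resolution than the luminance plane. A minimal 4:2:0-style sketch (the function name and list-of-rows representation are my illustrative assumptions):

```python
def subsample_chroma_420(chroma):
    """4:2:0-style subsampling sketch: keep one chroma sample per
    2x2 block by averaging the block. `chroma` is a list of rows
    of equal length, with even dimensions."""
    out = []
    for r in range(0, len(chroma) - 1, 2):
        row = []
        for c in range(0, len(chroma[r]) - 1, 2):
            block = (chroma[r][c] + chroma[r][c + 1] +
                     chroma[r + 1][c] + chroma[r + 1][c + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

# A 4x4 chroma plane shrinks to 2x2: a 4x reduction in color samples,
# while the luminance plane would be kept at full resolution.
small = subsample_chroma_420([[1, 1, 2, 2],
                              [1, 1, 2, 2],
                              [3, 3, 4, 4],
                              [3, 3, 4, 4]])
```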
We can see only fairly large blobs of color. An analog signal f(t) samples a time-varying image. So-called progressive scanning traces through a complete picture (a frame) row-wise for each time interval. In TV and in some monitors and multimedia standards, another system, interlaced scanning, is used. This results in "odd" and "even" fields; two fields make up one frame. In fact, the odd lines (starting from 1) end up at the middle of a line at the end of the odd field, and the even scan starts at a halfway point.
This image shows the scheme used. First the solid odd lines are traced; then the even field starts at U and ends at V. The jump from T to U, or from V back to P, is called the vertical retrace. The scan lines are not horizontal because a small voltage is applied, moving the electron beam down over time. Because of interlacing, the odd and even lines are displaced in time from each other. This is generally not noticeable except when fast action is taking place onscreen, when blurring may occur.
For example, in the video in the figure, the moving helicopter is blurred more than the still background. Since interlaced scan produces two fields for each frame, various schemes are used to de-interlace the video for display on progressive devices. The simplest de-interlacing method consists of discarding one field and duplicating the scan lines of the other field; more complicated methods retain information from both fields.
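The simplest method (drop one field, double the other's lines) can be sketched in a few lines. This is a toy model, not a production de-interlacer; the function name and list-of-rows frame representation are my assumptions:

```python
def deinterlace_line_doubling(frame):
    """Simplest de-interlacing sketch: discard the even field and
    duplicate each line of the odd field. `frame` is a list of rows;
    rows 0, 2, 4, ... (lines 1, 3, 5, ... in 1-based numbering)
    hold the odd field."""
    out = []
    for r in range(0, len(frame), 2):
        out.append(frame[r][:])  # odd-field line, kept
        out.append(frame[r][:])  # duplicated in place of the even line
    return out

# Rows 'b' and 'd' (the even field) are discarded; 'a' and 'c' are doubled.
full = deinterlace_line_doubling([['a'], ['b'], ['c'], ['d']])
```

The price of this simplicity is halved vertical resolution, which is why better de-interlacers combine both fields.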
In Europe, this fact is conveniently tied to their 50 Hz electrical system, and they use video digitized at 25 frames per second (fps); in North America, the 60 Hz electrical system dictates 30 fps. But what part of an electrical signal tells us that we have to restart at the left side of the screen? The solution used in analog video is a small voltage offset from zero to indicate black, and another value, such as zero, to indicate the start of a line.
Namely, we could use a "blacker-than-black" zero signal to indicate the beginning of a line. NTSC uses a familiar 4:3 aspect ratio (i.e., the ratio of picture width to height) and 525 scan lines per frame at 30 frames per second. More exactly, for historical reasons NTSC uses 29.97 fps. NTSC follows the interlaced scanning system, and each frame is divided into two fields, with 262.5 lines per field. Thus the horizontal sweep frequency is 525 × 29.97 ≈ 15,734 lines/sec. Blanking information is placed into 20 lines reserved for control information at the beginning of each field.
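The NTSC timing figures above follow from two standard constants (525 lines per frame, and a frame rate of exactly 30000/1001 fps, the precise value behind "29.97"):

```python
# Worked NTSC timing arithmetic from the two standard constants.
lines_per_frame = 525
fps = 30000 / 1001               # "29.97" fps, exactly 30000/1001

sweep_hz = lines_per_frame * fps  # horizontal sweep frequency
line_time_us = 1e6 / sweep_hz     # time to trace one scan line

# sweep_hz is about 15,734 lines/sec,
# so each scan line takes about 63.6 microseconds.
```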
Hence, the number of active video lines per frame is only 485. The non-blanking pixels are called active pixels. PAL (Phase Alternating Line) uses 625 scan lines per frame, at 25 frames per second, with a 4:3 aspect ratio and interlaced fields. It uses an 8 MHz channel and allocates a bandwidth of 5.5 MHz to Y and 1.8 MHz each to U and V. The color subcarrier frequency is fsc ≈ 4.43 MHz. In order to improve picture quality, chroma signals have alternate signs (e.g., +U and −U) in successive scan lines. This facilitates the use of a (line-rate) comb filter at the receiver: the signals in consecutive lines are averaged so as to cancel the chroma signals, which always carry opposite signs, for separating Y and C and obtaining high-quality Y signals.
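The comb-filter idea can be sketched numerically. This is a toy model, not a real demodulator; the function name, and modelling consecutive lines as y + c and y − c, are illustrative assumptions:

```python
def separate_y_c(line_a, line_b):
    """Comb-filter sketch: if two consecutive scan lines carry the same
    luminance but chroma of opposite sign (y + c and y - c), their
    average recovers Y and their half-difference recovers C."""
    y = [(a + b) / 2.0 for a, b in zip(line_a, line_b)]
    c = [(a - b) / 2.0 for a, b in zip(line_a, line_b)]
    return y, c

# line n carries y + c, line n+1 carries y - c (toy numbers):
y, c = separate_y_c([1.3, 0.8], [0.7, 0.4])
```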
SECAM also uses 625 scan lines per frame, at 25 frames per second, with a 4:3 aspect ratio and interlaced fields. The U and V color signals are sent in alternate lines, i.e., only one of the two is sent on each scan line.
Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony, and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps. A PCM stream has two basic properties that determine the stream's fidelity to the original analog signal: the sampling rate, which is the number of times per second that samples are taken, and the bit depth, which determines the number of possible digital values that can be used to represent each sample. Early electrical communications started to sample signals in order to multiplex samples from multiple telegraphy sources and to convey them over a single telegraph cable. The American inventor Moses G. Farmer demonstrated telegraph time-division multiplexing as early as 1853.
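The two properties (sampling rate and bit depth) can be seen directly in a minimal PCM encoder. This sketch is illustrative (the function name and the [-1, 1] signal range are my assumptions), not any particular codec:

```python
import math

def pcm_encode(signal_fn, duration_s, sample_rate, bit_depth):
    """PCM sketch: sample signal_fn (which returns values in [-1, 1])
    at uniform intervals and quantize each sample to the nearest of
    2**bit_depth levels."""
    levels = 2 ** bit_depth
    step = 2.0 / (levels - 1)            # quantization step over [-1, 1]
    n = int(duration_s * sample_rate)
    samples = []
    for k in range(n):
        x = signal_fn(k / sample_rate)   # uniform-interval sampling
        q = round((x + 1.0) / step)      # nearest quantization level
        samples.append(q)
    return samples

# A 1 kHz sine at CD-style settings: 44,100 samples/s, 16-bit depth.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
pcm = pcm_encode(tone, 0.01, 44100, 16)  # 441 samples, each in 0..65535
```

Raising the sampling rate captures higher frequencies; raising the bit depth shrinks the quantization step and lowers quantization noise.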
Analog Devices has a broad selection of processors for a wide variety of applications. The following describes the basic concepts of digital signal processing (DSP), with recommended reading links for more in-depth information. Digital signal processors (DSPs) take real-world signals like voice, audio, video, temperature, pressure, or position that have been digitized, and then mathematically manipulate them. A DSP is designed to perform mathematical functions like "add", "subtract", "multiply", and "divide" very quickly. Signals need to be processed so that the information they contain can be displayed, analyzed, or converted to another type of signal that may be of use.
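A classic example of "mathematically manipulating" a digitized signal is a moving-average filter, whose inner loop is exactly the add/accumulate pattern DSPs are built to execute quickly. A minimal sketch (function name mine):

```python
def moving_average(samples, width):
    """Minimal DSP sketch: an N-point moving-average (smoothing)
    filter, built from the basic add/divide operations."""
    out = []
    for i in range(len(samples) - width + 1):
        acc = 0.0
        for j in range(width):
            acc += samples[i + j]   # accumulate the window
        out.append(acc / width)     # average over the window
    return out

# A signal with an isolated spike is smoothed: the spike's energy
# is spread across the window.
smoothed = moving_average([0, 0, 9, 0, 0, 0], 3)
```

On real DSP hardware this window sum is typically one multiply-accumulate (MAC) instruction per tap, which is why such filters run in real time even on small chips.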
There are many opinions on the relative merits of analogue and digital audio, often muddled by misconceptions and myth. This article presents some objective technical information and suggests some ways of making an equitable comparison and evaluation. Analogue audio is so called because the "shape" or pattern of an electrical or magnetic audio signal is analogous to (looks like) the original pattern of changing air pressure. Audio in nature is analogue until it is converted by our inner ears into an electro-chemical signal that we perceive. Digital audio is a mathematical description of the pattern of pressure. In deciding whether to use analogue or digital audio technologies, we are often influenced by practical and ergonomic factors such as ease of session recall, portability, maintenance costs, or the omission or addition of features on a device.
A large portion of sound processing is now done by digital devices and software — mixers, dynamics processors, equalizers, and a whole host of tools that previously existed only as analog hardware. This is not to say, however, that in all stages — from capture to playback — sound is now treated digitally. As it is captured, processed, and played back, an audio signal may be converted numerous times from analog to digital and from digital to analog.