PCM Theory and Audio Reduction Codecs and Techniques

Summary
This paper evaluates PCM theory and audio reduction codecs and techniques. The quantization value recovered by the decoder includes not only the original audio signal but also many high-frequency harmonics; hence, a low-pass filter is used to remove the undesirable components at the final stage…
Pulse-code modulation (PCM) is a digital representation of an analog signal in which the signal magnitude is sampled regularly at uniform intervals and then quantized to a series of symbols in a digital (usually binary) code. PCM has been employed in digital telephone systems and is also the standard form for digital audio in computers and the compact disc Red Book format. It is likewise a standard in digital video, for instance ITU-R BT.601. However, straight PCM is not usually used for video in consumer applications such as DVR or DVD because it requires too high a bit rate (PCM audio is supported by the DVD standard but is rarely used); instead, compressed variants are normally employed, although numerous Blu-ray format movies do use uncompressed PCM for audio. PCM encoding frequently enables digital transmission from one point to another (within a given system, or geographically) in serial form. This paper evaluates PCM theory and audio reduction codecs and techniques.

Introduction

In the past, communication systems mostly used analog signals to transmit information. However, owing to advances in computer and digital network communications, much information is now transmitted using pulse wave modulation techniques. Pulse wave modulation may be used to transmit an analog audio signal by sampling it at a particular rate, called the transmission rate. At the receiver, the delivered signal is demodulated by the pulse-code modulation (PCM) demodulator to recover the original continuous analog signal wave.

Generally, pulse modulation can be classified as pulse amplitude modulation (PAM), pulse position modulation (PPM), pulse code modulation (PCM), and pulse width modulation (PWM). PAM, PPM, and PWM are forms of analog modulation, while PCM is a form of digital modulation. It is important to note that a PCM signal is a true digital signal, which can be processed and stored digitally by a computer, whereas PPM, PWM, and PAM are analogous to PM, FM, and AM modulation, respectively (Aksoy & DeNardis, 2007, p. 112).

For all pulse wave modulation, the original continuous signal has to be sampled before modulation, and the sampling rate must not be too low, otherwise the recovered signal will be distorted. The sampling rate is governed by the sampling theorem, which states that, for a pulse wave modulation system, provided the sampling rate is at least twice the maximum frequency of the signal, the distortion in the data recovered at the receiver will be at a minimum. For instance, if the frequency range of an audio signal is about 40 Hz to 4 kHz, the sampling frequency must be no less than 8 kHz, so that the sampling error is reduced to a minimum (Maes & Vercammen, 2012, p. 67).

During transmission, it is hard for the PCM signal to avoid noise distortion. Therefore, before the PCM signal is sent to the PCM demodulator, a comparator is used to restore the PCM signal to its original levels. The signal is a series of pulses, so before demodulation the serial pulse stream is converted to a parallel digital signal with the aid of a serial-to-parallel converter. After that, the signal passes through an n-bit decoder (a D/A converter) to recover the original quantization value of the digital signal. However, this quantization value includes not only the original audio signal but also many high-frequency harmonics; hence, a low-pass filter is used to remove the undesirable components at the final stage.
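To make the sampling and quantization steps above concrete, here is a minimal Python sketch that samples a sine tone at uniform intervals and quantizes each sample to an 8-bit code, mirroring the telephony example (4 kHz band limit, 8 kHz sampling rate). The tone frequency, bit depth, and helper names are illustrative assumptions, not values from the paper.

```python
import math

SIGNAL_HZ = 1000     # assumed test tone, well below the 4 kHz band limit
SAMPLE_RATE = 8000   # at least twice the maximum frequency (sampling theorem)
BITS = 8             # assumed quantizer resolution
LEVELS = 2 ** BITS

def sample_and_quantize(duration_s=0.01):
    """Sample a sine tone at uniform intervals; quantize to binary codes."""
    codes = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        t = n / SAMPLE_RATE                        # uniform sampling instant
        x = math.sin(2 * math.pi * SIGNAL_HZ * t)  # analog value in [-1, 1]
        # Map [-1, 1] onto integer codes 0 .. LEVELS-1 (uniform quantization).
        codes.append(min(LEVELS - 1, int((x + 1) / 2 * LEVELS)))
    return codes

def decode(codes):
    """The n-bit decoder (D/A) step: map each code back to an analog level."""
    return [(c + 0.5) / LEVELS * 2 - 1 for c in codes]

codes = sample_and_quantize()
print("first codes:", codes[:8])
print("quantization step:", 2 / LEVELS)   # error is at most half of this
```

In a real PCM chain the codes would be serialized for transmission, and the decoder's output would pass through the low-pass filter described above to suppress the high-frequency harmonics introduced by quantization.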
MPEG varieties

The MPEG standards comprise different Parts, each covering a certain element of the entire specification. The standards also specify Levels and Profiles: Profiles delineate a set of available tools, and Levels delineate the range of suitable values for the properties related to them (Chiariglione, 2012, p. 48). MPEG has standardized various compression formats and auxiliary standards, discussed below.

MPEG-1: normally limited to around 1.5 Mbit/s and principally designed to allow moving sound or video to be encoded at the bitrate of Compact Discs. It is used in Video CD and may be used for low-quality video on DVD Video.

MPEG-2: transport, audio, and video standards for broadcast-quality TV. The format has been extensively adopted due to its flexibility, and is currently used in the following applications:

- Digital Terrestrial Television (ISDB, ATSC, DVB)
- Super Video CD
- Satellite TV providers
- Blu-ray (although MPEG-4 Part 10 is used for HD content)
- DVD-Video

MPEG-3: this format dealt with standardizing scalable and multi-resolution compression and was projected for HDTV compression, but was later found to be redundant and was merged with MPEG-2; hence there is no standard MPEG-3. MPEG-3 must not be confused with the MP3 format, which is MPEG-1 Audio Layer III.

MPEG-4: uses further coding tools and additional complexity to achieve higher compression factors than MPEG-2, and moves closer to computer graphics applications. Several newer, higher-efficiency video standards (newer than MPEG-2 Video) are included.

MPEG-7: a multimedia content description standard. The description is associated with the content itself, allowing fast and efficient searching for material that is of interest to the user. It uses XML to store metadata, and can be attached to a time-code in order to tag specific events, or to synchronize lyrics to a song, for instance. NB: MPEG-21 is under development (Chiariglione, 2012, p. 51).

Lossy and lossless data compression

Lossless coding

Lossless data compression uses algorithms which allow the exact original data to be reconstructed from the compressed digital data. This can be contrasted with lossy data compression, which does not allow the exact original data to be reconstructed. Lossless data compression is employed in many applications; for instance, it is used in the common ZIP file format and the Unix tool gzip (Chiariglione, 2012, p. 49). The technique is also frequently used as a component within lossy data compression technologies. Lossless compression is used when it is important that the original and decompressed data be exactly identical, or when no assumption can be made as to whether certain deviations are uncritical (Gibson, 1998, p. 96). Typical examples are executable programs and source code. Some image file formats, particularly PNG, use lossless compression only, while others like MNG and TIFF may use either lossy or lossless methods. GIF uses a lossless compression method, although most GIF implementations are incapable of expressing full color, so they normally quantize the image (mostly with dithering) to 256 or fewer colors before encoding it as GIF (Chiariglione, 2012, p. 56). Color quantization is itself a lossy process, but reconstructing the color-quantized image and later re-quantizing it produces no additional loss (Maes & Vercammen, 2012, p. 23).
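Since ZIP and gzip are cited as examples, the following short Python sketch demonstrates the defining property of lossless coding: decompressed output that is bit-for-bit identical to the input. It uses the standard-library zlib module, which implements Deflate (the LZ77 + Huffman combination listed below); the sample data is an arbitrary assumption.

```python
import zlib

# Arbitrary, repetitive sample data; redundancy is what Deflate exploits.
original = b"pulse-code modulation " * 100

compressed = zlib.compress(original, level=9)  # Deflate: LZ77 + Huffman coding
restored = zlib.decompress(compressed)

assert restored == original   # lossless: exact reconstruction, always
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```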
Lossless compression techniques may be classified according to the type of data they are intended to compress. The major target types for compression algorithms are text, executables, sound, and images. While in principle any general-purpose lossless compression algorithm (general-purpose meaning that it can handle any binary input) can be employed on any kind of data, many are incapable of achieving significant compression of data that is not in the form they were designed to handle; notably, sound data cannot be compressed well using conventional text compression algorithms (Maes & Vercammen, 2012, p. 108).

Most lossless compression techniques use two different types of algorithms: one that generates a statistical model of the input data, and another that maps the input data to bit strings using this model, such that "probable" (i.e., frequently encountered) data yields shorter output than "improbable" data (Maes & Vercammen, 2012, p. 108). Often only the former algorithm is named, while the latter is implied (by common use, standardization, etc.) or left unspecified. Statistical modeling algorithms for text (and text-like binary data such as executables) include:

- Burrows-Wheeler transform (BWT; a block-sorting pre-processing step that makes compression much more efficient)
- LZW
- PPM
- LZ77 (used by Deflate)

Encoding algorithms for producing bit sequences include:

- Arithmetic coding
- Huffman coding (also used by Deflate)

Lossy coding

A lossy data compression technique is one whereby data is compressed and then decompressed to retrieve data that may well differ from the original, but is "close enough" to be useful in some way. Lossy data compression is frequently used on the Internet, particularly in streaming media and telephony applications; in this context the methods are normally referred to as codecs. Most lossy data compression formats suffer from generation loss: repeatedly compressing and decompressing a file makes it progressively lose quality (Maes & Vercammen, 2012, p. 221). This differs from lossless data compression. There are two basic schemes of lossy compression:

- In lossy transform codecs, samples of sound are taken, chopped into small segments, transformed into a new basis space, and quantized. The resulting quantized values are then entropy coded.
- In lossy predictive codecs, the preceding and/or successive decoded data is used to predict the current sound sample. The error between the real data and the prediction, together with any additional information needed to reproduce the prediction, is then quantized and coded.

In some systems the two techniques are combined, with transform codecs being used to compress the error signal generated in the predictive stage.
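To illustrate the transform-codec scheme just described, here is a small Python sketch of its stages: chop the samples into segments, transform each segment into a new basis (a type-II DCT via SciPy), and uniformly quantize the coefficients. The block size, quantizer step, and choice of transform are illustrative assumptions, and the final entropy-coding stage is omitted.

```python
import numpy as np
from scipy.fft import dct, idct

BLOCK = 32   # assumed segment length (samples per block)
STEP = 0.05  # assumed quantizer step; larger steps discard more detail

def encode(samples):
    """Chop into segments, transform to the DCT basis, quantize."""
    blocks = samples.reshape(-1, BLOCK)
    coeffs = dct(blocks, norm="ortho", axis=1)   # new basis space
    return np.round(coeffs / STEP).astype(int)   # uniform quantization

def decode(qcoeffs):
    """Dequantize and transform back; the rounding error is the 'loss'."""
    return idct(qcoeffs * STEP, norm="ortho", axis=1).reshape(-1)

t = np.arange(BLOCK * 8) / 8000.0
signal = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
restored = decode(encode(signal))
print("max reconstruction error:", np.abs(signal - restored).max())
```

In a codec such as MP3 the quantizer step varies per frequency band according to a psychoacoustic model (see the perceptual coding section below), so the loss is concentrated where it is least audible.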
Perceptual coding

What makes MP3 encoding an effective audio data compression technique is its departure from the PCM model (Aksoy & DeNardis, 2007, p. 35). In a PCM system, the aim is to digitally reproduce the waveform of an incoming signal as precisely and accurately as is practically possible. However, it can be argued that the implicit assumption of PCM, namely that the reproduction of sound requires the reproduction of waveforms, is naive and rests on a misunderstanding of the way human perception really works. Irrelevancy, however, is the more radical concept. The theory underlying psychoacoustic coding argues that, owing to the peculiarities of human perception, certain properties of a given waveform are effectively meaningless to human listeners and are not perceived at all. Yet, because of its insistence on capturing the whole waveform, a PCM system will record and store a large amount of this extraneous data, regardless of its imperceptibility at playback. It is significant to note that perceptual coding aims, by reference to a psychoacoustic model, to store only the data that is detectable to the human ear. By doing so, it is possible to attain drastically reduced file sizes, simply by discarding the unnoticeable and hence irrelevant data captured in a PCM recording.

Critical band

For a given frequency, the critical band is the smallest band of frequencies around it which activate the same part of the basilar membrane. In a complex tone, the critical bandwidth corresponds to the smallest frequency difference between two partials such that each can still be heard separately. The critical band may also be measured by taking a sine tone just masked by a band of white noise around it; as the noise band is narrowed to the point where the sine tone becomes audible to a normal listener, its width at that point is the critical bandwidth. In terms of length along the basilar membrane, the critical bandwidth is almost constant at 1.2 mm, within which are located about 1,300 receptor cells, and it is largely independent of intensity (unlike combination tones) (Gibson, 1998, p. 402). Twenty-four critical bands of about a third of an octave each encompass the audible spectrum.
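As a numerical aside, the critical bandwidth as a function of center frequency is commonly estimated with Zwicker and Terhardt's analytic approximation. This formula is not drawn from the paper's sources, so treat the coefficients as an assumption; it does reproduce the familiar figure of roughly 160 Hz at 1 kHz.

```python
def critical_bandwidth_hz(f_hz: float) -> float:
    """Approximate critical bandwidth (Zwicker & Terhardt, 1980)."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

# Bandwidths widen with frequency, consistent with the ~24 Bark bands
# of about a third of an octave each mentioned above.
for f in (100, 500, 1000, 4000, 10000):
    print(f"{f:>6} Hz -> {critical_bandwidth_hz(f):7.1f} Hz wide")
```

A perceptual coder uses bands like these to decide how much quantization noise can be hidden beneath nearby signal energy.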
Bibliography

Aksoy, P., & DeNardis, L. (2007). Information Technology in Theory. New York: Cengage Learning.
Chiariglione, L. (2012). The MPEG Representation of Digital Media. New York: Springer.
Gibson, J. (1998). Digital Compression for Multimedia: Principles & Standards. New York: Morgan Kaufmann.
Kefauver, A., & Patschke, D. (2007). Fundamentals of Digital Audio. Sydney: A-R Editions, Inc.
Maes, J., & Vercammen, M. (2012). Digital Audio Technology: A Guide to CD, MiniDisc, SACD, DVD(A), MP3 and DAT. Seattle: CRC Press.
Solari, S. (1997). Digital Video and Audio Compression. Washington: McGraw-Hill.