CHAPTER 10: SOUND AND VIDEO EDITING


What should you know

1. Edit a sound clip to meet the requirements of its intended application and audience:
   a. trim a sound clip to remove unwanted material
   b. join together two sound clips
   c. fade in and fade out a sound clip
   d. alter the speed of a sound clip
   e. change the pitch of a sound clip
   f. add or adjust reverberation
   g. overdub a sound clip to include a voice-over
   h. export a sound clip in different file formats
   i. compress the sound file (including the use of MP3) at different sample rates to suit different media.
2. Describe how typical features found in sound editing software are used in practice.
3. Describe how file sizes depend on sampling rate and sampling resolution.

Sampling

Sampling is the conversion of a sound wave (a continuous signal) into a sequence of samples (a discrete-time signal).

Bit Depth

Bit depth describes how many bits are used to store one sample. It is the building block of your sampled or recorded audio signal: the data from a recording has to be stored on a medium such as a hard drive, and the bit depth determines how much information is used to store each sample.

Think about trying to create an image from a box of Lego bricks. With only 8 differently coloured pieces, it would be difficult to resolve a recognisable image; with 64 bricks in every colour you can imagine, you could make something recognisable. It is a simple way to think about the concept, but visually it makes sense: the more storage spaces you have for data, the more accurately you can depict what actually happened. Recording and playing back audio is, at heart, an attempt to recreate a sound wave, and the more slots you have to store information about what that wave looked like, the more accurately your computer can re-create the sound.

What is the real-world translation of this? A higher bit depth gives you more dynamic headroom.
You will have more resolution and detail between the softest and the loudest sounds you record. This comes at the expense of disk space: a larger bit depth means more storage is needed to save all that information. If you are recording in a situation where dynamic range is critical, consider a higher bit depth to give yourself more room to work with. If you are recording a fairly consistently loud band, you can probably save some space and stick with 16-bit audio.
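The relationship between bit depth, quantised sample values, and dynamic range can be sketched in a few lines of Python. The function names here are illustrative, not from any audio library, and real converters also apply dither before rounding:

```python
import math

def quantise(sample, bit_depth):
    """Map a sample in [-1.0, 1.0] onto a signed integer of `bit_depth` bits.

    A minimal sketch: real analog-to-digital converters dither before rounding.
    """
    levels = 2 ** (bit_depth - 1)            # 32768 per side for 16-bit audio
    value = round(sample * (levels - 1))
    return max(-levels, min(levels - 1, value))

def dynamic_range_db(bit_depth):
    """Rule of thumb: each extra bit buys roughly 6 dB of dynamic range."""
    return 20 * math.log10(2 ** bit_depth)

print(quantise(0.5, 16))                 # 16384
print(round(dynamic_range_db(16), 1))    # 96.3 dB for CD audio
print(round(dynamic_range_db(24), 1))    # 144.5 dB for 24-bit recording
```

The jump from about 96 dB at 16 bits to about 144 dB at 24 bits is the "more room to work with" referred to above.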


Sample Rate

Now that you are taking a picture of a moment in time and storing it in bits and bytes, the next question is: how often should you take that snapshot? Should you sample the sound around you 100 times per second? 200 times? Thousands of times? The number of times per second your computer listens to and records what it hears is called the sample rate.

44.1 kHz has been the standard CD-quality sampling rate for years, and with good reason: it is tied to the Nyquist-Shannon sampling theorem. The theorem states (basically) that in order to replicate a sound, the sample frequency (how often the recording device "listens") must be at least twice the highest frequency in the content being sampled. So why 44.1 kHz? Humans generally hear between 20 Hz and 20 kHz, so a 44.1 kHz sample rate comfortably captures everything up to just over 22 kHz. Why waste time, space, and resources recording content most people cannot hear?

Although humans generally cannot hear above 20 kHz, that does not mean we always want to throw away everything above it. As you are probably aware, sounds interact with one another, causing distortion, harmonics, intermodulation, and other psychoacoustic effects. Just because you cannot hear a 30 kHz tone does not mean it cannot affect what happens down in the 15 kHz range, where you can hear it! That is why we have the ability to record at higher sample rates such as 48, 88.2, and even up to 192 kHz. There are many situations (particularly in film, where you are dealing with sound more than music) where the content you are recording benefits from a higher sample rate.

Real-world translation? A higher sample rate requires a bit more space and a good deal more computing horsepower to manipulate, record, and play back. Recording a small rock band may not reap all the benefits of a 192 kHz sample rate, but recording a symphonic orchestra at Carnegie Hall certainly might!
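The folding behaviour that the Nyquist-Shannon theorem warns about can be checked numerically. This sketch uses an illustrative alias_frequency helper, a simplified model that ignores the anti-aliasing filters real converters apply, to show a 30 kHz tone masquerading as 14.1 kHz once sampled at 44.1 kHz:

```python
import math

def alias_frequency(f, fs):
    """Frequency heard when a tone of f Hz is sampled at fs Hz without an
    anti-aliasing filter (a simplified model of frequency folding)."""
    f = f % fs
    return fs - f if f > fs / 2 else f

print(alias_frequency(30000, 44100))  # 14100 -- folds into the audible range
print(alias_frequency(15000, 44100))  # 15000 -- below Nyquist, unchanged

# Numerically, the samples of a 30 kHz sine taken at 44.1 kHz are identical
# to those of a phase-inverted 14.1 kHz sine:
fs = 44100
for n in range(10):
    t = n / fs
    assert abs(math.sin(2 * math.pi * 30000 * t)
               - (-math.sin(2 * math.pi * 14100 * t))) < 1e-9
```

This is exactly why converters must filter out content above half the sample rate before sampling: once folded, the alias is indistinguishable from a genuine in-band tone.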

2.2 Analog to digital

Analog signals are sampled to create digital data. The sampling rate (how many times a second the analog signal is measured) determines the fidelity of the sound and the size of the digital file. Higher sample rates mean bigger files, but it is not just the number of samples that matters. Each sample has a bit depth, the number of bits of information it contains, which directly corresponds to the resolution of that sample. For example, Compact Disc Digital Audio uses 16 bits per sample, while DVD-Audio and Blu-ray Disc support up to 24 bits per sample. This digitisation is carried out by analog-to-digital conversion (ADC). After you have edited a digital file, the computer uses digital-to-analog conversion (DAC) to convert it back into a (relatively) smooth analog waveform that drives your speakers.

2.3 Frequency

Frequency is the number of occurrences of a repeating event per unit of time: the number of complete waves, oscillations, or cycles occurring in unit time (usually one second). For a sound wave this is the number of peaks and troughs per second, given in cycles per second, or hertz (Hz). The average human ear can hear sounds as low as 20 Hz and as high as 20 kHz (20 000 Hz). This is known as the audible range. The perception of frequency is called pitch.
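Because an uncompressed file is nothing but samples, its size follows directly from the figures above: sample rate × bytes per sample × number of channels × duration. A minimal sketch (the function name is illustrative):

```python
def audio_file_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Size of uncompressed (PCM) audio: every second stores `sample_rate`
    samples of bit_depth / 8 bytes for each channel."""
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute of CD-quality stereo (44.1 kHz, 16-bit, 2 channels):
print(audio_file_size_bytes(44100, 16, 2, 60))   # 10584000 bytes, about 10.6 MB
# Mono halves the size; halving the sample rate halves it again:
print(audio_file_size_bytes(22050, 16, 1, 60))   # 2646000 bytes
```

Doubling the sample rate, the bit depth, or the channel count each doubles the file size, which is why format choices matter for storage and distribution.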

2.4 Pitch

Pitch is the frequency of a sound as perceived by the ear. A high-pitched sound corresponds to a high-frequency sound wave and a low-pitched sound to a low-frequency sound wave. We use the word pitch to compare sounds: we perceive higher-frequency sounds as higher in pitch. When we work with audio editing software, we usually speak of changing the pitch rather than altering the frequency.
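The link between playback speed and pitch can be illustrated with a toy resampler: skipping every other sample plays a clip twice as fast and an octave higher. This is only a sketch with an illustrative function name; real editors interpolate between samples, and a pitch-only change (as in syllabus point e) uses time-stretching so the speed stays constant:

```python
def resample_naive(samples, factor):
    """Play back at `factor` times the original speed by nearest-sample lookup.

    A toy illustration (assumes factor > 0): real editors interpolate between
    samples, and pitch-only changes use time-stretching instead.
    """
    out = []
    position = 0.0
    while int(position) < len(samples):
        out.append(samples[int(position)])
        position += factor
    return out

print(resample_naive([0, 1, 2, 3, 4, 5, 6, 7], 2.0))  # [0, 2, 4, 6] -- octave up
print(len(resample_naive([0] * 100, 0.5)))            # 200 -- half speed, octave down
```

This is also why syllabus point d (alter the speed) and point e (change the pitch) are separate operations: naive resampling changes both at once.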

2.5 Amplitude

The height of a sound wave reflects the strength or power of the sound; we call this the amplitude. Higher amplitudes result in higher volumes, which is why we call a device that increases volume an amplifier. When an amplifier tries to increase the amplitude beyond its limits, the waveform is clipped. Clipping produces a distortion, which is one of the effects rock guitarists often use deliberately. We will use several effects in our audio editing practice.

2.6 Audio editing

The diagrams we have seen so far show very simple waveforms. A waveform is a representation of an audio signal or recording: it lets you see the changes in amplitude over time. Audio editing applications, like Audacity, use waveforms to show how sound was recorded. If the waveform is low, the volume was probably low; if the waveform fills the track, the volume was probably high. Changes in the waveform can be used to spot when particular parts of the sound occur. In this screenshot of one of the practice files, we can see the clock ticks and when the alarm rings.
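Clipping is easy to reproduce in code: scale the samples and cap anything that exceeds the maximum level the format can hold. A minimal sketch with an illustrative function name:

```python
def amplify(samples, gain, limit=1.0):
    """Scale samples by `gain`, clipping anything beyond +/-limit, just as an
    amplifier pushed past its maximum output flattens the waveform."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

signal = [0.0, 0.4, 0.8, 0.4, 0.0, -0.4, -0.8, -0.4]
print(amplify(signal, 2.0))  # peaks flattened at +/-1.0 -> audible distortion
```

The flattened peaks add harmonics that were not in the original signal, which is exactly the distortion described above.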

When we need to save the file we have been editing, we must decide which format to use. In theory this is a simple trade-off between the quality of the recording and the size of the file. It is sometimes more important, however, to consider the eventual use and distribution of the recording.

2.6.1 Audio formats

There are three major groups of audio formats:
- uncompressed
- lossless compression
- lossy compression.

Uncompressed formats like .wav give pure digital sound as recorded, but the files can be very large: five minutes of WAV audio can need 40 to 50 MB of storage. Files saved with lossless compression, such as .flac files, still contain the sound exactly as recorded, but the compression algorithm allows some parts of the file to be stored as coded data, saving some space. Files saved with lossy compression, such as .mp3, have some audio information removed and the data simplified. This does result in some loss of quality, but clever algorithms remove only the parts of the sound that have the least effect on what we can hear.

The compression of a recording, and its decompression for playback, is done by CODECs (coder-decoders). CODECs for the common audio formats are usually installed automatically in media applications, and extra CODECs for other file types can be downloaded and installed as required.

Research Questions

1. For a digital audio recording, explain what is meant by:
   a. the sample rate
   b. the bit depth
   c. the bit rate.
2. Give an example of when a low sampling rate is acceptable for audio transmission or recording.
3. Explain the difference between stereo and mono audio.
4. For each of the following, describe the effect on the size of the digital audio file:
   a. recording with an increased sample rate
   b. converting a stereo recording to mono
   c. saving the file in .mp3 format.
5. Explain the function of a CODEC.
6. Give an example recording format for each of the following:
   a. an uncompressed audio file
   b. an audio file recorded with lossless compression
   c. an audio file recorded with lossy compression.
7. Describe an advantage and a disadvantage of both lossless and lossy compression.
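To tie the formats together: Python's standard wave module writes the uncompressed .wav format described above, and a generic lossless compressor from the standard library (zlib here, standing in for a purpose-built audio codec such as FLAC) can then shrink the data without losing a single sample. A sketch that builds one second of a 440 Hz tone entirely in memory:

```python
import io
import math
import struct
import wave
import zlib

sample_rate, bit_depth = 44100, 16

# One second of a 440 Hz sine as signed 16-bit little-endian samples.
frames = b"".join(
    struct.pack("<h", int(32767 * math.sin(2 * math.pi * 440 * n / sample_rate)))
    for n in range(sample_rate)
)

# Write an uncompressed mono WAV file in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)               # mono
    w.setsampwidth(bit_depth // 8)  # 2 bytes per sample
    w.setframerate(sample_rate)
    w.writeframes(frames)

raw = buf.getvalue()                   # 44-byte header + 88 200 bytes of samples
packed = zlib.compress(raw)            # lossless: decompression restores raw exactly
print(len(raw))                        # 88244
print(zlib.decompress(packed) == raw)  # True -- no information lost
```

A lossy codec such as MP3 would instead discard perceptually unimportant detail, so the original samples could not be recovered; that is the trade-off asked about in research question 7.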