Analog vs. Digital
Audio and video can be transmitted as analog signals (continuous voltage variations) or digital signals (discrete binary data). Each has distinct advantages, and modern AV systems often use both, with conversion between them occurring at strategic points.
Analog Signals
An analog signal is a continuous electrical representation of the original sound or video. For audio, the voltage varies smoothly in real time to match the air pressure variations created by sound. For video, the voltage represents brightness and color information.
Analog audio characteristics:
- Microphone → air pressure variations → electrical voltage variations → speaker → sound
- Signal quality degrades slightly with cable length due to attenuation and noise pickup
- Examples: XLR microphone cables, RCA audio cables, analog speaker outputs
Analog video characteristics:
- Camera → light variations → electrical voltage variations → display → light
- Composite video (single yellow RCA) is the oldest standard
- Component video (three RCA: Y/Pb/Pr or RGB) separates color information
- Signal degrades significantly beyond 50-100 feet without proper cable and amplification
Advantages:
- Simpler architecture (single point-to-point connection)
- No conversion latency or processing delay
- Very mature technology with decades of reliability
- Familiar to technicians
Disadvantages:
- Noise accumulates with cable length and in noisy RF environments
- Vulnerable to AC hum and electromagnetic interference
- Limited bandwidth restricts video resolution (analog component video tops out around 1080i/1080p quality)
- Cannot easily combine multiple signals (requires passive mixing, which degrades quality)
Digital Signals
A digital signal encodes audio or video as binary data (ones and zeros) at a high sample rate. For audio, the continuous signal is sampled (measured) thousands of times per second; for video, the image is sampled in pixels and color information.
Audio digitization:
- Analog audio sampled at 44.1 kHz (44,100 times per second), 48 kHz, or higher
- Each sample encoded as binary data (typically 16-bit, 24-bit, or 32-bit)
- 48 kHz / 24-bit is the professional standard
- Examples: Dante, AES67, USB audio, Bluetooth audio
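The quantization step can be sketched in a few lines of Python (an illustrative toy assuming normalized samples in the range -1.0 to 1.0; `quantize` is a hypothetical name, not from any audio library):

```python
# Toy quantizer: map a normalized sample (-1.0..1.0) to a signed integer
# code at a given bit depth, clamping anything outside full scale.

def quantize(sample: float, bits: int) -> int:
    levels = 2 ** (bits - 1)                # 32768 for 16-bit, 8388608 for 24-bit
    code = round(sample * (levels - 1))
    return max(-levels, min(levels - 1, code))

# A 48 kHz, 24-bit stream stores one such code 48,000 times per second per channel.
print(quantize(1.0, 16))    # -> 32767 (max positive 16-bit code)
print(quantize(0.25, 16))   # -> 8192
print(quantize(1.0, 24))    # -> 8388607
```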
Video digitization:
- Image divided into pixels, color encoded in binary (8-bit, 10-bit, or 12-bit per color)
- Video frame rate: 24, 25, 30, 50, or 60 frames per second (fps)
- Examples: HDMI, SDI, NDI, DisplayPort, Ethernet-based video
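These parameters translate directly into raw data rates. A back-of-the-envelope sketch, assuming three color components per pixel and no chroma subsampling (real links often subsample chroma to reduce this):

```python
# Uncompressed video data rate: pixels x components x bits x frames/sec.
def raw_video_bps(width: int, height: int, bits_per_component: int,
                  fps: int, components: int = 3) -> int:
    return width * height * components * bits_per_component * fps

print(raw_video_bps(1920, 1080, 8, 60) / 1e9)    # ~3.0 Gbps for 1080p60 8-bit
print(raw_video_bps(3840, 2160, 10, 60) / 1e9)   # ~14.9 Gbps for 4K60 10-bit
```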
Advantages:
- No quality degradation across cable distance (until transmission errors occur)
- Robust error correction ensures reliable transmission
- Can distribute multiple signals over single cables via networking (Dante, AES67, NDI)
- High bandwidth allows 4K, 8K, and beyond
- Software processing (DSP effects, mixing, routing) becomes possible
- Can embed audio with video in single cable (HDMI, SDI)
Disadvantages:
- Requires conversion at boundaries (an analog microphone signal must be converted to digital before it can enter a digital audio network)
- Adds latency (delay) during conversion and processing
- More complex equipment and system architecture
- Standards are constantly evolving (HDMI versions, codec changes)
- Requires proper bandwidth and network management to prevent dropouts
- Digital clipping (hitting 0 dBFS) is audibly harsh with no graceful degradation
Conversion: Analog to Digital (ADC)
An analog-to-digital converter (ADC) samples an analog signal at regular intervals and converts each sample to a binary value.
Nyquist theorem: To accurately represent a signal, the sample rate must be at least 2× the highest frequency. Capturing the full 20 kHz range of human hearing therefore requires at least a 40 kHz sample rate; practical converters need extra margin for anti-alias filtering, which is why 44.1 kHz and 48 kHz became the standards.
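The rule is simple enough to check numerically. A minimal sketch (the strict inequality reflects the need for margin above exactly 2×):

```python
# Nyquist check: the sample rate must exceed twice the highest signal frequency.
def nyquist_ok(sample_rate_hz: float, max_signal_hz: float) -> bool:
    return sample_rate_hz > 2 * max_signal_hz

print(nyquist_ok(40_000, 20_000))  # False - exactly 2x leaves no filter margin
print(nyquist_ok(44_100, 20_000))  # True
print(nyquist_ok(48_000, 20_000))  # True
```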
Bit depth: Determines resolution. 16-bit audio has 65,536 possible levels; 24-bit has about 16.8 million, capturing finer detail. More bits mean a lower noise floor and greater dynamic range.
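The relationship between bit depth, level count, and theoretical dynamic range (roughly 6.02 dB per bit, plus 1.76 dB for ideal quantization noise) can be sketched as:

```python
# Level count and theoretical dynamic range of ideal PCM quantization.
def levels(bits: int) -> int:
    return 2 ** bits

def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76   # standard approximation for an ideal converter

print(levels(16), round(dynamic_range_db(16), 1))  # 65536 levels, ~98.1 dB
print(levels(24), round(dynamic_range_db(24), 1))  # 16777216 levels, ~146.2 dB
```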
Latency: An ADC and its associated buffering introduce a small delay (from under a millisecond to tens of milliseconds, depending on buffer size). This matters in live applications: a microphone feeding a speaker through a digital system has a noticeable delay if not managed.
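Buffer-induced delay follows directly from buffer size and sample rate. A minimal sketch, per buffer stage (a full signal chain may contain several such stages):

```python
# Latency of one audio buffer stage: buffered samples / sample rate.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 1000.0 * buffer_samples / sample_rate_hz

print(round(buffer_latency_ms(64, 48_000), 2))    # ~1.33 ms - fine for live use
print(round(buffer_latency_ms(2048, 48_000), 2))  # ~42.67 ms - audible on stage
```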
Conversion: Digital to Analog (DAC)
A digital-to-analog converter (DAC) reconstructs an analog signal from digital samples by smoothing the stepped binary values back into a continuous voltage. A powered speaker or amplifier with a digital input includes a DAC to convert the digital audio into the analog voltage needed to drive the speaker.
Hybrid Systems
Most modern AV systems are hybrid:
Live sound example: Microphones (analog) → XLR cables → Mixing console with ADC input → Digital mixing and DSP processing → DAC output → Power amplifiers (analog driver) → Speakers (analog acoustic output)
Video production: Cameras (analog output or HDMI/SDI digital) → Matrix switcher (digital routing) → Streaming encoder (digital to network) → Network distribution (AV-over-IP) → Display (digital HDMI input, converted internally to drive the panel)
Networked audio: Microphones (analog) → Audio interface with ADC → Dante network (digital) → DSP processor → DAC → Powered speakers
The transition between analog and digital is critical. A poor ADC introduces noise. A cheap DAC sounds harsh. Proper gain structure at conversion boundaries ensures fidelity.
Standards and Codecs
Uncompressed digital: Raw audio or video requires high bandwidth. A stereo 24-bit/48 kHz audio stream is roughly 2.3 Mbps; uncompressed 4K 10-bit video runs to multiple gigabits per second.
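The 2.3 Mbps figure falls out of simple multiplication. A sketch for uncompressed PCM audio:

```python
# Uncompressed PCM bandwidth: sample rate x bit depth x channel count.
def pcm_bps(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    return sample_rate_hz * bit_depth * channels

print(pcm_bps(48_000, 24, 2) / 1e6)   # 2.304 Mbps for stereo 24-bit/48 kHz
print(pcm_bps(48_000, 24, 64) / 1e6)  # ~73.7 Mbps for a 64-channel Dante-style stream
```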
Compressed formats: MP3, AAC, H.264, H.265 reduce file size for storage and streaming. Compression introduces processing delay but allows transmission over limited bandwidth.
For live AV, uncompressed signals or very-low-latency compression (such as JPEG XS for video) are standard.
Choosing Analog or Digital
Use analog when:
- Simple point-to-point connections (one microphone to one speaker)
- Ultra-low latency is critical (live PA systems)
- Equipment is older or budget is extremely limited
- Cable runs are short (under 50 feet)
Use digital when:
- Distributing one source to multiple destinations
- Long cable runs or signal integrity is critical
- Advanced processing or mixing is needed
- 4K video or high-resolution audio is required
- Network-based routing and control are desired
Common Pitfalls
- Exceeding Nyquist frequency without filtering: Sampling analog audio at 48 kHz assumes no frequencies above 24 kHz in the input. If an analog microphone picks up ultrasonic content (some condensers do), aliasing occurs—high frequencies fold back into the audible range as artifacts. Always use a proper anti-alias filter before ADC in professional systems.
- Mixing analog and digital without accounting for latency: Combining a direct analog signal with a delayed digital version (even 20ms delay through DSP) creates comb filtering and phase issues when both reach the same destination. In live PA, this is especially problematic. Always measure and document latency in hybrid systems.
- Underestimating jitter in digital audio networks: Dante and AES67 are robust, but network jitter (timing variations) can degrade audio quality if network switches are not properly configured or if cables have poor shielding. Using unmanaged switches for audio networks, or neglecting to separate audio traffic from data traffic, introduces latency and dropouts.
- Not leaving headroom for digital clipping: Digital signals that clip produce very audible distortion with no graceful degradation. A signal peaking at -1 dBFS with a brief transient that pops above 0 dBFS creates harsh clipping. Conservative peaking at -6 to -18 dBFS is essential. Many integrators fail to respect this in streaming or networked audio scenarios.
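The fold-back described in the aliasing pitfall can be computed directly (a sketch; `alias_freq` is an illustrative name, not a library function):

```python
# Where a tone lands after sampling: frequencies above Nyquist fold around fs/2.
def alias_freq(f_hz: float, fs_hz: float) -> float:
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

print(alias_freq(30_000, 48_000))  # 18000.0 - ultrasonic tone lands at 18 kHz
print(alias_freq(20_000, 48_000))  # 20000.0 - below Nyquist, unchanged
```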
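The comb filtering from summing a direct path with a delayed copy has predictable notches at odd multiples of 1/(2 × delay). A sketch:

```python
# First few notch frequencies when a signal is summed with a delayed copy.
def comb_notches_hz(delay_ms: float, count: int = 3) -> list[float]:
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

print(comb_notches_hz(20.0))  # notches near 25, 75, 125 Hz, then every 50 Hz up
```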
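Headroom targets in dBFS map to linear peak levels via 20·log10. A quick sketch:

```python
import math

# Convert a linear peak level (1.0 = digital full scale) to dBFS.
def dbfs(linear_peak: float) -> float:
    return 20 * math.log10(linear_peak)

print(round(dbfs(0.5), 1))    # -6.0 dBFS - half of full scale
print(round(dbfs(0.125), 1))  # -18.1 dBFS - comfortable headroom
```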