
We leverage AudioParam automation to implement ducking. The following is an overview of the ducking logic implemented in the AudioLayer class:

  1. We add a GainNode instance into the node graph as the duck controller.

  2. When a sound effect is played, we script the duck controller’s gain audio parameter to reduce the audio output gain level for the duration of the sound effect.

  3. If ducking is reactivated while it is still active, we revise the scheduled ducking events so that they end at the appropriate time.

The code produces a node graph in which the layer's sound sources are routed through two GainNode instances in series on their way to the destination.

Why use two GainNode instances instead of one?

Splitting independent scripted gain behaviors across separate GainNode instances keeps their automation timelines from interfering. If ducking shared a gain audio parameter with ordinary volume control, the cancelScheduledValues() call below would also wipe out any scheduled volume automation. With separate nodes, the two behaviors compose cleanly because the gains of chained nodes simply multiply.
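To make the wiring concrete, here is a minimal sketch of how such a graph might be built in the AudioLayer constructor. The volumeNode property is a hypothetical name for the ordinary volume gain stage; duckNode matches the property that setDuck() automates below:

function AudioLayer( audioContext ) {
    this.audioContext = audioContext;

    // Gain stage for ordinary volume control (hypothetical name)
    this.volumeNode = audioContext.createGain();

    // Separate gain stage dedicated to ducking automation
    this.duckNode = audioContext.createGain();

    // Chain: sources -> volumeNode -> duckNode -> destination
    this.volumeNode.connect( this.duckNode );
    this.duckNode.connect( audioContext.destination );
}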

Now, let’s take a look at AudioLayer.setDuck(), which implements the ducking behavior; the fragments in the following steps are assembled into the complete method after the walkthrough:

  1. The AudioLayer.setDuck() method takes a duration (in seconds) indicating how long the duck behavior should be applied:

    AudioLayer.prototype.setDuck = function( duration ) {

  2. We cache the duck controller’s gain audio parameter in duckGain:

    var TRANSITIONIN_SECS = 1;
    var TRANSITIONOUT_SECS = 2;
    var DUCK_VOLUME = 0.3;
    var duckGain = this.duckNode.gain;

  3. We cancel any leftover scheduled duck events, allowing us to start with a clean slate:

    var eventSecs = this.audioContext.currentTime;
    duckGain.cancelScheduledValues( eventSecs );

  4. We employ the linearRampToValueAtTime() automation behavior to schedule the transition in: the audio parameter is scripted to ramp linearly from the current volume to the duck volume, DUCK_VOLUME, over TRANSITIONIN_SECS seconds. Because there are no future events scheduled, the ramp starts at the current audio context time:

    duckGain.linearRampToValueAtTime( DUCK_VOLUME, eventSecs + TRANSITIONIN_SECS );

    If the volume is already at DUCK_VOLUME, the ramp is a no-op, which effectively extends the ducking behavior.

  5. We add an automation event to mark the start of the TRANSITIONOUT section. We do this by scheduling a setValueAtTime() automation behavior:

    duckGain.setValueAtTime( DUCK_VOLUME, eventSecs + duration );

  6. Finally, we set up the TRANSITIONOUT section with another linearRampToValueAtTime() automation behavior. The ramp spans TRANSITIONOUT_SECS seconds because its end time is scheduled TRANSITIONOUT_SECS after the setValueAtTime() event from the previous step:

    // Schedule the volume ramp up
    duckGain.linearRampToValueAtTime( 1, eventSecs + duration + TRANSITIONOUT_SECS );
    };
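Assembled from the fragments above, the complete method reads as follows (the constants and this.duckNode are exactly as introduced in the preceding steps):

AudioLayer.prototype.setDuck = function( duration ) {
    var TRANSITIONIN_SECS = 1;
    var TRANSITIONOUT_SECS = 2;
    var DUCK_VOLUME = 0.3;
    var duckGain = this.duckNode.gain;

    // Start from a clean slate
    var eventSecs = this.audioContext.currentTime;
    duckGain.cancelScheduledValues( eventSecs );

    // Ramp down to the duck volume...
    duckGain.linearRampToValueAtTime( DUCK_VOLUME, eventSecs + TRANSITIONIN_SECS );

    // ...hold it until the requested duration elapses...
    duckGain.setValueAtTime( DUCK_VOLUME, eventSecs + duration );

    // ...then ramp back up to full volume
    duckGain.linearRampToValueAtTime( 1, eventSecs + duration + TRANSITIONOUT_SECS );
};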

Plotted over time, the automation applied to duckGain, the duck controller’s gain audio parameter, ramps linearly down to DUCK_VOLUME over TRANSITIONIN_SECS, holds there until duration seconds have elapsed, and then ramps back up to full volume over TRANSITIONOUT_SECS.

In order for sound effect playback to duck the music volume, the sound effects and music have to be played on separate audio layers. That’s why this recipe instantiates two AudioLayer instances: one for music playback and the other for sound effect playback.

The dedicated music AudioLayer instance is cached in the WebAudioApp attribute musicLayer, and the dedicated sound effects AudioLayer instance is cached in the WebAudioApp attribute sfxLayer:

WebAudioApp.prototype.start = function() {
    ...
    this.musicLayer = new AudioLayer( this.audioContext );
    this.sfxLayer = new AudioLayer( this.audioContext );
    ...
};

Whenever a sound effect button is clicked, we play the sound and simultaneously activate the duck behavior on the music layer. This logic is implemented in the sound effect’s click event handler in WebAudioApp.initSfx():

jqButton.click(function( event ) {
    me.sfxLayer.playAudioBuffer( audioBuffer, 0 );
    me.musicLayer.setDuck( audioBuffer.duration );
});

We activate ducking on webAudioApp.musicLayer, the music’s AudioLayer instance. The ducking duration is set to the sound effect’s duration, which we read from its AudioBuffer instance.
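For completeness, here is a minimal sketch of what playAudioBuffer() might look like, assuming the constructor sketched earlier and that the second argument is a start delay in seconds (both are assumptions; the book’s actual implementation may differ):

AudioLayer.prototype.playAudioBuffer = function( audioBuffer, delaySecs ) {
    // Buffer sources are one-shot nodes; create a fresh one per playback
    var source = this.audioContext.createBufferSource();
    source.buffer = audioBuffer;

    // Route the source through the layer's gain chain (assumed volumeNode)
    source.connect( this.volumeNode );
    source.start( this.audioContext.currentTime + delaySecs );
};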

The ducking behavior is just one demonstration of the power of automation. The possibilities are endless given the breadth of automation-friendly audio parameters available in Web Audio. Other possible effects that are achievable through automation include fades, tempo matching, and cyclic panning effects.
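For example, a two-second fade-out can be scheduled with the same primitives; in this sketch, gainNode stands for any GainNode in your graph and audioContext for its owning context:

// Fade the node's output to silence over two seconds
var now = audioContext.currentTime;
var gain = gainNode.gain;
gain.cancelScheduledValues( now );
gain.setValueAtTime( gain.value, now );      // anchor the ramp's start point
gain.linearRampToValueAtTime( 0, now + 2 );  // ramp down to silence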

Please refer to the latest online W3C Web Audio documentation at http://www.w3.org/TR/webaudio/ for a complete list of available audio parameters.

Summary

In this article, we looked at the rules for scheduling automation events. We reviewed the ducking logic implemented in the AudioLayer class, walked through the AudioLayer.setDuck() method that implements the ducking behavior, and analyzed the resulting automation curve.
