
Introduction

Cameras and microphones are standard components on most mobile devices, and Android devices are no exception. The previous article dealt with visual input via the camera. The present article covers capturing raw audio from the device microphone and encoding it to WAV or MP3 for use on other platforms and systems.

All of the recipes in this article are represented as pure ActionScript 3 classes and are not dependent upon external libraries or the Flex framework. Therefore, we will be able to use these examples in any IDE we wish.

The reader is advised to refer to the first recipe of Flash Development for Android: Visual Input via Camera for details on detecting microphone support.

Using the device microphone to monitor audio sample data

By monitoring the sample data returned from the Android device microphone through the ActionScript Microphone API, we can gather much information about the sound being captured and respond to it within our application. Such input can be used in utility applications, learning modules, and even games.

How to do it…

We will set up an event listener to respond to sample data reported through the Microphone API:

  1. First, import the following classes into your project:

    import flash.display.Sprite;
    import flash.display.Stage;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.SampleDataEvent;
    import flash.media.Microphone;
    import flash.text.TextField;
    import flash.text.TextFormat;

    
    
  2. Declare a TextField and TextFormat object pair to allow visible output upon the device. A Microphone object must also be declared for this example:

    private var mic:Microphone;
    private var traceField:TextField;
    private var traceFormat:TextFormat;

    
    
  3. We will now set up our TextField, apply a TextFormat, and add the TextField to the DisplayList. Here, we create a method to perform all of these actions for us:

    protected function setupTextField():void {
        traceFormat = new TextFormat();
        traceFormat.bold = true;
        traceFormat.font = "_sans";
        traceFormat.size = 44;
        traceFormat.align = "center";
        traceFormat.color = 0x333333;
        traceField = new TextField();
        traceField.defaultTextFormat = traceFormat;
        traceField.selectable = false;
        traceField.mouseEnabled = false;
        traceField.width = stage.stageWidth;
        traceField.height = stage.stageHeight;
        addChild(traceField);
    }

    
    
  4. Now, we must instantiate our Microphone object and set it up according to our needs and preferences, with adjustments to codec, rate, silenceLevel, and so forth. Here, we use setSilenceLevel() to determine the minimum input level our application should consider to be "sound", and we set the rate property to 44, indicating that we will capture audio data at a rate of 44 kHz (44,100 Hz). Calling setLoopBack(false) will keep the captured audio from being routed through the device speaker:

    protected function setupMic():void {
        mic = Microphone.getMicrophone();
        mic.setSilenceLevel(0);
        mic.rate = 44;
        mic.setLoopBack(false);
    }

    
    
  5. Once we have instantiated our Microphone object, we can then register a variety of event listeners. In this example, we’ll be monitoring audio sample data from the device microphone, so we will need to register our listener for the SampleDataEvent.SAMPLE_DATA constant:

    protected function registerListeners():void {
        mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);
    }

    
    
  6. As the Microphone API generates sample data from the Android device input, we can respond to it in a number of ways. We have access to information about the Microphone object itself and, more importantly, to the sample bytes, with which we can perform a number of advanced operations:

    public function onMicData(e:SampleDataEvent):void {
        traceField.text = "";
        traceField.appendText("activityLevel: " + e.target.activityLevel + "\n");
        traceField.appendText("codec: " + e.target.codec + "\n");
        traceField.appendText("gain: " + e.target.gain + "\n");
        traceField.appendText("bytesAvailable: " + e.data.bytesAvailable + "\n");
        traceField.appendText("length: " + e.data.length + "\n");
        traceField.appendText("position: " + e.data.position + "\n");
    }

    
    
  7. The output will look something like this. The first three values are taken from the Microphone object itself, and the remaining three from the sample data's ByteArray:

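     Exact values will vary by device and by the moment of capture; a purely illustrative reading might resemble the following (these numbers are not from a real session):

    activityLevel: 12
    codec: NellyMoser
    gain: 50
    bytesAvailable: 8192
    length: 8192
    position: 0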

How it works…

When we instantiate a Microphone object and register a SampleDataEvent.SAMPLE_DATA event listener, we can easily monitor various properties of our Android device microphone and the associated sample data being gathered. We can then respond to that data in many ways. One example would be to move objects across the Stage based upon the Microphone.activityLevel property. Another example would be to write the sample data to a ByteArray for later analysis.
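As a minimal sketch of the first idea, we could map the activity level onto a Sprite's horizontal position. The Sprite named box and the handler name onMicActivity below are illustrative additions, not part of the recipe:

    // Assumes a Sprite named "box" has already been created and added
    // to the DisplayList, and "mic" is the Microphone instance set up
    // earlier in this recipe.
    protected function onMicActivity(e:SampleDataEvent):void {
        // activityLevel ranges from 0 to 100 (-1 when the microphone
        // is inactive), so clamp it before mapping to the Stage width.
        var level:Number = Math.max(0, mic.activityLevel);
        box.x = (level / 100) * (stage.stageWidth - box.width);
    }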

What do all these properties mean?

  • activityLevel: This is a measurement indicating the amount of sound being received
  • codec: This indicates the codec being used: Nellymoser or Speex
  • gain: This is the amount of boost the microphone applies to the sound signal
  • bytesAvailable: This reveals the number of bytes from the present position until the end of our sample data ByteArray
  • length: This is the total length of our sample data ByteArray
  • position: This is the current position, in bytes, within our sample data ByteArray
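The last three properties govern how we read the samples ourselves. A rough sketch of the second idea mentioned above, copying the incoming samples into a ByteArray for later analysis, follows; the soundBytes variable and the onMicSample handler name are illustrative additions on top of this recipe:

    import flash.utils.ByteArray;

    // Illustrative buffer for accumulating captured samples.
    private var soundBytes:ByteArray = new ByteArray();

    protected function onMicSample(e:SampleDataEvent):void {
        // Each sample arrives as a 32-bit float; each readFloat() call
        // advances position until bytesAvailable reaches zero.
        while (e.data.bytesAvailable > 0) {
            soundBytes.writeFloat(e.data.readFloat());
        }
    }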

 
