4
votes

Question: I am making an application with NodeJS where a user loads a page and the microphone streams its data to the NodeJS server (I am using Socket.IO for the websocket part). I have got the streaming working fine, but now I am wondering how I can play back the audio I receive.

Here is a picture of the message I receive from the stream that I am trying to play in the browser; I am guessing it's PCM audio, but I'm no expert: http://i.imgur.com/mIROL1T.png. This object is 1023 elements long.

The code I am using on the browser is as follows (too long to put directly here): https://gist.github.com/ZeroByter/f5690fa9a7c20e2b24cccaa5a8cf3b86

Problem: I ripped the socket.on("mic") handler from here, but I am not sure how to make it efficiently play the audio data it is receiving.

This is not my first time using WebSockets; I am well aware of the basics of how WebSockets work, but it is my first time using the Web Audio API, so I need some help with this.


1 Answer

2
votes

Yes, your image clip does look like PCM audio, which is Web Audio API friendly.
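As a minimal sketch of that (not code from my repo - the 16-bit signed mono sample format and 44100 Hz rate are my assumptions, while socket and the "mic" event come from your gist), the naive way to push each received PCM chunk through the Web Audio API is:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

socket.on("mic", function (data) {
    var int16 = new Int16Array(data);             // raw samples off the wire (assumed format)
    var float32 = new Float32Array(int16.length);
    for (var i = 0; i < int16.length; i++) {
        float32[i] = int16[i] / 32768;            // scale to the -1.0 .. 1.0 range Web Audio expects
    }
    var buffer = audioCtx.createBuffer(1, float32.length, 44100); // mono, assumed sample rate
    buffer.getChannelData(0).set(float32);
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.start();                               // naive: no scheduling between chunks
});

Played chunk-by-chunk like this, the seams between buffers will pop, which is exactly the problem described next.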

I wrote such a WebSocket-based browser client to render received PCM audio from my nodejs server using the Web Audio API ... getting the audio to render is straightforward; however, babysitting the WebSocket in the single-threaded javascript environment to receive the next audio buffer, which is inherently preemptive, will cause audible pops/glitches without the tricks outlined below.

The solution which finally worked is to put all WebSocket logic into a Web Worker, which populates a worker-side circular queue. The browser side then plucks the next audio buffer's worth of data from that worker queue and populates the Web Audio API memory buffer, driven from inside the Web Audio API event loop. It all comes down to avoiding, at all costs, doing any real work on the browser side that would cause the audio event loop to starve or fail to finish in time to service its own next event.
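A rough sketch of that split (illustrative only - the file name ws_worker.js, the endpoint URL, and the chunk size are my placeholders, not what the repo uses):

// ws_worker.js (hypothetical file name) - the worker owns the WebSocket
var queue = [];                                           // stand-in for the circular queue
var socket = new WebSocket("ws://localhost:8080/audio");  // assumed endpoint
socket.binaryType = "arraybuffer";

socket.onmessage = function (event) {
    queue.push(new Float32Array(event.data));             // enqueue each received audio buffer
};

onmessage = function (e) {                                // main thread asks for the next chunk
    if (e.data === "next") {
        postMessage(queue.length ? queue.shift() : null);
    }
};

And the browser side, which only copies memory inside the audio callback:

// main thread - pull from the worker queue inside the audio event loop
var worker = new Worker("ws_worker.js");
var pending = null;
worker.onmessage = function (e) { pending = e.data; };

var ctx = new AudioContext();
var node = ctx.createScriptProcessor(1024, 1, 1); // chunk size must match what the server sends
node.onaudioprocess = function (e) {
    var out = e.outputBuffer.getChannelData(0);
    if (pending) { out.set(pending); pending = null; }
    else { for (var i = 0; i < out.length; i++) out[i] = 0; } // underrun: emit silence
    worker.postMessage("next");                   // request the next chunk ahead of time
};
node.connect(ctx.destination);

The point is that onaudioprocess does nothing but a memory copy; all socket handling, parsing, and queueing happens off the audio thread's critical path.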

I wrote this as my first foray into javascript, so ... also, you must do a browser F5 to reload the screen to play back a different stream (there are 4 different audio source files to pick from) ...

https://github.com/scottstensland/websockets-streaming-audio

I would like to simplify the usage to become API driven and not baked into the same codebase (separating the low-level logic from the userspace calls).

hope this helps

UPDATE - this git repo renders mic audio using the Web Audio API - this self-contained example shows how to access the audio memory buffer ... the repo also has another minimal inline html example which just plays the mic audio, shown below:

<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>capture microphone then show time & frequency domain output</title>

<script type="text/javascript">

var webaudio_tooling_obj = function () {

    // see this code at repo :   
    //    https://github.com/scottstensland/webaudioapi-microphone

    // if you want to see logic to access audio memory buffer
    // to record or send to downstream processing look at
    // other file :  microphone_inline.html
    // this file contains just the minimum logic to render mic audio

    var audioContext = new AudioContext(); // entry point of Web Audio API

    console.log("audio is starting up ...");

    var audioInput = null,
        microphone_stream = null,
        gain_node = null,
        script_processor_node = null,
        script_processor_analysis_node = null,
        analyser_node = null;

    // get browser media handle
    if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.webkitGetUserMedia ||
                                 navigator.mozGetUserMedia ||
                                 navigator.msGetUserMedia;

    if (navigator.getUserMedia) { //register microphone as source of audio

        navigator.getUserMedia({audio:true}, 
            function(stream) {
                start_microphone(stream);
            },
            function(e) {
                alert('Error capturing audio.');
            }
            );

    } else { alert('getUserMedia not supported in this browser.'); }

    // ---

    function start_microphone(stream) {

        // create a streaming audio context where source is microphone
        microphone_stream = audioContext.createMediaStreamSource(stream);

        // define as output of microphone the default output speakers
        microphone_stream.connect( audioContext.destination ); 

    } // start_microphone

}(); //  webaudio_tooling_obj = function()

</script>

</head>
<body></body>
</html>

I give a how-to for setting up this file in the above git repo ... the above code shows the minimal logic to render mic audio using the Web Audio API in a browser.
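For completeness, the buffer-access logic that microphone_inline.html demonstrates boils down to tapping the stream with a ScriptProcessorNode (this is an illustrative sketch reusing the variable names above, not the repo file itself):

function start_microphone(stream) {

    microphone_stream = audioContext.createMediaStreamSource(stream);
    microphone_stream.connect(audioContext.destination); // still render mic audio to speakers

    // tap the raw samples: each callback hands you one buffer of mic audio
    script_processor_node = audioContext.createScriptProcessor(2048, 1, 1);
    script_processor_node.onaudioprocess = function (event) {
        var samples = event.inputBuffer.getChannelData(0); // Float32Array in -1.0 .. 1.0
        // record the samples here, or ship them downstream (e.g. over a websocket)
    };

    microphone_stream.connect(script_processor_node);

    // the node outputs silence (we never write outputBuffer), but some browsers
    // only fire onaudioprocess when the node is connected to the destination
    script_processor_node.connect(audioContext.destination);
}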