Yes, your image clip does look like PCM audio, which is Web Audio API friendly.
I wrote such a WebSocket based browser client to render PCM audio received from my nodejs server using the Web Audio API ... getting the audio to render is straightforward, however babysitting the WebSocket in the single threaded javascript environment, so the next audio buffer arrives in time, will (without the tricks outlined below) cause audible pops/glitches.
The solution which finally worked was to put all WebSocket logic into a Web Worker which populates a worker side circular queue. The browser side then plucks the next audio buffer's worth of data from that worker queue and copies it into the Web Audio API memory buffer, driven from inside the Web Audio API event loop. It all comes down to avoiding at all costs doing any real work on the browser side which would cause the audio event loop to starve or miss servicing its own next callback.
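For illustration only, here is a stripped down sketch of that split - this is not the repo's actual code, the file name audio_worker.js and the ws:// URL are placeholders, and it assumes the server sends raw 32 bit float PCM at the context sample rate:

// audio_worker.js (hypothetical name) - owns the WebSocket so network
// traffic never runs on the thread that services the audio callbacks
var socket = new WebSocket("ws://localhost:8080/stream"); // placeholder URL
socket.binaryType = "arraybuffer";

socket.onmessage = function (event) {
    // hand each PCM chunk to the page as a transferable (no copy)
    postMessage(event.data, [event.data]);
};

and on the page itself:

// main page - only copies samples inside onaudioprocess, nothing else
var audioContext = new AudioContext();
var queue = [];                                // PCM chunks waiting to be played
var worker = new Worker("audio_worker.js");

worker.onmessage = function (event) {
    queue.push(new Float32Array(event.data));  // enqueue, do not decode here
};

var player_node = audioContext.createScriptProcessor(4096, 1, 1);
player_node.onaudioprocess = function (event) {
    var output = event.outputBuffer.getChannelData(0);
    var chunk = queue.shift();
    if (chunk && chunk.length === output.length) {
        output.set(chunk);                     // plain buffer copy, no real work
    } else {
        output.fill(0);                        // underrun - emit silence
    }
};
player_node.connect(audioContext.destination);

In the repo the circular queue lives inside the worker and the page pulls from it on demand, but the principle is identical: the onaudioprocess callback only ever does a straight buffer copy, never any WebSocket or parsing work.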
I wrote this as my first foray into javascript so ... also note you must do a browser F5 to reload the page in order to play back a different stream (there are 4 different audio source files to pick from) ...
https://github.com/scottstensland/websockets-streaming-audio
I would like to simplify the usage to become API driven rather than baked into the same codebase (separating the low level logic from the userspace calls).
hope this helps
UPDATE - this git repo renders mic audio using the Web Audio API ... it shows how to access the audio memory buffer, and it also has a minimal inline html example, shown below, which simply plays back the mic audio:
<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>capture microphone then show time & frequency domain output</title>

<script type="text/javascript">

var webaudio_tooling_obj = function () {

    // see this code at repo :
    //     https://github.com/scottstensland/webaudioapi-microphone
    //
    // if you want to see the logic to access the audio memory buffer,
    // in order to record or send to downstream processing, look at the
    // other file : microphone_inline.html
    // this file contains just the minimum logic to render mic audio

    var audioContext = new AudioContext();   // entry point of Web Audio API

    console.log("audio is starting up ...");

    var audioInput = null,
        microphone_stream = null,
        gain_node = null,
        script_processor_node = null,
        script_processor_analysis_node = null,
        analyser_node = null;

    // get browser media handle (prefixed fallbacks for older browsers)
    if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.getUserMedia ||
                                 navigator.webkitGetUserMedia ||
                                 navigator.mozGetUserMedia ||
                                 navigator.msGetUserMedia;

    if (navigator.getUserMedia) {   // register microphone as source of audio

        navigator.getUserMedia({audio:true},
            function(stream) {
                start_microphone(stream);
            },
            function(e) {
                alert('Error capturing audio.');
            }
        );

    } else { alert('getUserMedia not supported in this browser.'); }

    // ---

    function start_microphone(stream) {

        // create a streaming audio source node fed by the microphone
        microphone_stream = audioContext.createMediaStreamSource(stream);

        // connect the microphone source to the default output speakers
        microphone_stream.connect( audioContext.destination );

    } // start_microphone

}(); // webaudio_tooling_obj = function()

</script>

</head>
<body></body>
</html>
I give a How-To for setting up this file in the above git repo ... the above code shows the minimal logic to render audio from the mic using the Web Audio API in a browser.
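If you just want the general shape of the buffer access logic without opening microphone_inline.html, a hedged sketch (not the repo's exact code) would drop a ScriptProcessorNode between the mic source and the speakers inside start_microphone:

// sketch only - microphone_inline.html in the repo is the authoritative version
script_processor_node = audioContext.createScriptProcessor(2048, 1, 1);

script_processor_node.onaudioprocess = function (event) {
    // raw PCM samples for this callback as a Float32Array
    var samples = event.inputBuffer.getChannelData(0);

    // ... record them, send them over a WebSocket, run analysis, etc.

    // copy input to output so the mic audio is still audible
    event.outputBuffer.getChannelData(0).set(samples);
};

microphone_stream.connect(script_processor_node);
script_processor_node.connect(audioContext.destination);

The only rule is the same one as before: do as little as possible inside onaudioprocess, and push anything heavy off to a Web Worker.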