From an Android device, a Node server is receiving chunks of a raw audio file as ArrayBuffers. The server converts each chunk (say, 10 seconds' worth of playable audio) into WAV format in order to stream it to an HTML5 audio element. The idea is that as soon as the Android user presses "send", the browser can immediately begin to hear the streamed audio.
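For context, the per-chunk conversion amounts to prepending a standard 44-byte RIFF header to the raw PCM data. A minimal sketch of that step — the function name and the 16-bit mono 44.1 kHz parameters are assumptions, not the actual server code:

```javascript
// Prepend a RIFF/WAVE header to a raw PCM buffer so the browser can
// play it. Defaults assume 16-bit little-endian mono PCM at 44.1 kHz.
function pcmToWav(pcm, sampleRate = 44100, channels = 1, bitsPerSample = 16) {
  const byteRate = sampleRate * channels * (bitsPerSample / 8);
  const blockAlign = channels * (bitsPerSample / 8);
  const header = Buffer.alloc(44);

  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcm.length, 4);  // total size minus 8 bytes
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);              // fmt chunk size
  header.writeUInt16LE(1, 20);               // audio format 1 = PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36);
  header.writeUInt32LE(pcm.length, 40);      // data chunk size

  return Buffer.concat([header, pcm]);
}
```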
How can I stream these files consecutively to the HTML audio tag so that it sounds like one big file?
Currently I can stream one 10-second file to the tag and it plays. I also tried the audio tag's "onended" attribute to fetch the next 10-second file, but you can hear that it is not connected as one file.
// current approach: stream a single converted file per response
const audioStream = fs.createReadStream(audioPath);
audioStream.pipe(res);
Ideally I need to pipe all the files consecutively as they come in from the Android client. I cannot first concatenate them into one file, because the server is still receiving chunks while the browser is playing (a latency of up to a few seconds is OK).
What can be done?
Edit:
Here is what I've tried:
var audio = document.getElementById('my');
var mediaSource = new MediaSource();
var SEGMENTS = 5;
var segmentsAppended = 0;

// This wiring was missing: the audio element has to play the MediaSource.
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
  var sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');

  function fetchSegment() {
    var oReq = new XMLHttpRequest();
    oReq.open("GET", '6.mp3');
    oReq.responseType = "arraybuffer";
    oReq.onload = function() {
      sourceBuffer.appendBuffer(oReq.response); // not responseText
    };
    oReq.send();
  }

  // Each time an append finishes, fetch the next segment. Without the
  // SEGMENTS cap this loops forever and the source buffer fills up.
  sourceBuffer.addEventListener('updateend', function() {
    if (++segmentsAppended < SEGMENTS) {
      fetchSegment();
    } else {
      mediaSource.endOfStream();
    }
  });

  fetchSegment();
});
Currently, I am fetching the same MP3 file from the server repeatedly, with a desire to string copies together to see if it works. As I mentioned, appending without bound gives a "source buffer full" error.
In the long run I'd move this to a WebSocket, so that each time the server receives a chunk it can push it here, stringing the results into one "live stream". Any thoughts? Thank you.
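The WebSocket version would still need to serialize the appends, since `appendBuffer()` throws if called while the SourceBuffer is mid-update. A sketch of that queueing — the socket URL and the wiring comments are hypothetical:

```javascript
// Queue incoming chunks and append them one at a time, because
// SourceBuffer.appendBuffer() throws an InvalidStateError if called
// while a previous append is still updating.
function makeAppender(sourceBuffer) {
  const queue = [];
  function flush() {
    if (!sourceBuffer.updating && queue.length > 0) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  }
  sourceBuffer.addEventListener('updateend', flush);
  return function enqueue(chunk) {
    queue.push(chunk);
    flush();
  };
}

// Hypothetical wiring (browser only):
// const enqueue = makeAppender(sourceBuffer);
// const ws = new WebSocket('ws://example-server/audio');
// ws.binaryType = 'arraybuffer';
// ws.onmessage = function(e) { enqueue(e.data); };
```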
EDIT:
As it turns out, my whole approach here is wrong. Because I am converting each received chunk of raw audio into a standalone MP3 file (it cannot be WAV, since WAV has no Media Source Extensions support), a gap is audible between files even though I am able to string them together using MSE. I am assuming this is because of the "padding" in each MP3 file.
Does anyone have an alternative that would let me play my chunks of raw audio seamlessly in the browser? Unless I am missing something in this implementation. Thank you.
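One alternative I'm aware of for raw PCM specifically: skip MP3 and MSE entirely and schedule each chunk with the Web Audio API, which can start AudioBuffers back-to-back on a sample-accurate clock. A browser-only sketch, assuming 16-bit little-endian mono PCM — the function names and the 44.1 kHz rate are assumptions:

```javascript
// Convert 16-bit signed PCM samples to the Float32 range [-1, 1)
// that Web Audio AudioBuffers expect.
function int16ToFloat32(int16) {
  const out = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    out[i] = int16[i] / 32768;
  }
  return out;
}

// Browser-only: schedule each incoming raw-PCM chunk to start exactly
// when the previous one ends, so playback is gapless.
function createPcmPlayer(sampleRate = 44100) {
  const ctx = new AudioContext({ sampleRate });
  let nextStart = 0; // context time at which the next chunk should begin

  return function playChunk(arrayBuffer) {
    const samples = int16ToFloat32(new Int16Array(arrayBuffer));
    const audioBuffer = ctx.createBuffer(1, samples.length, sampleRate);
    audioBuffer.copyToChannel(samples, 0);

    const source = ctx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(ctx.destination);

    // Never schedule in the past; leave a little startup headroom.
    nextStart = Math.max(nextStart, ctx.currentTime + 0.05);
    source.start(nextStart);
    nextStart += audioBuffer.duration;
  };
}
```

This sidesteps MP3 encoder padding entirely, since the raw samples are played as-is and each chunk's start time is derived from the previous chunk's duration.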