I'm trying to play multiple tracks at the same time using the Web Audio API (the end goal is a geospatial sound engine driven by geolocation).
I can't manage to properly connect two or more HTML5 audio sources to the audio context, although it does work in very simple cases with imperative code.
Here are two examples:
The first one is not scalable, but it works in Chrome:
window.addEventListener('load', async function (e) {
  // assumes audioContext has already been created earlier on the page (not shown)
  let file = await stansardAudioLoader('./assets/fire.mp3')
  let file1 = await stansardAudioLoader('./assets/pluie.wav')

  let source = audioContext.createMediaElementSource(file)
  let source1 = audioContext.createMediaElementSource(file1)

  source.connect(audioContext.destination)
  source1.connect(audioContext.destination)

  file.play()
  file1.play()
}, false);
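Both snippets rely on my stansardAudioLoader helper. For reproduction, a minimal stand-in could look like this (illustrative only, not my exact implementation):

// Minimal stand-in for stansardAudioLoader (illustrative only):
// creates an HTMLAudioElement for the URL and resolves once it can play through.
const stansardAudioLoader = (url) =>
  new Promise((resolve, reject) => {
    const audio = new Audio(url)
    audio.addEventListener('canplaythrough', () => resolve(audio), { once: true })
    audio.addEventListener('error', () => reject(new Error(`Could not load ${url}`)), { once: true })
  })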
The second one is more manageable and very close to the first one, but it doesn't work:
const loadAndPlay = (ctx) => async (url) => {
  const file = await stansardAudioLoader(url)
  const source = ctx.createMediaElementSource(file)
  source.connect(ctx.destination)
  file.play()
}

window.addEventListener('load', async () => {
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();
  const loader = loadAndPlay(audioContext)
  loader('./assets/fire.mp3')
  loader('./assets/pluie.wav')
}, false);
These are two deliberately simplified examples to make the problem easy to test and reproduce. My full code uses an OOP, class-based approach, and hundreds of files may be loaded, played and updated.
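To give an idea of the shape of the full code (the class and method names below are illustrative, not my actual code), each sound is roughly an object that owns one media element and one source node, all sharing a single AudioContext:

// Illustrative sketch only, not my actual classes: one object per sound,
// all sharing a single AudioContext.
class Track {
  constructor(ctx, url) {
    this.ctx = ctx
    this.url = url
  }

  async load() {
    this.element = await stansardAudioLoader(this.url)
    this.source = this.ctx.createMediaElementSource(this.element)
    this.source.connect(this.ctx.destination)
  }

  play() {
    this.element.play()
  }
}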
Two things would be helpful for me:
- How to play two or more sounds simultaneously with the Web Audio API, without using context.decodeAudioData
- General guidance on how to handle lots of files and spatial audio with the Web Audio API
Thank you for your help!