
I'm currently developing a Node.js application on macOS that uses a virtual audio driver (SoundPusher) to route two separate audio streams: audio from a camera's RTSP feed into Zoom's microphone input, and audio from Zoom's output into an outgoing RTSP stream:

1. Camera RTSP/audio element (SoundPusher Speaker) -> Zoom input (SoundPusher Mic)

2. Zoom output (SoundPusher Speaker) -> output RTSP stream (piped from SoundPusher Mic)

1. The implementation I have right now pipes the audio from the camera RTSP stream to an HTTP server with ffmpeg. On the client side, I create an audio element that streams the audio from the HTTP server via HLS. I then call setSinkId on the audio element to direct the audio to the SoundPusher input, and set my microphone in Zoom to the SoundPusher output.

    // Create an audio element that plays the HTTP stream served by ffmpeg.
    // Cast to `any` so TypeScript accepts setSinkId, which is missing from older DOM typings.
    const audio = document.createElement('audio') as any;
    audio.src = 'http://localhost:9999';
    audio.setAttribute('type', 'audio/mpeg');
    // Route this element's playback to the SoundPusher speaker device.
    await audio.setSinkId(audioDriverSpeakerId);
    audio.play();
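
For context, the server side is an ffmpeg command roughly like the following (a minimal sketch; the camera URL is a placeholder, and the codec/format flags are illustrative rather than my exact command):

    # Pull audio from the camera's RTSP feed, drop the video, transcode
    # to MP3, and serve it over HTTP for the audio element to consume.
    ffmpeg -i rtsp://CAMERA_URL/stream -vn -c:a libmp3lame -f mp3 -listen 1 http://localhost:9999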

2. I also have the SoundPusher input set as Zoom's audio output, so I can capture the Zoom audio from the SoundPusher output and pipe it to the outgoing RTSP stream:

    ffmpeg -f avfoundation -i "none:SoundPusher Audio" -c:a aac -f rtsp rtsp://127.0.0.1:5554/audio
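
Here "SoundPusher Audio" should match the device name as it appears in ffmpeg's avfoundation device listing, which can be printed with:

    # Print the avfoundation video and audio devices available for capture.
    ffmpeg -f avfoundation -list_devices true -i ""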

The problem is that the camera audio is being mixed in with the Zoom audio in the output RTSP stream, where I expect to hear only the Zoom audio. Does anyone know a way to keep the two streams separate while using the same audio driver? I'd like the route from the audio element into Zoom to stay isolated from the route from Zoom to the output RTSP stream.

I'm very new to audio streaming, so any advice would be appreciated.