1 vote

I want to stream audio from a web page to a local server using WebRTC. The server will process the audio and output it back to the user immediately; I need real time.

My code is working. However, I am asking the user for microphone access with getUserMedia, and I don't need the microphone, which is quite annoying. What can I do to stream the audio without having to ask the user for the microphone?

Thank you.

Here is a minimal working example (heavily inspired by https://github.com/aiortc/aiortc/blob/main/examples/server/client.js). Only the last part, with comments, is interesting:

let webSocket = new WebSocket('wss://0.0.0.0:8080/ws');
const config = { sdpSemantics: 'unified-plan' };

const pc = new RTCPeerConnection(config);

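// Apply the server's SDP answer when it arrives over the WebSocket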
webSocket.onmessage = (message) => {
    const data = JSON.parse(message.data); 
    switch(data.type) {
        case "answer":
            pc.setRemoteDescription(data.answer);
            break;
        default: 
            break; 
    }
};

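// Create an offer, wait for ICE gathering to complete, then send the
// full SDP to the server over the WebSocket (no trickle ICE)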
function negotiate() {
    return pc.createOffer()
    .then(function(offer) {
        return pc.setLocalDescription(offer);
    })
    .then(function() {
        return new Promise(function(resolve) {
            if (pc.iceGatheringState === 'complete') {
                resolve();
            } else {
                function checkState() {
                    if (pc.iceGatheringState === 'complete') {
                        pc.removeEventListener('icegatheringstatechange', checkState);
                        resolve();
                    }
                }
                pc.addEventListener('icegatheringstatechange', checkState);
            }
        });
    })
    .then(function() {
        const offer = pc.localDescription;
        webSocket.send(
            JSON.stringify({
                type: "offer",
                offer: {
                    sdp: offer.sdp,
                    type: offer.type
                }
            })
        );
    });
}

// Preparing the oscillator and a MediaStreamAudioDestinationNode whose
// stream will be sent over the peer connection
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const oscillator = audioCtx.createOscillator();
const serverDestination = audioCtx.createMediaStreamDestination();
oscillator.connect(serverDestination);

// Asking for useless microphone
navigator.mediaDevices.getUserMedia({audio: true})
.then(() => {
    return negotiate();
});

// Actual streaming: add the destination's audio track to the peer connection
const stream = new MediaStream();
serverDestination.stream.getTracks().forEach((track) => {
    pc.addTrack(track, stream);
});

// The user pushes a button to start the oscillator
function play() {
    oscillator.start();
}
It might be a problem with aiortc and mDNS: github.com/feross/simple-peer/issues/502#issuecomment-511221792. aiortc did not support mDNS until a couple of days ago, after I had first downloaded it. However, I've updated it and it still does not work. Maybe it is still buggy. I'll consider trying another backend eventually. - FourchetteVerte
According to the aiortc developer, there is currently a bug in Firefox: github.com/aiortc/aiortc/issues/481 - FourchetteVerte

2 Answers

0 votes

Just get rid of this:

// Asking for useless microphone
navigator.mediaDevices.getUserMedia({audio: true})
.then(() => {
    return negotiate();
});

As you say, it's useless: nothing requires it. If you don't call getUserMedia(), the user won't be prompted to share their microphone, and you can make WebRTC connections without it.
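
For instance, here is a minimal sketch of the relevant part of your code with that call removed (reusing the pc, negotiate, and serverDestination from the question):

// Add the oscillator's track and negotiate right away; no getUserMedia needed
const stream = new MediaStream();
serverDestination.stream.getTracks().forEach((track) => {
    pc.addTrack(track, stream);
});
negotiate();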

I suspect the problem you're then running into is that your AudioContext is suspended: due to the browser's autoplay policy, it can only start after a user gesture. If you call audioCtx.resume() when the user clicks the button, you'll be up and running.
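
For example, a sketch of the play() function from the question with that change, assuming the same audioCtx and oscillator:

// The click is a user gesture, so the browser allows resuming the
// suspended AudioContext; start the oscillator once it is running
function play() {
    audioCtx.resume().then(() => {
        oscillator.start();
    });
}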

0 votes

If you don't need user media, don't ask for it with getUserMedia in your code.