2 votes

In my app, I have multiple open peer connections, and I want to be able to mute the microphone at the peer-connection level, not globally (as is done here).

Chrome is straightforward:

  • Call removeStream when muting
  • Call addStream when unmuting

Downside: I understand that we are moving towards an addTrack/removeTrack world, so this solution is neither compatible with other browsers nor future-proof.
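For reference, that Chrome-only approach looks roughly like the sketch below. `pc` and `localStream` are assumed to exist; note that removeStream/addStream are deprecated and, as pointed out in the comments, trigger renegotiation as well:

```javascript
// Sketch of the deprecated Chrome-only approach (assumes an existing
// RTCPeerConnection `pc` and a local MediaStream `localStream`).
function muteChrome(pc, localStream) {
  pc.removeStream(localStream);   // deprecated, Chrome-only
}

function unmuteChrome(pc, localStream) {
  pc.addStream(localStream);      // deprecated, Chrome-only
}
```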

In Firefox, this approach does not work at all:

  • removeTrack/addTrack requires renegotiation, which is not acceptable, as it takes time
  • replaceTrack does not require renegotiation and my idea would be to have an empty MediaStreamTrack for mute that I could use to replace the former MediaStreamTrack. Any idea how to do that?
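One way to get such a silent replacement track is Web Audio: a MediaStreamAudioDestinationNode with no source connected to it yields an audio track carrying silence. This is a sketch of that idea, not something confirmed in the thread; `sender` is an assumed RTCRtpSender obtained elsewhere (e.g. from addTrack):

```javascript
// Sketch: build a silent audio track with Web Audio and swap it in via
// replaceTrack, which avoids renegotiation. `sender` is an assumed
// RTCRtpSender.
function silentAudioTrack() {
  var ctx = new AudioContext();
  var dest = ctx.createMediaStreamDestination();
  // Nothing is connected to `dest`, so its stream carries silence.
  return dest.stream.getAudioTracks()[0];
}

function muteSender(sender) {
  return sender.replaceTrack(silentAudioTrack());
}

function unmuteSender(sender, realTrack) {
  return sender.replaceTrack(realTrack);
}
```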

Alternatively, any ideas on a viable Firefox solution / a cooler Chrome solution / a unified approach?

I was not aware that one could mute/unmute with add/removeStream in Chrome. Definitely non-standard. – jib
It requires renegotiation as well... – Philipp Hancke
Oh, you're right. clone() definitely seems to be the best approach for Chrome. – nexus

1 Answer

3 votes

The way to do it in Firefox (and Chrome, and the future) is to clone the tracks, to give you independent track.enabled controls:

var track1, track2;

navigator.mediaDevices.getUserMedia({audio: true}).then(stream => {
  var clone = stream.clone();            // clones the audio track as well
  track1 = stream.getAudioTracks()[0];
  track2 = clone.getAudioTracks()[0];
});

var toggle = track => track.enabled = !track.enabled;

Try it below (use https fiddle in Chrome):

var track1, track2;

navigator.mediaDevices.getUserMedia({audio: true}).then(stream => {
  var clone = stream.clone();
  track1 = stream.getAudioTracks()[0];
  track2 = clone.getAudioTracks()[0];
  return Promise.all([spectrum(stream), spectrum(clone)]);
}).catch(e => console.log(e));

var toggle = track => track && (track.enabled = !track.enabled);

var spectrum = stream => {
  var audioCtx = new AudioContext();
  var analyser = audioCtx.createAnalyser();
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(analyser);

  var canvas = document.createElement("canvas");
  var canvasCtx = canvas.getContext("2d");
  canvas.width = window.innerWidth/2 - 20;
  canvas.height = window.innerHeight/2 - 20;
  document.body.appendChild(canvas);

  var data = new Uint8Array(canvas.width);
  canvasCtx.strokeStyle = 'rgb(0, 125, 0)';

  setInterval(() => {
    canvasCtx.fillStyle = "#a0a0a0";
    canvasCtx.fillRect(0, 0, canvas.width, canvas.height);

    analyser.getByteFrequencyData(data);
    canvasCtx.lineWidth = 2;
    data.forEach((y, x) => {
      y = canvas.height - (y / 128) * canvas.height / 4;
      var c = Math.floor((x*255)/canvas.width);
      canvasCtx.fillStyle = "rgb("+c+",0,"+(255-x)+")";
      canvasCtx.fillRect(x, y, 2, canvas.height - y)
    });

    analyser.getByteTimeDomainData(data);
    canvasCtx.lineWidth = 5;
    canvasCtx.beginPath();
    data.forEach((y, x) => {
      y = canvas.height - (y / 128) * canvas.height / 2;
      x ? canvasCtx.lineTo(x, y) : canvasCtx.moveTo(x, y);
    });
    canvasCtx.stroke();
    var bogus = source; // avoid GC or the whole thing stops
  }, 1000 * canvas.width / audioCtx.sampleRate);
};
<button onclick="toggle(track1)">Mute A</button>
<button onclick="toggle(track2)">Mute B</button><br>
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

Then feed the two tracks to different peer connections. This works with video mute too.
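Wiring that up might look like the sketch below. `pc1`/`pc2` and the signaling around them are assumptions, not part of the answer; `stream` and `clone` come from the getUserMedia snippet above:

```javascript
// Sketch: give each peer connection its own cloned track, so flipping
// track.enabled mutes one connection without touching the other.
// `pc1`/`pc2` are assumed RTCPeerConnections created elsewhere.
function attachClones(pc1, pc2, stream, clone) {
  var track1 = stream.getAudioTracks()[0];
  var track2 = clone.getAudioTracks()[0];
  pc1.addTrack(track1, stream);
  pc2.addTrack(track2, clone);
  return { track1: track1, track2: track2 };
}

// Mute only the first connection:
//   var tracks = attachClones(pc1, pc2, stream, clone);
//   tracks.track1.enabled = false;
```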