I have a single source and a set of listeners for an audio stream. If I stream using P2P WebRTC, the naive approach would be to open N-1 connections from the speaker, which is okay up to N < 3 or so; beyond that, P2P is expensive in upload bandwidth for the source. I have outlined two approaches and am trying to find out which one is best suited:
- Have a centralised server relaying and recording the audio. (-latency +cost)
- Instead of opening N-1 connections from the source, have some of the terminal nodes act as non-terminal nodes, each opening K < N-1 connections to relay and record the transmission. (+latency -cost, see the back-of-envelope sketch below)
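To get a feel for the trade-off, here is a back-of-envelope sketch in C++ (no WebRTC involved; the fan-out K and the per-hop latency figure are assumptions purely for illustration). It compares the source's upload count in the naive case with the depth of the relay tree in approach (2), i.e. how many extra relay hops the farthest listener sits behind:

```cpp
#include <iostream>

// Rough model of approach (2): the source feeds K peers, each relaying
// peer feeds K more peers, and so on; every hop adds relay latency.
// Returns the tree depth needed so that every listener is reached.
int relay_tree_depth(long long listeners, int k) {
    long long covered = 0;  // listeners reachable so far
    long long level = 1;    // nodes at the current depth (just the source)
    int depth = 0;
    while (covered < listeners) {
        level *= k;         // k^depth nodes at the next level down
        covered += level;
        ++depth;
    }
    return depth;
}

int main() {
    const int k = 4;                    // assumed fan-out per node
    const double hop_latency_ms = 40.0; // assumed per-hop relay delay
    for (long long n : {10LL, 100LL, 1000LL, 10000LL}) {
        const int depth = relay_tree_depth(n, k);
        std::cout << "listeners=" << n
                  << "  naive source uploads=" << n   // one per listener
                  << "  relay tree depth=" << depth
                  << "  added latency ~" << depth * hop_latency_ms << " ms\n";
    }
}
```

The depth, and hence the added latency, grows only logarithmically with the number of listeners, while the source's upload stays fixed at K streams instead of one per listener.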
I am very new to WebRTC. I am planning to build my HTTP side in C++. If I take approach (2) I add no extra server-side cost for audio streaming, but it is not straightforward. I certainly don't want to reinvent the wheel if it already exists and spins well, yet I don't know what is already available or what the risks of this approach are.
If I take approach (1), what relaying server should I use? It should integrate tightly with my business logic, and this is the part I am having a hard time figuring out. With WebSockets I find this easy because everyone is in the same session and all contextual information is accessible. Here, however, I somehow need to map user accounts to streams and apply business logic to them, e.g. lowering the volume for certain users. A rough sketch of what I mean is below.
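To make the question concrete, here is a minimal sketch of the mapping I think I need on the relay side. Everything in it is hypothetical (`UserPolicy`, `apply_policy`, the idea of keying on a stream id agreed during signaling), not the API of any real media server:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical per-user policy resolved from my account database at
// signaling time (when the client presents its auth token with the offer).
struct UserPolicy {
    std::string user_id;
    float volume = 1.0f;   // e.g. < 1.0 for users whose volume I lower
    bool  muted  = false;
};

// Maps WebRTC stream/track ids to the account that owns them. This is
// the glue that a WebSocket session would otherwise give me for free.
std::unordered_map<std::string, UserPolicy> policy_by_stream;

// Apply the per-user gain to one decoded 16-bit PCM frame before the
// relay re-encodes and forwards it.
void apply_policy(const std::string& stream_id, std::vector<int16_t>& pcm) {
    auto it = policy_by_stream.find(stream_id);
    if (it == policy_by_stream.end()) return;   // unknown stream: untouched
    const UserPolicy& p = it->second;
    const float gain = p.muted ? 0.0f : p.volume;
    for (int16_t& s : pcm) {
        float v = s * gain;
        v = std::clamp(v, -32768.0f, 32767.0f); // avoid integer overflow
        s = static_cast<int16_t>(v);
    }
}
```

One thing I notice while sketching this: adjusting volume server-side means the relay has to decode and re-encode the audio rather than just forward packets, which affects both the latency and the cost columns above.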
I also need to broadcast data in the same stream.
I can't let anyone who doesn't use my application use my TURN servers, so I need some kind of token/auth system for that. How can I do that?
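The closest thing I have found is the ephemeral-credentials scheme (the "TURN REST API" that coturn supports via its `use-auth-secret` / `static-auth-secret` options): my app server and the TURN server share a secret, and for each authenticated user the app server mints a short-lived username of the form `expiry-timestamp:user-id` plus a password equal to base64(HMAC-SHA1(secret, username)). A minimal sketch with OpenSSL (the secret and TTL are placeholder values):

```cpp
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <ctime>
#include <iostream>
#include <string>

// Base64-encode a byte buffer (coturn expects the HMAC digest
// base64-encoded as the TURN password).
static std::string base64(const unsigned char* data, size_t len) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (size_t i = 0; i < len; i += 3) {
        unsigned v = data[i] << 16;
        if (i + 1 < len) v |= data[i + 1] << 8;
        if (i + 2 < len) v |= data[i + 2];
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        out += (i + 2 < len) ? tbl[v & 63] : '=';
    }
    return out;
}

struct TurnCredentials { std::string username, password; };

// Mint ephemeral TURN credentials for an already-authenticated user.
// `secret` must match coturn's --static-auth-secret; `ttl_seconds` is
// how long the credential stays valid (both are placeholders here).
TurnCredentials make_turn_credentials(const std::string& user_id,
                                      const std::string& secret,
                                      long ttl_seconds = 3600) {
    // Username format from the TURN REST API scheme: "expiry:userid".
    std::string username =
        std::to_string(std::time(nullptr) + ttl_seconds) + ":" + user_id;

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(),
         secret.data(), static_cast<int>(secret.size()),
         reinterpret_cast<const unsigned char*>(username.data()),
         username.size(), digest, &digest_len);

    return {username, base64(digest, digest_len)};
}

int main() {
    // Hypothetical values; the secret would come from config, not source.
    auto cred = make_turn_credentials("user42", "my-shared-secret");
    std::cout << "username: " << cred.username << "\n"
              << "password: " << cred.password << "\n";
}
```

The client would then put these into the username/credential fields of its `iceServers` entry; the TURN server recomputes the HMAC and refuses allocations with missing or expired credentials, so only users my application has vouched for can use the relay. Is this the intended approach, or is there something more standard?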