From the Mozilla (MDN) documentation: https://developer.mozilla.org/en-US/docs/Web/API/Media_Streams_API
"A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks. Each MediaStreamTrack may have one or more channels. The channel represents the smallest unit of a media stream, such as an audio signal associated with a given speaker, like left or right in a stereo audio track."
That clarifies what a channel is.
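For reference, this is roughly how I have been inspecting tracks and their channels in the browser (a minimal sketch; `channelCount` comes from `MediaTrackSettings` and is only reported for audio tracks):

```typescript
// Sketch: enumerate the tracks of a MediaStream and the channel count of
// each audio track. Assumes a browser environment with getUserMedia.
async function inspectStream(): Promise<void> {
  const stream: MediaStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  for (const track of stream.getTracks()) {
    const settings = track.getSettings();
    // For an audio track, settings.channelCount is the number of channels
    // (e.g. 2 for a stereo microphone); video tracks have no channelCount.
    console.log(track.kind, track.id, settings.channelCount ?? "n/a");
  }
}
```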
Several recent RFCs (e.g., RFC 8108) refer to the need to send multiple streams in one RTP session, with each stream identified by its own SSRC at the RTP level. The Unified Plan draft likewise always refers to the stream as the lowest-level unit (not tracks or channels). RFC 3550, the base RTP specification, makes no mention of channels.
Is the RTP stream referred to in these RFCs, which treat the stream as the lowest-level source of media, the same thing as a channel as that term is used in WebRTC and in the quote above? Is there a one-to-one mapping between the channels of a track (WebRTC) and RTP streams, each with its own SSRC?
A webcam, for example, generates a media stream that can contain an audio track and a video track. Each track is transported in RTP packets using a separate SSRC, resulting in two SSRCs. Is that correct? Now, what about a stereo webcam (or some such device with, let's say, two microphones, i.e., two channels)? Will that generate three RTP streams with three different unique SSRCs, one for video and two for audio?
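To make the question concrete, here is a sketch of how I would try to count the SSRCs actually in use on a connection, by looking at the outbound-rtp stats reports (`pc` is assumed to be an already-negotiated RTCPeerConnection; each "outbound-rtp" report should correspond to one RTP stream):

```typescript
// Sketch: list the SSRCs a peer connection is sending. Each "outbound-rtp"
// stats report corresponds to one RTP stream and carries its SSRC.
async function listOutboundSsrcs(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "outbound-rtp") {
      console.log(`kind=${report.kind} ssrc=${report.ssrc}`);
    }
  });
}
```

If the channel-to-SSRC mapping were one-to-one, I would expect this to print three SSRCs for the stereo-webcam case above; I have not been able to confirm that from the specs.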
Is there a single RTP session for the five-tuple connection established after a successful ICE candidate check, or can there be multiple RTP sessions over the same IP/port/UDP association between peers?
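My (possibly wrong) working assumption is that with BUNDLE negotiated, everything is multiplexed over the single transport that ICE established; in code I request that like this (`bundlePolicy` is a standard RTCConfiguration member):

```typescript
// Sketch: ask the browser to bundle all media over one transport
// (i.e., the single 5-tuple that ICE established). "max-bundle" is a
// standard RTCBundlePolicy value; whether this collapses everything
// into one RTP session is exactly what I am unsure about.
const pc = new RTCPeerConnection({ bundlePolicy: "max-bundle" });
```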
Any document that clarifies this would be appreciated.