
I need to create both a virtual webcam and a virtual microphone on an Ubuntu 16.04 machine, for use in a web application that uses WebRTC through my web browser.

I need to feed video and audio to these two virtual devices from an IP camera (RTSP stream). Playing the RTSP stream directly in VLC works fine with both video and audio.

For this, I have created /dev/video1 with video4linux2, and I am able to feed the IP camera to it:

ffmpeg -i rtsp://ip_address:554/streaming/channels/101/ -f v4l2 /dev/video1
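For reference, the virtual video device itself is usually created beforehand with the v4l2loopback kernel module. A minimal sketch, assuming device number 1 and an arbitrary label:

# Create /dev/video1 as a loopback device; exclusive_caps=1 helps
# browser/WebRTC applications detect it as a capture device.
sudo modprobe v4l2loopback video_nr=1 card_label="Virtual Webcam" exclusive_caps=1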

In VLC, I can select /dev/video1 as a video device, but the only audio device available is "hw:0,0", which is my built-in microphone.

How do I properly feed such an RTSP stream to both a virtual webcam and a virtual microphone?


1 Answer


You need some sort of loopback audio driver. If you want to do this at the ALSA level, you can use the snd-aloop module. https://www.alsa-project.org/main/index.php/Matrix:Module-aloop#aloop_driver
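A minimal sketch of the ALSA route, assuming the module's default options; audio played into one side of the loopback card can be captured from the other side:

# Load the ALSA loopback driver; audio written to hw:Loopback,0
# becomes available for capture on hw:Loopback,1 (and vice versa).
sudo modprobe snd-aloop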

If your intended destination supports PulseAudio, you can add a null sink and use its monitor source to record from it.

pactl load-module module-null-sink sink_name=video1

The monitor source is then named video1.monitor.
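You can verify the exact source name afterwards (a quick check, not specific to this setup):

# List PulseAudio sources; the null sink's monitor should appear as video1.monitor
pactl list sources short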

https://wiki.archlinux.org/index.php/PulseAudio/Examples

Then you need to add an additional output to your FFmpeg command. With FFmpeg's pulse output, the "file name" is an arbitrary stream label and the target sink is selected with the device option, so it might be as simple as appending something like -f pulse -device video1 stream_name to the end of what you have now.
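Putting it together, a hedged sketch of the full command, assuming FFmpeg was built with PulseAudio support; the RTSP URL is the one from the question and "ip-camera-audio" is just a placeholder stream label:

# Send the camera's video to the loopback webcam and its audio to the null sink.
ffmpeg -i rtsp://ip_address:554/streaming/channels/101/ \
  -f v4l2 /dev/video1 \
  -f pulse -device video1 "ip-camera-audio"

The v4l2 output only accepts video and the pulse output only accepts audio, so FFmpeg's default stream selection routes each stream to the right place. The WebRTC application can then use video1.monitor as its microphone.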