Seems to me there are three distinct areas to solve:
- reading the sound,
- storing in some sort of buffer,
- providing a connection for the other program to obtain the audio data from that buffer.
Purely DIY, I found some tutorials on reading the mic in Java and adapted one of them so that it packaged the incoming audio data into a buffer instead of sending it straight to the speakers or saving it to a file. For the buffer's container class I chose a `ConcurrentLinkedQueue`, as it painlessly provides an `add` method to put buffer packets in and a `poll` method for an entirely independent thread to take packets out, without requiring any explicit synchronization code to be written.
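Here's a minimal sketch of what the capture side can look like, not my actual code: it assumes 16-bit mono PCM at 44.1 kHz, a packet size of 512 samples, and a class name (`MicCapture`) invented for illustration.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicCapture implements Runnable {
    static final int PACKET_SIZE = 512;            // PCM values per packet (assumed, see text)
    final ConcurrentLinkedQueue<float[]> queue = new ConcurrentLinkedQueue<>();
    volatile boolean running = true;

    @Override
    public void run() {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false); // 16-bit mono, little-endian
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        try {
            TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(info);
            mic.open(format, PACKET_SIZE * 8);     // modest line buffer to keep latency down
            mic.start();
            byte[] bytes = new byte[PACKET_SIZE * 2]; // 2 bytes per 16-bit sample
            while (running) {
                int read = mic.read(bytes, 0, bytes.length);
                float[] packet = new float[read / 2];
                for (int i = 0; i < packet.length; i++) {
                    // assemble the little-endian 16-bit sample and normalize to [-1, 1]
                    int sample = (bytes[2 * i + 1] << 8) | (bytes[2 * i] & 0xff);
                    packet[i] = sample / 32768f;
                }
                queue.add(packet);                 // producer side: add() never blocks
            }
            mic.stop();
            mic.close();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}
```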
The reason some sort of intermediate buffer is needed is that audio processing tends to go in unpredictable fits and starts: the bursts of activity in reading/adding on one thread and polling on the other don't necessarily coincide. I'm not certain, but it seemed to me that if the two ends are connected directly, either line can slow the transmission down, which hurts throughput and greatly increases the likelihood of dropouts. With a buffer, both lines can flex somewhat without blocking each other.
I fiddled with the `ConcurrentLinkedQueue` to find a reasonable packet size; I'm not confident it's optimal, but it's currently 512 PCM values. Still, I have a functional, if slightly laggy, setup that ships the microphone data to a digital delay, which in turn outputs echoes as a track on an audio mixer that I wrote.
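The consuming end could look roughly like the following. This is only a sketch against the `MicCapture` class above; `process` is a placeholder for the delay/mixer stage, not the real thing.

```java
public class PacketConsumer {
    public static void main(String[] args) {
        MicCapture capture = new MicCapture();          // producer from the sketch above
        new Thread(capture, "mic-capture").start();

        Thread consumer = new Thread(() -> {
            while (true) {
                float[] packet = capture.queue.poll();  // null when the buffer is momentarily empty
                if (packet == null) {
                    try { Thread.sleep(1); } catch (InterruptedException e) { return; }
                    continue;
                }
                process(packet);                        // stand-in for the delay/mixer stage
            }
        }, "packet-consumer");
        consumer.setDaemon(true);
        consumer.start();
    }

    // Placeholder: the real code would push the packet through a digital delay
    // and hand the result to a mixer track.
    static void process(float[] packet) {
    }
}
```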
It seems like you could just as readily have the polling handled by some sort of socket or connection to the outside world, but connecting to the external world is beyond my personal experience.
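If someone wanted to try that, one way might be something like the following, again just a sketch under assumptions (port 7000 is arbitrary, `MicCapture` is the hypothetical producer from above, and the packets are written back out as big-endian 16-bit PCM):

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PacketServer {
    public static void main(String[] args) throws IOException {
        MicCapture capture = new MicCapture();           // producer sketch from earlier
        new Thread(capture, "mic-capture").start();

        // The polling end is a socket instead of a local thread: whatever connects
        // on port 7000 receives the buffered packets as a raw PCM stream.
        try (ServerSocket server = new ServerSocket(7000);
             Socket client = server.accept();
             DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
            while (true) {
                float[] packet = capture.queue.poll();
                if (packet == null) {
                    try { Thread.sleep(1); } catch (InterruptedException e) { return; }
                    continue;
                }
                for (float sample : packet) {
                    out.writeShort((short) (sample * 32767)); // back to 16-bit PCM, big-endian
                }
            }
        }
    }
}
```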