0
votes

I want to play WAV files through an NAudio BufferedWaveProvider that has this format: WaveFormat.CreateIeeeFloatWaveFormat(8000, 1);

The format of my wave files is 16-bit PCM, 44.1 kHz, 2 channels. I am reading bytes out of the file and adding them as samples to the BufferedWaveProvider. With the format I want to use (which already exists in the application) there is no audio at all. With a standard format (new WaveFormat()), the audio works just fine. Is it possible to manipulate the wave file data to play in the requested format?

bufferedWaveProvider = new BufferedWaveProvider(WaveFormat.CreateIeeeFloatWaveFormat(8000, 1));
player = new WaveOut();
player.DeviceNumber = -1;
player.Init(bufferedWaveProvider);
player.Play();
using (WaveFileReader reader = new WaveFileReader("filePath"))
{
    int end = (int)reader.Length;
    byte[] buffer = new byte[336];
    while (reader.Position < end)
    {
        int bytesRequired = (int)(end - reader.Position);
        if (bytesRequired > 0)
        {
            int bytesToRead = Math.Min(bytesRequired, buffer.Length);
            int bytesRead = reader.Read(buffer, 0, bytesToRead);
            if (bytesRead > 0)
            {
                bufferedWaveProvider.AddSamples(buffer, 0, bytesRead);
            }
        }
    }
}

I also have a side question. While I was figuring out how to stream the wave file data, I had to experiment with the byte buffer size, because if it's too small the audio is choppy and if it's too large the buffer overflows. Through trial and error I found 336 to be the best buffer size for a wave format of 16 bits, 44100 sample rate, 2 channels. How are you supposed to calculate the buffer size so I can automatically know what size works for any given format?
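One common approach (not from the question itself, so treat it as a sketch) is to size the buffer as a fixed duration of audio rather than a fixed byte count, using the format's AverageBytesPerSecond and rounding down to a whole number of frames (BlockAlign bytes each). The helper name and the 50 ms figure below are illustrative assumptions:

```csharp
using System;
using NAudio.Wave;

static int BufferSizeForDuration(WaveFormat format, int milliseconds)
{
    // Bytes needed to hold the requested duration of audio at this format.
    int bytes = format.AverageBytesPerSecond * milliseconds / 1000;
    // Round down so the buffer always holds whole sample frames.
    return bytes - (bytes % format.BlockAlign);
}

// 16-bit PCM, 44100 Hz, stereo: 176400 bytes/s, BlockAlign = 4.
var fmt = new WaveFormat(44100, 16, 2);
Console.WriteLine(BufferSizeForDuration(fmt, 50)); // 8820 bytes for 50 ms
```

This way the buffer size scales automatically with any format, and you can tune the duration (not the byte count) to trade latency against choppiness.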

1
why are you using BufferedWaveProvider? Why not just play the WAV file directly? – Mark Heath
Because it's going to need to be streamed to several clients. It groups headsets across a network in any configuration and broadcasts audio, either from a headset or a wave file, to any headset in the group. So if an operator wants headset A to hear something, he can create a channel, include both headsets, and stream anything he wants. It's drag and drop, so if someone drops a headset into a group in the middle of a transmission, it will pick up in the middle of the data stream. – dataContexual

1 Answer

1
votes

You could downmix and resample it to your desired format. The prolific Mark Heath has implemented a fully managed resampler.

var reader = new AudioFileReader("input.wav"); //stereo
var downmix = reader.ToMono(); //downmix to single channel
var resampler = new WdlResamplingSampleProvider(downmix, 8000); //resample to 8000 Hz
var fmt = resampler.WaveFormat;

Console.WriteLine($"{fmt.SampleRate} Hz, {fmt.Channels} channel, {fmt.Encoding}");

Output:

8000 Hz, 1 channel, IeeeFloat

You can read from the resampler, since it is an ISampleProvider, and add the samples to your buffer or output as necessary.
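For example, one way to move the resampled floats into your IEEE-float BufferedWaveProvider is to read into a float[] and block-copy it into a byte[] (a sketch: resampler is the WdlResamplingSampleProvider from above, and the buffer sizes are illustrative):

```csharp
using System;
using NAudio.Wave;

var bufferedWaveProvider = new BufferedWaveProvider(
    WaveFormat.CreateIeeeFloatWaveFormat(8000, 1));

var floats = new float[800];              // 100 ms of mono audio at 8000 Hz
var bytes = new byte[floats.Length * 4];  // the same samples as raw bytes

int samplesRead;
while ((samplesRead = resampler.Read(floats, 0, floats.Length)) > 0)
{
    // Each float sample is 4 bytes; copy only the samples actually read.
    Buffer.BlockCopy(floats, 0, bytes, 0, samplesRead * 4);
    bufferedWaveProvider.AddSamples(bytes, 0, samplesRead * 4);
}
```

Because the resampler's output format matches the BufferedWaveProvider's format exactly (8000 Hz, mono, IeeeFloat), the bytes can be copied straight across with no further conversion.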