2
votes

I am currently converting a C++ program to CUDA, and part of the program runs a fast Fourier transform. Originally I used FFTW, but I found that I couldn't call it inside a kernel, so I rewrote that part with cuFFT, only to get the same error.

Are there any FFT libraries that will run inside a CUDA kernel?

Can I just add __device__ to the FFTW library?

I would like to avoid having to initialize or call the FFT on the host. I want a function that runs entirely on the GPU, if one exists.


3 Answers

3
votes

It sounds like you are trying to perform several FFTs at once if you want to incorporate this into a kernel. I would look into the batch processing features in cuFFT: cufftPlanMany() handles batched FFTs across many different memory layouts. What is your application? A sketch of a batched plan is shown below.
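A minimal sketch of a host-launched batched transform with cufftPlanMany(). The sizes NX and BATCH and the contiguous layout are illustrative assumptions, not from the question:

```cpp
// Batched 1D C2C FFTs with cufftPlanMany (host-launched; runs on the GPU).
#include <cufft.h>
#include <cuda_runtime.h>

int main() {
    const int NX = 256;    // length of each 1D transform (assumed)
    const int BATCH = 64;  // number of transforms per call (assumed)

    cufftComplex *d_data;
    cudaMalloc(&d_data, sizeof(cufftComplex) * NX * BATCH);
    // ... fill d_data, e.g. with cudaMemcpy from the host ...

    cufftHandle plan;
    int n[1] = { NX };
    // Contiguous layout: each signal starts NX elements after the previous one.
    cufftPlanMany(&plan, 1, n,
                  NULL, 1, NX,   // input: default embed, stride 1, distance NX
                  NULL, 1, NX,   // output: same layout
                  CUFFT_C2C, BATCH);

    // One host call executes all BATCH transforms on the device.
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```

Because all the data stays in device memory between the plan execution and your own kernels, batching usually removes the need to call anything from inside a kernel.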

2
votes

Are you sure you need to avoid a launch from the host? NVIDIA's cuFFT library is pretty good these days. Porting FFTW would be a hard task. You might have an easier time porting kissfft, but it still won't be easy.

0
votes

There is NO way to call the cuFFT API from a GPU kernel; you must call it from the host. If you want to run an FFT without a DEVICE -> HOST -> DEVICE round trip in the middle of your processing, I think the only solution is to write a kernel that performs the FFT in a device function. I'm actually doing this myself, because I need to run multiple FFTs in parallel without transferring the data back to the host. If you find or have another solution, let me know. A sketch of the device-function approach is below.
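A minimal sketch of what such a device-side FFT could look like: a hand-written radix-2 Cooley-Tukey transform, with one thread per signal. The names (device_fft, batched_fft_kernel), the power-of-two length n, and the one-thread-per-signal mapping are my assumptions for illustration, not the author's code:

```cpp
#include <cuda_runtime.h>
#include <cuComplex.h>

// In-place radix-2 Cooley-Tukey FFT, callable from device code.
// n must be a power of two.
__device__ void device_fft(cuFloatComplex *x, int n) {
    // Bit-reversal permutation.
    for (int i = 1, j = 0; i < n; ++i) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) { cuFloatComplex t = x[i]; x[i] = x[j]; x[j] = t; }
    }
    // Iterative butterfly stages.
    for (int len = 2; len <= n; len <<= 1) {
        float ang = -2.0f * 3.14159265f / len;
        for (int i = 0; i < n; i += len) {
            for (int k = 0; k < len / 2; ++k) {
                cuFloatComplex w = make_cuFloatComplex(cosf(ang * k), sinf(ang * k));
                cuFloatComplex u = x[i + k];
                cuFloatComplex v = cuCmulf(w, x[i + k + len / 2]);
                x[i + k]           = cuCaddf(u, v);
                x[i + k + len / 2] = cuCsubf(u, v);
            }
        }
    }
}

// data holds `batch` contiguous signals of length n; each thread transforms one.
__global__ void batched_fft_kernel(cuFloatComplex *data, int n, int batch) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s < batch)
        device_fft(data + s * n, n);
}
```

This keeps everything on the GPU, but a per-thread FFT like this will be much slower than cuFFT for large transforms; it only makes sense for many small, independent signals.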