I am writing a routine that calls NVIDIA's sum reduction kernel (reduction6), but when I compare the CPU and GPU results I get an error that grows as the vector size increases. Both the CPU and the GPU reductions are computed in single-precision floats:
Size: 1024 (Blocks : 1, Threads : 512)
Reduction on CPU: 508.1255188
Reduction on GPU: 508.1254883
Error: 6.0059137e-06
Size: 16384 (Blocks : 8, Threads : 1024)
Reduction on CPU: 4971.3193359
Reduction on GPU: 4971.3217773
Error: 4.9109825e-05
Size: 131072 (Blocks : 64, Threads : 1024)
Reduction on CPU: 49986.6718750
Reduction on GPU: 49986.8203125
Error: 2.9695415e-04
Size: 1048576 (Blocks : 512, Threads : 1024)
Reduction on CPU: 500003.7500000
Reduction on GPU: 500006.8125000
Error: 6.1249541e-04
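The "Error" values above correspond to the relative difference between the two sums, expressed as a percentage. A minimal sketch of that computation for the first test case (not my actual code, variable names are illustrative):

#include <cstdio>
#include <cmath>

int main()
{
    float cpu = 508.1255188f;                     // CPU result from the first case above
    float gpu = 508.1254883f;                     // GPU result from the first case above
    float err = fabsf(gpu - cpu) / cpu * 100.0f;  // relative difference in percent
    printf("Error: %.7e\n", err);                 // prints roughly 6.0059e-06
    return 0;
}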
Any idea what is causing this error? Thanks.
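To make the setup concrete, here is a minimal host-only C++ sketch of the kind of comparison described above (the sizes, data, and function names are illustrative, not my actual code): a plain sequential float accumulation like the CPU reference, a pairwise tree-order float sum that mimics the addition order of a GPU block reduction, and a double-precision sum as a reference.

#include <cstdio>
#include <cstdlib>
#include <vector>

// Recursive pairwise summation: adds the two halves of the range,
// similar to the addition tree used by a GPU block reduction.
static float pairwise_sum(const float* data, size_t n)
{
    if (n == 1)
        return data[0];
    size_t half = n / 2;
    return pairwise_sum(data, half) + pairwise_sum(data + half, n - half);
}

int main()
{
    const size_t n = 1 << 20;                  // 1048576 elements, as in the last case above
    std::vector<float> v(n);
    for (size_t i = 0; i < n; ++i)
        v[i] = static_cast<float>(rand()) / RAND_MAX;   // values in [0, 1]

    float seq = 0.0f;                          // sequential accumulation (typical CPU loop)
    double ref = 0.0;                          // double-precision reference
    for (size_t i = 0; i < n; ++i) {
        seq += v[i];
        ref += static_cast<double>(v[i]);
    }
    float tree = pairwise_sum(v.data(), n);    // tree-order accumulation (GPU-like order)

    printf("sequential float sum : %.7f\n", seq);
    printf("pairwise float sum   : %.7f\n", tree);
    printf("double reference     : %.7f\n", ref);
    return 0;
}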