I don't think you can classify a matrix that is half zeros as "sparse": the timings you found are reasonable (actually, the sparse algorithm is behaving pretty well!).
Sparse algorithms are efficient only for matrices in which most of the elements are zero (for example, matrices arising from finite element problems).
This holds true for CPUs, not only for GPUs: there is a significant overhead in treating a matrix as sparse, and it only becomes convenient to use sparse algorithms when... most of the elements are zeros (typically ten or fewer non-zeros per row, with matrix orders in the thousands to hundreds of thousands, or even millions).
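To get a feel for the break-even point, here is a back-of-the-envelope storage comparison between dense storage and the common CSR (compressed sparse row) format, which keeps one value plus one column index per non-zero and one row pointer per row (the matrix order below is hypothetical, and I'm assuming 8-byte doubles and 4-byte integer indices):

```cpp
#include <cstdio>

// Rough storage comparison: dense vs. CSR (compressed sparse row).
// Assumes 8-byte doubles for values and 4-byte ints for indices.
int main() {
    const long long n = 10000;                    // matrix order (hypothetical)
    const long long dense_bytes = n * n * 8;      // every entry stored

    // Case 1: "half zeros" -> nnz = n*n/2
    long long nnz = n * n / 2;
    long long csr_bytes = nnz * (8 + 4) + (n + 1) * 4;  // values + col indices + row pointers
    std::printf("half zeros : dense %lld MB, CSR %lld MB\n",
                dense_bytes >> 20, csr_bytes >> 20);

    // Case 2: truly sparse, ~10 non-zeros per row
    nnz = n * 10;
    csr_bytes = nnz * (8 + 4) + (n + 1) * 4;
    std::printf("10 nnz/row : dense %lld MB, CSR %lld MB\n",
                dense_bytes >> 20, csr_bytes >> 20);
    return 0;
}
```

With half of the entries non-zero, CSR storage is still on the same order as the dense storage (about three quarters of it) while adding indirect accesses on top; at ~10 non-zeros per row it is orders of magnitude smaller.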
There are other matrix structures with efficient solution algorithms (e.g. banded matrices) that you can try if they apply to your problem. I don't know whether they have been ported to cuBLAS, though.
About the overheads
Dense linear algebra algorithms can perform optimally because processors have been designed to solve such problems as efficiently as possible. Consider the DGEMM operation (matrix-matrix multiply): it is an operation that lets you use the processor at >95% of its theoretical peak floating-point performance for large matrices (i.e., matrices that don't fit in any cache of the system). How? Mainly through the mechanisms below (a simplified sketch follows the list):
- prefetching
- optimal cache usage
- vectorization (SSE, AVX)
- pipelining
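As a (very simplified) illustration, here is a cache-blocked matrix multiply: every access goes to contiguous, predictable addresses, so the prefetcher, the caches and the vectorizer can all do their job. This is only a sketch of the idea, not a real BLAS kernel:

```cpp
// Minimal cache-blocked C = C + A*B for square n x n matrices in row-major order.
// Not a real BLAS kernel: it just shows the regular, unit-stride access pattern
// that lets prefetching, caching and vectorization work at full speed.
void blocked_dgemm(int n, const double* A, const double* B, double* C) {
    const int BS = 64;  // block size: a tuning parameter chosen so the blocks stay cache-resident
    for (int ii = 0; ii < n; ii += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int jj = 0; jj < n; jj += BS)
                // Work on one cache-resident block at a time.
                for (int i = ii; i < ii + BS && i < n; ++i)
                    for (int k = kk; k < kk + BS && k < n; ++k) {
                        const double a = A[i * n + k];        // reused across the whole j loop
                        for (int j = jj; j < jj + BS && j < n; ++j)
                            C[i * n + j] += a * B[k * n + j]; // contiguous, vectorizable stream
                    }
}
```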
In a sparse LA algorithm only the non-zero elements and their corresponding indices are stored in memory, so memory accesses are indirect. As a consequence the sparse algorithm cannot exploit the hardware at the same level of optimization: I don't know specific figures in this context, but 10 to 20% of peak wouldn't be strange.
The gain, of course, is that operations on zeros (on non-stored elements) are simply not performed, resulting in orders of magnitude fewer operations and much less required storage.
There is further overhead from integer logic and conditionals; modern CPUs are pretty good at overlapping integer and FP operations and at speculative execution, but the indirection and the conditionals can still prevent vectorization, which is yet another overhead with respect to the dense case.
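For contrast, here is a minimal CSR matrix-vector product (the format and variable names are just the usual convention, nothing specific to your code). Every access to x goes through a column index loaded from memory, and the inner loop has a data-dependent trip count:

```cpp
// y = A*x with A stored in CSR format:
//   val[k]     : k-th non-zero value
//   col_idx[k] : column of the k-th non-zero
//   row_ptr[i] : start of row i in val/col_idx (row_ptr has n+1 entries)
void csr_spmv(int n, const int* row_ptr, const int* col_idx,
              const double* val, const double* x, double* y) {
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        // Data-dependent loop bounds and an indirect (gather) access to x:
        // this is what defeats prefetching and straightforward vectorization.
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```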
What about GPUs?
Dense LA algorithms are an optimal fit for GPUs just as they are for CPUs: in this case you have optimal usage of (see the kernel sketch after the list):
- coalescing
- shared memory
- memory access patterns
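The textbook tiled matrix-multiply kernel shows all three at once; this is a bare-bones sketch assuming the matrix order n is a multiple of the tile size and a TILE x TILE thread block (production kernels such as those in cuBLAS are far more elaborate):

```cuda
#define TILE 16

// C = A*B for n x n row-major matrices; n assumed to be a multiple of TILE,
// launched with dim3(TILE, TILE) threads per block and (n/TILE, n/TILE) blocks.
// Each block loads one tile of A and one tile of B into shared memory:
// the global loads are coalesced (consecutive threads read consecutive addresses)
// and each loaded element is reused TILE times from shared memory.
__global__ void dgemm_tiled(int n, const double* A, const double* B, double* C) {
    __shared__ double As[TILE][TILE];
    __shared__ double Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    double sum = 0.0;

    for (int t = 0; t < n / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];   // coalesced load
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col]; // coalesced load
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];  // data reused from shared memory
        __syncthreads();
    }
    C[row * n + col] = sum;
}
```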
Again, the indirect access to matrix elements in sparse LA algorithms prevents exploiting the same level of optimization.
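Compare with the simplest possible CUDA sparse kernel, one thread per CSR row (again a sketch, not what cuSPARSE actually does): the reads of x are gathers at addresses dictated by col_idx, so they cannot be coalesced, and threads of a warp may get rows of very different lengths, so part of the warp sits idle:

```cuda
// y = A*x, CSR format, one thread per row (naive sketch).
__global__ void csr_spmv_naive(int n, const int* row_ptr, const int* col_idx,
                               const double* val, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            // x[col_idx[k]] is an indirect, scattered read: no coalescing possible.
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
        // Rows of different lengths -> warp divergence and load imbalance.
    }
}
```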
References
I can't remember which library I used when I ran into sparse problems... I think it was PSBLAS: http://people.uniroma2.it/salvatore.filippone/psblas/
But here you will find an overwhelming number of them:
http://www.netlib.org/utk/people/JackDongarra/la-sw.html