I don't have any strong feelings about CUDA vs. OpenCL; presumably OpenCL is the long-term future, just by dint of being an open standard.
But when it comes to current-day NVIDIA vs. ATI cards for GPGPU (not graphics performance, but GPGPU), I do have a strong opinion. And to lead into it, I'll point out that on the current Top 500 list of big clusters, NVIDIA leads AMD 4 systems to 1, and on gpgpu.org, search results (papers, links to online resources, etc.) for NVIDIA outnumber results for AMD 6:1.
A huge part of this difference is the amount of online information available. Check out the NVIDIA CUDA Zone versus AMD's GPGPU Developer Central. The amount of material there for developers starting out doesn't even come close to comparing. On NVIDIA's site you'll find tons of papers - and contributed code - from people probably working on problems like yours. You'll find tons of online classes, from NVIDIA and elsewhere, and very useful documents like the developers' best practices guide. The availability of free devel tools - the profiler, cuda-gdb, etc. - overwhelmingly tilts things NVIDIA's way.
(Editor: the information in this paragraph is no longer accurate.) And some of the difference is also hardware. AMD's cards have better specs in terms of peak flops, but to get a significant fraction of that, you not only have to break your problem up across many completely independent stream processors, you also have to vectorize each work item. Given that GPGPUing one's code is hard enough, that extra architectural complexity is enough to make or break some projects.
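To make the vectorization point concrete, here is a minimal sketch in OpenCL C (the kernel names and the saxpy operation are just illustrative, not from any particular codebase): a scalar kernel, which maps naturally onto NVIDIA's scalar stream processors, next to an explicitly float4-vectorized version, which is roughly the kind of rewrite you needed on top of the parallel decomposition to keep the VLIW lanes on AMD cards of that generation busy.

    /* Scalar saxpy: one element per work item.
     * This style maps well onto NVIDIA's scalar stream processors. */
    __kernel void saxpy_scalar(const float a,
                               __global const float *x,
                               __global float *y,
                               const int n)
    {
        int i = get_global_id(0);
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    /* Explicitly vectorized saxpy: four elements per work item via float4.
     * Roughly the extra step needed to fill the VLIW/SIMD lanes on AMD
     * cards of that era and approach their peak-flop numbers.
     * (Assumes the array length is a multiple of 4, for brevity.) */
    __kernel void saxpy_vec4(const float a,
                             __global const float4 *x,
                             __global float4 *y,
                             const int n4)   /* n4 = n / 4 */
    {
        int i = get_global_id(0);
        if (i < n4)
            y[i] = a * x[i] + y[i];
    }

The point isn't this particular kernel; it's that the second form is the extra work the AMD architecture asked of you beyond simply splitting the problem across work items.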
And the result of all of this is that the NVIDIA user community continues to grow. Of the three or four groups I know thinking of building GPU clusters, none of them are seriously considering AMD cards. And that will mean still more groups writing papers, contributing code, etc on the NVIDIA side.
I'm not an NVIDIA shill; I wish it weren't this way, and that there were two (or more!) equally compelling GPGPU platforms. Competition is good. Maybe AMD will step up its game very soon - and the upcoming Fusion products look very compelling. But in giving someone advice about which cards to buy today, and where to put their effort right now, I can't in good conscience say that both development environments are equally good.
Edited to add: I guess the above is a little elliptical in terms of answering the original question, so let me make it a bit more explicit. In an ideal world with infinite time available, the performance you can get from a piece of hardware depends only on the underlying hardware and the capabilities of the programming language; in reality, the performance you can get in a fixed amount of invested time also depends strongly on devel tools and existing community code bases (e.g., publicly available libraries). Those considerations all point strongly to NVIDIA.
(Editor: the information in this paragraph is no longer accurate.) In terms of hardware, the requirement for vectorization within SIMD units on the AMD cards also makes achieving paper performance even harder than it is with NVIDIA hardware.