Yes, both the soft- and hard-margin formulations of the standard SVM are convex optimization problems, so every local optimum is a global optimum (and the objective is strictly convex in the weight vector, so the optimal $w$ is unique). I suppose that if the problem is incredibly large, approximate methods would be cheap enough that you'd use them instead of an exact solver, and then your numerical technique might not find the global optimum, purely because it trades some accuracy for reduced search time.
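For reference, here is the standard soft-margin primal; the objective is a convex quadratic and the constraints are affine, so it's a convex QP (the hard-margin version is the special case where all the slacks $\xi_i$ are forced to zero):

$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i \quad \text{subject to}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,n,$$

where $C > 0$ controls the trade-off between margin width and slack.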
The typical approach to these problems is sequential minimal optimization (SMO): hold most of the dual variables fixed, optimize analytically over a small subset (in SMO, a pair of them), then repeat with different subsets until the objective can no longer be improved. Given that, I find it implausible that anyone would solve these problems in a way that doesn't yield the global optimum.
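To make the SMO idea concrete, here is a minimal sketch for a linear kernel, in the spirit of the "simplified SMO" found in teaching materials. Take it as an illustration under assumptions, not production code: the function name, the naive random choice of the second variable, and the loose stopping rule are mine; real solvers (e.g. libsvm) use much smarter working-set selection and caching.

```python
import numpy as np

def smo_sketch(X, y, C=1.0, tol=1e-3, max_passes=5):
    """Maximize the SVM dual by repeatedly solving 2-variable subproblems.

    X: (n, d) inputs; y: (n,) labels in {-1, +1}. Linear kernel only.
    Returns the dual variables alpha and the bias b.
    """
    n = X.shape[0]
    K = X @ X.T                       # Gram matrix for the linear kernel
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            E_i = (alpha * y) @ K[:, i] + b - y[i]   # prediction error on example i
            # Only optimize alpha_i if it violates the KKT conditions (within tol)
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = np.random.choice([k for k in range(n) if k != i])  # naive pair choice
                E_j = (alpha * y) @ K[:, j] + b - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                # Box [L, H] for alpha_j that keeps sum(alpha * y) == 0 feasible
                if y[i] != y[j]:
                    L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
                else:
                    L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]  # curvature along the feasible line
                if eta >= 0:
                    continue
                # Unconstrained optimum of the 2-variable subproblem, clipped to the box
                alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                alpha[i] += y[i] * y[j] * (a_j_old - alpha[j])
                # Update the bias using whichever multiplier is strictly inside (0, C)
                b1 = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                           - y[j] * (alpha[j] - a_j_old) * K[i, j]
                b2 = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                           - y[j] * (alpha[j] - a_j_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b
```

Because each two-variable subproblem is solved exactly subject to the box constraints $0 \le \alpha_i \le C$ and the equality constraint $\sum_i \alpha_i y_i = 0$, every step can only improve the dual objective, which is why the procedure converges to the global optimum of the convex dual. For the linear kernel you can recover the weights afterwards as $w = \sum_i \alpha_i y_i x_i$.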
Of course, the global optimum you find might not actually be appropriate for your data; that depends on how well your model, noisy class labels, etc. represent the data-generating process. So solving the optimization problem exactly doesn't guarantee you've found the absolute right classifier or anything.
Here are some lecture notes I found about this in a cursory search: (link)
Here is a more direct link regarding the convexity claims: (link)