Some excellent answers on this (admittedly old) question, but I feel I have to throw my two cents in.
> There is no way that I can somehow add another option to this type later on without modifying this declaration. So what are the benefits of this system? It seems like the OO way would be much more extensible.
The answer to this, I believe, is that the sort of extensibility that open sums give you is not always a plus, and that, correspondingly, the fact that OO forces this on you is a weakness.
The advantage of closed unions is their exhaustiveness: if you have fixed all the alternatives at compilation time, then you can be certain that there will be no unforeseen cases that your code cannot handle. This is a valuable property in many problem domains, for example, in abstract syntax trees for languages. If you're writing a compiler, the expressions of the language fall into a predefined, closed set of subcases—you do not want people to be able to add new subcases at runtime that your compiler doesn't understand!
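To make that concrete, here is a minimal sketch in OCaml (the type and names are illustrative, not from the question): a closed variant type for expressions, where the compiler can check that every case is handled.

```ocaml
(* A closed sum: the full set of alternatives is fixed right here. *)
type expr =
  | Lit of int
  | Add of expr * expr
  | Mul of expr * expr

(* Exhaustive pattern match: the compiler verifies all cases are covered. *)
let rec eval (e : expr) : int =
  match e with
  | Lit n -> n
  | Add (l, r) -> eval l + eval r
  | Mul (l, r) -> eval l * eval r
```

If you later add, say, a `Neg of expr` constructor, the compiler flags every `match` that doesn't handle it. That is exactly the guarantee an open class hierarchy cannot give you.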
In fact, compiler ASTs are one of the classic Gang of Four motivating examples for the Visitor Pattern, which is the OOP counterpart to closed sums and exhaustive pattern matching. It is instructive to reflect on the fact that OO programmers ended up inventing a pattern to recover closed sums.
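For comparison, here is a rough sketch of the Visitor shape using OCaml's own object layer (again, the names are mine, purely illustrative). The visitor interface fixes one method per case, which is how OO code smuggles a closed sum back in:

```ocaml
(* The visitor interface enumerates the cases; adding a case means
   changing this interface, so the sum is closed after all. *)
class type ['r] visitor = object
  method visit_lit : int -> 'r
  method visit_add : expr -> expr -> 'r
end
and expr = object
  method accept : 'r. 'r visitor -> 'r
end

let lit n : expr = object
  method accept : 'r. 'r visitor -> 'r = fun v -> v#visit_lit n
end

let add l r : expr = object
  method accept : 'r. 'r visitor -> 'r = fun v -> v#visit_add l r
end

(* An evaluator is just one visitor instance. *)
let rec eval (e : expr) : int =
  e#accept
    (object
       method visit_lit n = n
       method visit_add l r = eval l + eval r
     end)

let () = print_int (eval (add (lit 1) (lit 2)))  (* prints 3 *)
```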
Likewise, procedural and functional programmers have invented patterns to obtain the effect of open sums. The simplest one is the "record of functions" encoding, which corresponds to OO interfaces. A record of functions is, effectively, a dispatch table. (Note that C programmers have been using this technique for ages!) The trick is that there is very often a large number of possible functions of a given type, often infinitely many, so a record type whose fields are functions can easily support an astronomically large or infinite set of alternatives. What's more, since records are created at runtime, and their creation can depend on runtime conditions, the alternatives are late bound.
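A minimal sketch of that encoding, using a hypothetical `shape` example (my names, not from the question):

```ocaml
(* A record of functions: effectively an interface/dispatch table. *)
type shape = {
  name : string;
  area : unit -> float;
}

let circle r = { name = "circle"; area = (fun () -> Float.pi *. r *. r) }
let square s = { name = "square"; area = (fun () -> s *. s) }

(* A brand-new "subcase" manufactured at runtime from an existing one:
   scaling lengths by [factor] scales the area by [factor squared]. *)
let scaled (factor : float) (s : shape) =
  { s with area = (fun () -> factor *. factor *. s.area ()) }
```

Nothing constrains how many distinct `shape` values exist, and `scaled` builds new ones on the fly from runtime data: that is the late binding described above.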
The final comment I'd make is that, in my mind, OO has led too many people to believe that extensibility is synonymous with late binding (e.g., the ability to add new subcases to a type at runtime), when this just isn't true in general. Late binding is one technique for extensibility. Another is composition: building complex objects out of a fixed vocabulary of building blocks and rules for assembling them. The vocabulary and rules are ideally small, but designed so that they have rich interactions, allowing you to build very complex things.
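As a small illustration of composition (hypothetical predicate combinators, again in OCaml): the vocabulary below is fixed at compile time, yet it combines into arbitrarily complex predicates, with no late binding anywhere.

```ocaml
(* A tiny fixed vocabulary of building blocks and combining rules. *)
type 'a pred = 'a -> bool

let both (p : 'a pred) (q : 'a pred) : 'a pred = fun x -> p x && q x
let either (p : 'a pred) (q : 'a pred) : 'a pred = fun x -> p x || q x
let neg (p : 'a pred) : 'a pred = fun x -> not (p x)

(* Small pieces compose into something none of them expresses alone. *)
let even n = n mod 2 = 0
let positive n = n > 0
let interesting : int pred = both positive (neg even)
```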
Functional programming, and the statically typed ML/Haskell flavors in particular, has long emphasized composition over late binding. But in reality, both kinds of technique exist in both paradigms, and should be in the toolkit of any good programmer.
It's also worth noting that programming languages themselves are fundamentally examples of composition. A programming language has a finite, hopefully simple syntax that lets you combine its elements to write any possible program. (This, in fact, ties back to the compiler/Visitor Pattern example above, and is part of what motivates it.)