Well, I like MONEY! It's a byte cheaper than DECIMAL, and the computations perform quicker because (under the covers) addition and subtraction operations are essentially integer operations. @SQLMenace's example, which is a great warning for the unaware, could equally be applied to INTs, where the result would be zero. But that's no reason not to use integers, where appropriate.
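As a quick illustration (a minimal sketch; the variable names are just for this example), running the same calculation through INT variables really does come out as zero:
declare @i1 int, @i2 int, @i3 int, @i4 int
select @i1 = 100, @i2 = 339, @i3 = 10000
set @i4 = @i1/@i2*@i3   -- integer division: 100/339 = 0, so the whole expression is 0
select @i4              -- 0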
So it's perfectly 'safe' and appropriate to use MONEY when what you are dealing with is money, and to use it according to the mathematical rules it follows (the same as INT).
Would it have been better if SQL Server promoted division and multiplication of MONEYs into DECIMALs (or FLOATs?)? Possibly, but they didn't choose to do this; nor did they choose to promote INTs to FLOATs when dividing them.
MONEY has no precision issue; that DECIMALs get a larger intermediate type used during calculations is just a 'feature' of using that type (and I'm not actually sure how far that 'feature' extends).
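To see that 'feature', here is a small sketch: dividing two DECIMAL(19,4) values directly, without forcing the intermediate back into a DECIMAL(19,4) variable, yields a result with far more than four decimal places, because SQL Server gives the division result a wider precision and scale:
declare @x decimal(19,4), @y decimal(19,4)
select @x = 100, @y = 339
select @x/@y    -- roughly 0.29498525..., not 0.2950: the division result type carries extra scale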
To answer the specific question, a "compelling reason"? Well, if you want absolute maximum performance in a SUM(x) where x could be either DECIMAL or MONEY, then MONEY will have an edge.
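If you want to measure that edge yourself, here is a rough sketch (the temp table, sample value, and row-generation trick are made up for illustration; actual timings depend entirely on your hardware and data):
set nocount on
create table #t (m money, d decimal(19,4))

-- load the same sample value into both columns; any convenient row generator will do
insert #t (m, d)
select top (1000000) 1.2345, 1.2345
from sys.all_objects a cross join sys.all_objects b

set statistics time on
select sum(m) from #t   -- MONEY: integer-style accumulation under the covers
select sum(d) from #t   -- DECIMAL: expected to use a little more CPU, per the argument above
set statistics time off

drop table #t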
Also, don't forget its smaller cousin, SMALLMONEY: just 4 bytes, but it maxes out at 214,748.3647, which is pretty small for money, and so it is not often a good fit.
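As a quick sketch of that ceiling, the second assignment below fails with an arithmetic overflow error:
declare @s smallmoney
set @s = 214748.3647    -- fine: this is the maximum SMALLMONEY value
select @s
set @s = 214748.3648    -- arithmetic overflow: out of range for SMALLMONEY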
To prove the point around using larger intermediate types: if you assign the intermediate explicitly to a variable, DECIMAL suffers the same problem:
declare @a decimal(19,4)
declare @b decimal(19,4)
declare @c decimal(19,4)
declare @d decimal(19,4)
select @a = 100, @b = 339, @c = 10000
set @d = @a/@b      -- 100/339 = 0.29498525...; storing it in decimal(19,4) rounds it to 0.2950
set @d = @d*@c      -- 0.2950 * 10000
select @d           -- 2950.0000
Produces 2950.0000 (okay, so at least DECIMAL rounded where MONEY would have truncated, the same as an integer would).
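For comparison, a sketch of the MONEY counterpart (the same numbers, broken into explicit steps), where the intermediate is truncated instead:
declare @ma money
declare @mb money
declare @mc money
declare @md money
select @ma = 100, @mb = 339, @mc = 10000
set @md = @ma/@mb       -- money/money truncates the intermediate to 0.2949
set @md = @md*@mc
select @md              -- 2949.0000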
DECIMAL(19, 4) is a popular choice; also check World Currency Formats to decide how many decimal places to use. Hope this helps. – Shaiju T