So I'm working on normalizing the database the other day. I want to move some columns from a table that I'm about to make vanish over to another table, and I can't remember the syntax for it. I should have hit up The Google (the SQL syntax for creating tables and adding columns really is dead simple), but instead I was extra lazy, fired up the table designer inside SQL Server Management Studio, and scripted out the changes to the tables.
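For the record, the bit I couldn't be bothered to remember is roughly this - table and column names are made up for illustration, so don't go looking for them in my schema:

-- add the new column to the table that's sticking around
alter table destination_table
    add some_column int null
go

-- copy the data over before the old table vanishes
update d
    set d.some_column = s.some_column
from destination_table d
join source_table s on s.id = d.id
go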
I finish that up, finish modifying the stored procedures, and everything looks good. I run the SQL script and fire my black-box tests against it.
Hmm. Things ain't working right. The column I just defined to hold a decimal is causing problems. I check my stored procedures to make sure I'm not doing anything insane, check my business logic to make sure I didn't bork anything there, and it all looks good.
I take what I'd just scripted and distill it down to its essence:
create table scratch
(
    percentage decimal(18,0)
)
go

insert into scratch values (0.5)

select * from scratch
Not-so-curiously, the scratch table now contains a lone value of 1.
When you look at it, it's obvious what went wrong here - I just defined a decimal with a scale of zero, meaning zero digits after the decimal point. Or, as I like to call it, a fucking integer.
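What I actually wanted was a nonzero scale. Something like this - the 4 is just me picking a plausible scale for a percentage, not anything the designer would have handed me:

create table scratch
(
    percentage decimal(18,4)   -- 18 digits total, 4 of them after the decimal point
)
go

insert into scratch values (0.5)

select * from scratch
go

Run that and the select comes back with 0.5000, like a sane person would expect.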
I obviously biffed in the first place, but this left me scratching my head - why on earth would you make the default for a decimal (effectively) an integer?
As an added bonus head-scratcher, there's no warning about the loss of precision or the silent rounding when you store the value in there. I don't know what should happen there (do you really want an exception for that sort of thing?), but I do know that it rhymes with "leaky abstraction." I'm not an RDBMS guru, people.
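You don't even need a table to see the silent rounding - as far as I can tell, the implicit conversion on insert behaves the same way as an explicit cast:

select cast(0.5 as decimal(18,0))    -- comes back as 1, no warning
select cast(0.49 as decimal(18,0))   -- comes back as 0, still no warning
go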
Times like these, I understand why it's regarded as best practice to treat money as two integer values (dollars and cents!) - databases are a bit funky when it comes to storing your decimal values.
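For the curious, here's a minimal sketch of that idea - table and column names invented for the example, and plenty of shops just keep a single integer of total cents instead:

create table invoices
(
    id int identity primary key,
    amount_dollars int not null,
    amount_cents tinyint not null,   -- 0 through 99
    constraint ck_invoices_cents check (amount_cents between 0 and 99)
)
go

insert into invoices (amount_dollars, amount_cents) values (19, 99)   -- $19.99
go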