Andrew Thomas


Database Performance Optimisation — Data Types

Nov 20, 2019 SQL

We are now at the point where our database is around 40 GB and growing by 200 MB per day. Having just added more RAM to help the servers perform better, I had to start looking deeply into how to make the database itself more efficient. The database is as normalized as possible (with the exception of a few extra smallint columns that make some reports significantly faster), so I had to dig deeper to see where more savings could be made.
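Before changing anything, it helps to see where the space is actually going. A minimal sketch (assuming SQL Server, which the data types below suggest) that lists the largest tables by reserved space, including their indexes:

    -- List the largest tables by reserved space to see where type changes will pay off most.
    -- Pages are 8 KB, so reserved_page_count * 8 / 1024 gives MB.
    SELECT
        t.name                                 AS table_name,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables AS t ON t.object_id = ps.object_id
    GROUP BY t.name
    ORDER BY reserved_mb DESC;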

Interestingly, where I found most of the savings was in changing data types to something smaller wherever possible, even if it meant a loss of precision (for instance, smalldatetime is only accurate to the nearest minute, but that is fine in most situations). This really had an impact on table size, and index sizes dropped dramatically. Overall the database shrank by 30%, and indexes on the large tables were reduced by around 20% too.
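The per-value savings are easy to verify in SQL Server with DATALENGTH, which returns the number of bytes a value occupies:

    -- Compare the storage size, in bytes, of the wider and narrower types.
    SELECT
        DATALENGTH(CAST(GETDATE() AS datetime))      AS datetime_bytes,      -- 8
        DATALENGTH(CAST(GETDATE() AS smalldatetime)) AS smalldatetime_bytes, -- 4
        DATALENGTH(CAST(1 AS int))                   AS int_bytes,           -- 4
        DATALENGTH(CAST(1 AS smallint))              AS smallint_bytes,      -- 2
        DATALENGTH(CAST(1.00 AS decimal(18,2)))      AS decimal_18_2_bytes,  -- 9
        DATALENGTH(CAST(1.00 AS decimal(9,2)))       AS decimal_9_2_bytes,   -- 5
        DATALENGTH(N'hello')                         AS nvarchar_bytes,      -- 10 (2 bytes per character)
        DATALENGTH('hello')                          AS varchar_bytes;       -- 5 (1 byte per character)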

The data type changes I made were (see the sketch after the list for how they can be applied):

  • int -> smallint (for heavily indexed columns where we will never have more than 32,000 items)
  • datetime -> smalldatetime (saves a lot of space in our high-transaction tables, and the loss of precision is acceptable)
  • nvarchar -> varchar (we could only do this for text we could confirm was not Unicode, but the savings were large)
  • decimal(18,2) -> decimal(9,2) (we never needed the extra precision)
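A minimal sketch of applying these changes; the table and column names are hypothetical, since the real schema isn't shown in the post. Indexes that include a column usually have to be dropped before altering it and recreated afterwards, and a rebuild is needed to actually reclaim the space:

    -- Hypothetical table and columns, for illustration only.
    ALTER TABLE dbo.OrderLines ALTER COLUMN Quantity  smallint      NOT NULL; -- int -> smallint
    ALTER TABLE dbo.OrderLines ALTER COLUMN CreatedAt smalldatetime NOT NULL; -- datetime -> smalldatetime
    ALTER TABLE dbo.OrderLines ALTER COLUMN Notes     varchar(200)  NULL;     -- nvarchar(200) -> varchar(200)
    ALTER TABLE dbo.OrderLines ALTER COLUMN LineTotal decimal(9,2)  NOT NULL; -- decimal(18,2) -> decimal(9,2)

    -- Rebuild the indexes so the smaller row size is reflected on disk.
    ALTER INDEX ALL ON dbo.OrderLines REBUILD;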