When I started my new job as a Database Administrator here, I found that almost all of our clients' databases share the same mistakes. Some of them are missing changes to the same default settings, or are not implementing the same maintenance tasks. There are probably more things missing on our clients' SQL Server instances, but I will write down a few that I have found since I got here:
- Memory Settings
- Database Backups
- DBCC CHECKDB
- Index Maintenance
- MAXDOP and Cost Threshold for Parallelism
- SQL Server Agent Alerts
Memory Settings

A default install of Microsoft SQL Server has the minimum server memory value set to 0 and the maximum server memory value set to 2147483647 MB. If these values go unchanged, the operating system can run out of memory because the SQL Server instance uses all of it; or if the server is shared with another application, that application can run out of memory.

It is highly recommended to set minimum and maximum memory values for the SQL Server instance, to ensure that SQL Server neither runs out of memory nor uses all of the system memory. There is no single number that fits every workload, but I usually set minimum server memory to 30% of total memory, and if the server is dedicated to the SQL Server instance, I set maximum server memory to 80% of total memory.
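As a sketch of that rule of thumb, here is how the two settings can be changed with `sp_configure`. The numbers assume a hypothetical dedicated server with 32 GB (32768 MB) of RAM; adjust them for your own hardware.

```sql
-- Assumed example: dedicated server with 32768 MB of RAM.
-- min = ~30% of total, max = ~80% of total.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)', 9830;   -- ~30% of 32768
EXEC sp_configure 'max server memory (MB)', 26214;  -- ~80% of 32768
RECONFIGURE;
```

Both settings take effect without a restart, so you can tune them gradually while watching memory counters.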
Database Backups

I checked the recovery model used by all of our customers' databases, then checked the recent backup history for each database, and I found the following on almost all of our customers' instances:
- No backups at all, or the most recent full backup was a very long time ago
- Missing log backups: the database uses the full recovery model, but a transaction log backup has never been taken since the database was created, or has not been taken for a very long time
When I find this backup issue on a customer's SQL Server instance, I immediately take a full backup of every database and then create a schedule to back up all databases regularly.
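For the one-off fix, the commands look like the sketch below. The database name and file paths are illustrative only; in practice the scheduled version would run as a SQL Server Agent job or a maintenance script.

```sql
-- Immediate full backup (database name and path are illustrative).
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backup\SalesDB_full.bak'
WITH CHECKSUM, STATS = 10;

-- For databases in the FULL recovery model, regular transaction log
-- backups are also required, or the log file will keep growing.
BACKUP LOG SalesDB
TO DISK = N'D:\Backup\SalesDB_log.trn'
WITH CHECKSUM;
```

`WITH CHECKSUM` makes the backup verify page checksums as it reads, which catches some corruption at backup time.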
DBCC CHECKDB

When I checked the last time DBCC CHECKDB had been run against our customers' databases, I found that it had never been run at all. Why should you run DBCC CHECKDB against a database? The brief answer: to check the database for corruption. I think this is one of the most important tasks that should be run regularly against all customer databases.
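Running the check is a one-liner; the database name below is illustrative.

```sql
-- Full logical and physical integrity check.
-- NO_INFOMSGS suppresses the informational output so real errors stand out;
-- ALL_ERRORMSGS reports every error found, not just the first ones.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

On large databases this can be resource-intensive, so it is usually scheduled during a low-activity window.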
Index Maintenance

Performing index maintenance by removing index fragmentation is important. Index fragmentation can have a negative impact on a database, depending on the size of the environment and the level of fragmentation. Fragmentation occurs through normal operations such as inserts, updates, and deletes, so proper index maintenance, rebuilding or reorganizing indexes, is needed.
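As a minimal sketch, the two maintenance operations look like this; the index and table names are hypothetical, and the commonly cited guideline is to reorganize at light fragmentation (roughly 5–30%) and rebuild above that.

```sql
-- Light fragmentation: reorganize (always online, resumable).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- Heavy fragmentation: rebuild the index from scratch.
-- ONLINE = ON requires Enterprise Edition; omit it otherwise.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON);
```

Fragmentation levels can be checked with the `sys.dm_db_index_physical_stats` dynamic management function before deciding which operation to run.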
MAXDOP and Cost Threshold for Parallelism
Max degree of parallelism (MAXDOP) and cost threshold for parallelism are usually left at their default values. For MAXDOP, the default value is 0, which means an unlimited number of CPUs can be used to execute a parallel region of a query. Microsoft's guidance is that if your server has more than 8 logical processors, MAXDOP should be set to 8; if it has 8 or fewer, MAXDOP should equal the number of logical processors.
Cost threshold for parallelism has a default value of 5. This setting specifies the threshold at which SQL Server creates and runs parallel plans for queries. I usually configure this option to 25, then monitor performance before adjusting the value further.
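Both options are changed through `sp_configure`. The sketch below assumes a server with 16 logical processors, so MAXDOP is capped at 8 per the guidance above.

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Assumed example: server with 16 logical processors.
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
```

Neither change requires a restart, but existing cached plans are not recompiled immediately, so the effect appears as plans age out of the cache.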
SQL Server Agent Alerts
Everyone should use SQL Server Agent alerts, unless a third-party application is already monitoring all SQL Server Agent jobs for these conditions. I usually configure SQL Server Agent to send an alert by email when a job fails, so I can immediately check what actually happened on that server.
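A minimal sketch of that setup: create an operator, then tell a job to email that operator on failure. The operator name, email address, and job name are hypothetical, and this assumes Database Mail is already configured and enabled for SQL Server Agent.

```sql
-- Create an operator to receive notifications (details are illustrative).
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba@example.com';

-- Have an existing job email the operator when it fails.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Nightly Full Backup',        -- hypothetical job name
    @notify_level_email = 2,                   -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';
```

The same operator can also be attached to SQL Server Agent alerts for severity 17+ errors, which covers corruption and resource problems outside of job failures.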
These are a few of the things I have checked and found on our customers' databases. There are probably more things that should be checked on a SQL Server instance, but I use this list as a starting point.