Key Recommendations for Creating a SQL Server Maintenance Plan
SQL Server has added thousands of features over the last 25+ years, but the core engine has stayed the same, along with the tasks required to maintain it. What has changed are the features and options for how we manage and maintain SQL Server.
In this article, we will focus on the main considerations that are key in supporting your data environment across four categories:
- Data Protection
- Index and Statistics Maintenance
- Data Integrity
- Database Patching
Combine these features with other business knowledge and best practices to create proactive SQL Server Maintenance Plans to keep your servers and databases healthy.
The Necessity of Data Protection within SQL Server Maintenance
Data is one of the most critical and important resources for every company, and data loss could be a career or business-ending event. In the event of a crisis like database corruption, the business will need the database online as quickly as possible.
It is important that, from the very beginning, all user and system databases along with encryption keys are backed up and stored in a safe location. The individual or team performing this task needs to see beyond just taking the backups and reviewing the backup history: the backup files need to be validated to ensure they can actually be restored.
The best way to validate backup files for critical databases is to restore the database to a non-production environment and query it. This not only validates that the data can be read, but also that the restore process itself works. Knowing this process works pays off when a crisis such as database corruption strikes and the business needs the database online and available as soon as possible. To further reduce restore times, another best practice is to save off the restore scripts.
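A saved-off restore script might look like the following sketch. The database name, backup share, and file paths (`SalesDB`, `\\backupshare\...`, drive letters) are placeholders to adapt to your environment:

```sql
-- Restore a production backup into a non-production validation copy.
-- All names and paths below are hypothetical examples.
RESTORE DATABASE SalesDB_Validate
FROM DISK = N'\\backupshare\SalesDB\SalesDB_Full.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Validate.mdf',
     MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Validate.ldf',
     STATS = 10;  -- progress messages every 10%

-- Smoke-test the restored copy by querying a critical table.
SELECT TOP (10) *
FROM SalesDB_Validate.dbo.Orders
ORDER BY OrderDate DESC;
```

Keeping this script in source control means that during a real outage you are editing a known-good command rather than writing one from scratch.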
A second way to validate the backups is to run the BACKUP process with the CHECKSUM option, which verifies the checksums on the data pages as the backup is written.
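Adding the option is a one-line change to the backup command. Again, the database name and path are placeholders:

```sql
-- Verify page checksums while the backup is written; the backup fails
-- if a damaged page is encountered, rather than silently capturing it.
BACKUP DATABASE SalesDB
TO DISK = N'\\backupshare\SalesDB\SalesDB_Full.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;
```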
Another way to check those backup files at a high level is to run RESTORE HEADERONLY, which reads the header information from the backup file. Keep in mind that this option does not validate the data pages in the backup, but it does confirm that the file itself is readable and not corrupt at a high level.
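The command takes only the backup file location (a placeholder path here) and returns one row per backup set in the file, including backup type, start/finish times, and database version:

```sql
-- Read the backup header; confirms the file is intact at a high level
-- without touching the data pages inside it.
RESTORE HEADERONLY
FROM DISK = N'\\backupshare\SalesDB\SalesDB_Full.bak';
```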
One last point on backups: ensure that all encryption keys and SSIS packages are backed up as well, as they are frequently overlooked when designing an enterprise backup strategy.
Schools of Thought Surrounding Indexing and Statistics for SQL Server Maintenance
There are many schools of thought around SQL Server Index Maintenance and we will discuss them from a high level, but first, let’s talk about the typical database system that supports your Mission Critical System.
When looking at your database system, you want to first answer these key questions:
- What was your data growth rate last year on each SQL Server?
- What was the user growth or transactional growth last year?
- How did the type of data change within the database server over that time?
- Are you offering new products or services that recently went live that would change the data or transactional growth rates?
- Do you have a Data Lifecycle Policy in place? If not, how many years of data is the system storing?
When you step back and look at the answers, you will notice that the database server is a living, breathing system: depending on the success of the business, its dynamics can change anywhere from daily to yearly.
The factors that are presented from answering these key questions introduce other challenges into the system that need to be addressed with SQL Server database maintenance. If you query a table that has STATISTICS that were created nine months ago and have not been updated due to large table size, the query optimizer might not be able to pick the optimal execution plan. Combine that with the fact that three new products have been recently released and sales regions have been changed – the STATISTICS will most likely not represent that new data.
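To see how stale your statistics actually are, you can compare each statistic's last update date against the number of modifications since then. A sketch using `sys.dm_db_stats_properties` (available on recent SQL Server versions; verify availability on yours):

```sql
-- Statistics age and row modifications since the last update,
-- worst offenders first. Run in the context of the target database.
SELECT OBJECT_NAME(s.object_id)              AS table_name,
       s.name                                AS stats_name,
       STATS_DATE(s.object_id, s.stats_id)   AS last_updated,
       sp.rows,
       sp.modification_counter               -- changes since last update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;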
At the INDEX level, suppose one hundred million rows have just been INSERTED into the core user table, but the indexes and statistics have not been maintained. Fragmentation will occur on that table's indexes, which can impact memory health, increase the number of IOPS due to the increased number of pages that hold the data, and drive up CPU usage as the server manages all the extra outstanding I/Os.
Before fragmentation is assumed to be bad, an important concept needs to be reviewed. In a transactional system, result sets tend to be smaller and the write ratio is typically higher. If the data pages have a high amount of fragmentation, there will be free space on the existing pages to INSERT new data into, so fragmentation may actually help a transactional system, especially considering that most systems now run on solid-state disks that do not spin. With older, traditional spinning disks, there was a physical penalty for fragmentation both at the file level and on retrieval.
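Before deciding what to rebuild, it helps to measure fragmentation rather than assume it. A sketch using `sys.dm_db_index_physical_stats` against the current database (the 20% and 1,000-page thresholds are common starting points, not fixed rules):

```sql
-- Indexes in the current database above 20% fragmentation.
-- Small indexes (under ~1,000 pages) are excluded; rebuilding them
-- rarely changes anything measurable.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 20
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```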
Taking all these factors into account along with the features that SQL Server contains, the following is recommended:
- Enable trace flag 2371 (DBCC TRACEON (2371, -1)) on SQL Server 2014 or earlier; it lowers the automatic statistics update threshold for large tables and is the default behavior starting with SQL Server 2016
- Execute Index REBUILD maintenance daily
- Objects with fragmentation greater than 20%
- SORT in TempDB and REBUILD ONLINE
- Rebuild individual partitions where possible
- Performing this nightly limits fragmentation to whatever accumulates in a 24-hour period
- Rebuild indexes first before updating STATISTICS
- Log the daily actions and reduce the FILLFACTOR for objects that are rebuilt frequently
- Execute STATISTICS maintenance daily
- UPDATE with FULLSCAN on the sample size
- If there are very large objects, SAMPLE is adequate but review the HISTOGRAM data and compare to the application parameters
- Update STATISTICS that have higher change rates and have not been updated in a while
- Check the STATS_DATE
- UPDATE with FULLSCAN on the sample size
- Do not use the built-in SQL Server maintenance plans, as they are not intelligent: they rebuild all objects regardless of fragmentation level
- Consider using the https://ola.hallengren.com/ maintenance processes which are widely used around the world and free
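Most of the recommendations above map directly onto parameters of the Ola Hallengren `IndexOptimize` procedure. The sketch below reflects the parameter names documented at ola.hallengren.com; verify them against the version you install, and tune the thresholds to your own workload:

```sql
-- Nightly index and statistics maintenance, per the recommendations above:
-- rebuild online where possible, sort in tempdb, 20%+ fragmentation
-- threshold, update modified statistics, and log actions to a table.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 20,
    @SortInTempdb = 'Y',
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @LogToTable = 'Y';
```

The logging table the procedure maintains also gives you the history needed to spot objects that fragment frequently, which is where the FILLFACTOR adjustments mentioned above come in.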
Data Integrity as it Applies to SQL Server Maintenance
Ensuring the database does not have corruption is just as important as being able to access the data quickly. Often, the processes and alerts to detect data corruption are overlooked. In our experience, there is a tendency to rely on BACKUPs to protect from data loss, but proactively detecting corruption in SQL Server is equally important.
There are a couple of ways to approach this challenge in SQL Server:
- Alerting – Set up SQL Server Alerts on 823, 824, and 825 which will detect different types of issues related to IO issues.
- Integrity Checks – Run DBCC CHECKDB or DBCC CHECKTABLE, which will traverse the data pages within a TABLE or DATABASE to determine if there are any issues reading the data pages and traversing the links between them. Once the DBCC process has run, review the output to validate that there are no consistency or allocation errors. Often, these processes are run but the time is not taken to review the output, which leaves corruption at risk of going undetected.
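Both approaches can be sketched in a few statements. The alert names below are placeholders, and the alerts still need to be wired to an operator notification (`sp_add_notification`) to be useful:

```sql
-- 1) Alert on I/O errors 823/824/825. Severity must be 0 when a
--    specific message id is supplied. Repeat for 824 and 825.
EXEC msdb.dbo.sp_add_alert
    @name = N'Error 823 - I/O hardware error',   -- placeholder name
    @message_id = 823,
    @severity = 0,
    @include_event_description_in = 1;

-- 2) Full integrity check; NO_INFOMSGS suppresses the noise so that
--    any output remaining is an error worth reading.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Capturing the CHECKDB output to a table or job history, and actually reading it, is the step most often skipped.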
The Importance of Database Patching within a SQL Server Maintenance Plan
Another important consideration is database patching as it is a critical component of having a supported system. To have a supported system from Microsoft, the system needs to be on a supported version of SQL Server that is patched to a recent patch level.
Database patching will help to keep databases up to date, minimize downtime, and improve the overall health of your databases – keeping them available, scalable, and capable of handling the workload.
A great aspect of SQL Server database patches is that they are cumulative – meaning that if some patches happened to be skipped, they do not need to be applied individually or in order, because the latest patch includes them. Service packs and cumulative updates are the two types of patches for SQL Server, with service packs considered the major patch level. Microsoft has moved to offering only cumulative updates going forward, so take that into account when designing your patching strategy. We do recommend taking CU1, as it contains the features and bug fixes that did not make the RTM version of SQL Server.
When you are ready to deploy a patch, make sure that your databases are backed up, and that you have tested the patch in a non-production environment so you can catch the estimated 7% of bugs in the patch before they impact your production SQL Server.
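To know where each server stands before and after patching, you can query the version properties directly. `ProductUpdateLevel` is only populated on more recent SQL Server versions, so expect NULL on older builds:

```sql
-- Current build, service pack level, and cumulative update level.
SELECT SERVERPROPERTY('ProductVersion')     AS build_number,   -- e.g. 15.0.4123.1
       SERVERPROPERTY('ProductLevel')       AS sp_level,       -- e.g. RTM, SP2
       SERVERPROPERTY('ProductUpdateLevel') AS cu_level;       -- e.g. CU12, or NULL
```

Recording this output across the estate makes it easy to spot servers that fell behind the patching strategy.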
Concluding Thoughts on SQL Server Maintenance
While SQL Servers are very dynamic and perform many tasks automatically, they still need regular maintenance processes put into place to ensure the data is protected, STATISTICS are updated, indexes are managed, and the data is free of corruption.
By considering these key tips and considerations, putting together a SQL Server maintenance plan will become second nature to you. You’ll be able to outline a plan, deploy it, and keep the business’s database healthy and available.