Basic settings for SQL Server 2012/2014 - part 1


It's been a while since I last penned a blog entry here, but I just might have come to the end of my hiatus. We're going to make a few changes, and the most obvious one is the fact that this blog post is written in English. It's also a bit more technical than our usual programming here, as it is a copy of a post from my own blog, posted here for your reading pleasure!

Whenever I set up an SQL Server I usually do the same things. I've collected a list of best practice settings over the years (and most often put them in a script, as most of my friends are scripts). Over the next few blog posts I'll go over the settings I use and discuss why I've chosen to set the parameters the way I do.

Part 1: configuring the server

There are a few hard rules when it comes to setting up the environment for SQL Server. The first two apply to storage:

Always make sure you align your partitions with the underlying stripe size. Windows Server 2008 and later (usually) takes care of this by itself, but always double-check your alignment. Failure to align the partition can lead to significant performance degradation. (See the sketch below for a quick way to check.)

Format the partitions where you will store SQL Server-related files with an NTFS allocation unit size of 64KB. The reason is very simple - SQL Server primarily works with 64KB I/O (one 64KB extent = 8 pages of 8KB each). With the default NTFS allocation unit size of 4KB, the server has to do 16 I/Os for each extent - not optimal in the least. (In some cases with Analysis Services, 32KB might yield better results. See the link below for further information.)
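If you want a quick way to verify both of these from a query window, something along these lines works - a minimal sketch, assuming xp_cmdshell is enabled (many shops keep it off, in which case just run the same two commands from an elevated command prompt) and that D: is a hypothetical data drive:

```sql
-- Partition alignment: StartingOffset should be evenly divisible by the
-- underlying stripe size (1048576 bytes, i.e. 1MB, is a common default).
EXEC master..xp_cmdshell 'wmic partition get Index, Name, StartingOffset';

-- NTFS allocation unit size: look for "Bytes Per Cluster" in the output,
-- which should read 65536 on the SQL Server data, log and tempdb volumes.
EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo D:';
```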

More information:

SQL Server loves memory, and in my opinion you can't give it too much. Many of my clients have a tendency to give it way too little (6-8GB) when double that amount might be a better starting point. This all depends (as always) on the workload and the requirements, but remember - memory transfer is fast, disk transfer is slow - regardless of whether you're sporting all SSDs.
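Whatever amount you land on, cap it explicitly rather than leaving the default (unlimited). A minimal sketch - the 16384MB figure below is purely an example for a hypothetical box with 20GB of RAM, not a recommendation for your workload:

```sql
-- Example only: cap SQL Server at 16GB on a hypothetical 20GB server,
-- leaving the rest for the OS and anything else running on the box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```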
Jeremiah Peschka has a great blog post that summarises my thinking:

Allen McGuire wrote a great set of scripts to harvest and report on SQL Server IO that can be useful:

CPU choice is another interesting topic. Remember a few years ago when it was all about the MHz? Well, these days it's all about cores, and since licenses are purchased per core, piling on cores is not necessarily a good idea. Parallelizing is a difficult concept, and depending on your workload it might be more or less tricky to pull off. In a worst case scenario with, say, SharePoint - where you are forced to set Max Degree of Parallelism to 1 - a CPU with many slow cores will give you worse performance than a CPU with fewer fast cores.
In general, I prefer a few fast cores over many slow ones. Most of my clients' workloads benefit more from this choice.
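Max Degree of Parallelism itself is just an sp_configure setting. A hedged sketch - the value of 1 below is the SharePoint case from above; the right number depends entirely on your workload (the common guidance is to start at the number of cores per NUMA node, capped at 8, and adjust from there):

```sql
-- Example only: MAXDOP 1 is what SharePoint demands; most other workloads
-- want a higher value, typically the number of cores per NUMA node (max 8).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```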

Glenn Berry lays it out here:

Instant File Initialization is something that should always be turned on. If it is not, then every time a data file is created or grown it needs to be filled with zeroes. Not zeroes like myself, but actual zero bits. This means a huge hit on the storage system whenever a file is created. With instant file initialization, the file is allocated without zeroing out the underlying data, which means instant file creation and next to no I/O impact. This affects restores too, since the data files otherwise have to be zeroed out before the restore can even begin.
Sure, the blocks are not zeroed and hence can be read with a hex editor. But as Kevin Boles (TheSQLGuru) put it in a presentation for SQL Server User Group Sweden: "if someone is reading your SQL Server data files with a hex editor, you are already pwnd".
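On 2012/2014 there is no DMV that tells you whether instant file initialization is actually in effect (that arrived later), but you can verify it with trace flags 3004 and 3605: create a throwaway database and look for zeroing messages in the error log. A rough sketch, meant for a test instance - the database name is just a placeholder:

```sql
-- Trace flag 3004 logs file zeroing operations, 3605 routes them to the error log.
DBCC TRACEON(3004, 3605, -1);

CREATE DATABASE IFI_Test;   -- throwaway database, only there to trigger file creation

-- The log file is always zeroed, but if the data (.mdf) file also shows up
-- in the zeroing messages, instant file initialization is NOT working.
EXEC sp_readerrorlog 0, 1, 'Zeroing';

-- Clean up after ourselves.
DBCC TRACEOFF(3004, 3605, -1);
DROP DATABASE IFI_Test;
```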

Check this blog post from the Premier Field Engineering team at Microsoft:

Power settings have a funny way of being ignored. The bad thing is that Windows Server 2012 has a habit of setting the power plan to "Balanced", meaning that you won't get the maximum performance from your server. Set the power plan to "High performance" and keep an eye on it from time to time - it has been known to "magically" change back by itself.
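You can check the active plan with powercfg from the command line, or straight from T-SQL by reading the registry. xp_regread is undocumented, so treat this as a convenience sketch (the GUIDs in the comments are the standard Windows power scheme GUIDs):

```sql
-- Read the GUID of the active power plan from the registry.
-- 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c = High performance
-- 381b4222-f694-41f0-9685-ff5bb260df2e = Balanced (the default you want to avoid)
DECLARE @ActiveScheme NVARCHAR(64);
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
     @value_name = N'ActivePowerScheme',
     @value      = @ActiveScheme OUTPUT;
SELECT @ActiveScheme AS active_power_scheme_guid;
```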