It’s Brand New
If you’re looking to scope out hardware for SQL Server, whether it’s in the cloud, or… not in the cloud, you’re gonna wanna get memory right.
Lots of people complain about SQL Server being a memory hog, without pausing to reflect on the fact that they’re data hogs.
You don’t like SQL Server taking up all that memory? Drop all those big databases.
There are a number of things you can look at on your current server that tie in to how much memory you’ll need in the future:
- Wait Stats
- Size of data
- Current indexing
- Data growth rate
I have wait stats there because the first thing you should look at to see if you have adequate memory is PAGEIOLATCH_** waits. If they’re more than 10% of your server’s uptime, you might be losing a whole bunch of query performance to getting the data you need into memory. Of course, from here you need to monitor when those waits are happening to figure out if they’re affecting critical user queries, or just happening overnight during maintenance.
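Here's a rough sketch of that check, comparing `PAGEIOLATCH_*` wait time to server uptime with standard DMVs. The 10% cutoff is the rule of thumb above, not a hard limit:

```sql
/* PAGEIOLATCH waits as a percentage of server uptime.
   DATEDIFF_BIG avoids integer overflow on long-running servers (2016+). */
SELECT
    ws.wait_type,
    ws.wait_time_ms / 1000. / 60. AS wait_time_minutes,
    (ws.wait_time_ms * 100.) /
        (SELECT DATEDIFF_BIG(MILLISECOND, si.sqlserver_start_time, SYSDATETIME())
         FROM sys.dm_os_sys_info AS si) AS pct_of_uptime
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_type LIKE N'PAGEIOLATCH%'
ORDER BY ws.wait_time_ms DESC;
```

Remember that these numbers are cumulative since the last restart (or since someone cleared wait stats), so a big percentage from six months of uptime may smooth over the painful stretches.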
The size of your data is going to depend on how big your tables are, and how many indexes you have on them. If you have a lot of unused or duplicative indexes, you’re going to have a much bigger database than is necessary. That’s going to inflate the amount of memory you currently “need”, because all of those cruft indexes are going to be competing for space in the buffer pool.
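One sketch of a starting point for finding those cruft indexes: nonclustered indexes that take writes but get no reads. Keep in mind `sys.dm_db_index_usage_stats` resets on restart, so check your uptime before trusting it:

```sql
/* Nonclustered indexes in the current database with zero reads
   but ongoing write overhead since the last restart. */
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    COALESCE(ius.user_updates, 0) AS writes
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
    ON  ius.object_id  = i.object_id
    AND ius.index_id   = i.index_id
    AND ius.database_id = DB_ID()
WHERE i.index_id > 1 /* skip heaps and clustered indexes */
AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
AND   COALESCE(ius.user_seeks, 0)
    + COALESCE(ius.user_scans, 0)
    + COALESCE(ius.user_lookups, 0) = 0
ORDER BY writes DESC;
```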
An easy first step to making the best possible use of the RAM you currently have is to make sure you have only the most beneficial indexes in place. Once you’ve got that done, you can go even further by looking at compression. Compressed indexes squish things down on disk and in the buffer pool.
- How row compression works
- How page compression works
- Data Compression Costs & Benefits
- Data Compression
- Columnstore Indexes: Overview
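If you want to see what compression would buy you before committing, SQL Server ships a system procedure that estimates the savings. The table and index names below are placeholders:

```sql
/* Estimate PAGE compression savings for every index on a table.
   dbo.Users is a placeholder name -- point this at your own table. */
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'Users',
    @index_id         = NULL,  /* NULL = all indexes on the table */
    @partition_number = NULL,
    @data_compression = N'PAGE';

/* If the numbers look good, a rebuild applies it: */
ALTER INDEX IX_example ON dbo.Users
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```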
Some databases have predictable growth patterns based on regular usage. It’s up to you to trend that stuff for the most part.
There’s a disk usage report under each database’s standard reports that you can get to with a right click, but it only shows growth events that are still in the default trace file. They could be aged out by the time you go looking for them.
Another idea would be to look at backup sizes over the past year or so to see how they’ve grown.
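A sketch of that, pulling monthly full backup sizes out of msdb's backup history. `YourDatabase` is a placeholder, and `backup_size` is in bytes:

```sql
/* Monthly full backup sizes for one database, as a growth trend. */
SELECT
    YEAR(bs.backup_finish_date)  AS backup_year,
    MONTH(bs.backup_finish_date) AS backup_month,
    MAX(bs.backup_size) / 1024. / 1024. / 1024. AS max_backup_size_gb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'YourDatabase'
AND   bs.type = 'D' /* full backups only */
GROUP BY YEAR(bs.backup_finish_date), MONTH(bs.backup_finish_date)
ORDER BY backup_year, backup_month;
```

Just mind your msdb cleanup jobs: if history gets purged aggressively, you won't have a year to look back on.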
Other databases are unpredictable based on acquisitions, large customer onboarding, big sales, sudden success, etc.
Of course, if you haven’t tended to indexes or compression like I listed above, these growth trends may not hold up once you do. This is something I’d look at last, after taking care of the index stuff.
All This And More
An additional consideration that adds quite a bit of complication is query memory grants.
The difficulty here is twofold. You may have query and index tuning to do to reduce the memory grants queries ask for, and when memory pressure arises on a server, the plan cache (where most people wanna go look for queries asking for large memory grants) gets cleared out constantly. That can make tracking them hard.
I know, you’re thinking that Query Store is awesome for this, but it’s not. It only tracks memory that queries used, not what they were granted. A query can ask for a huge memory grant, only use a small fraction of it, and there’s no sign of that in Query Store. Brick and mortar, indeed.
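What you can do is catch grants in flight. This is a point-in-time snapshot, so you'd want to sample it on a schedule rather than run it once:

```sql
/* Queries currently holding (or waiting on) memory grants,
   biggest askers first. Requested vs. used shows the over-askers. */
SELECT
    mg.session_id,
    mg.requested_memory_kb,
    mg.granted_memory_kb,
    mg.used_memory_kb,
    t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
ORDER BY mg.requested_memory_kb DESC;
```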
Most people aren’t ready to truly speculate on hardware needs when I talk to them, but they can get there. If you want a basic guess? Try to get enough memory to cache 50-75% of your server’s data in memory. That should get you enough to cache what you need, deal with memory growth, and deal with memory grants.
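As a back-of-napkin version of that math, you can total up your data file sizes and apply the 50-75% range. File size is in 8KB pages, hence the arithmetic:

```sql
/* Total user data file size, with 50% and 75% caching targets. */
SELECT
    SUM(mf.size) * 8 / 1024. / 1024.        AS total_data_gb,
    SUM(mf.size) * 8 / 1024. / 1024. * 0.50 AS low_end_target_gb,
    SUM(mf.size) * 8 / 1024. / 1024. * 0.75 AS high_end_target_gb
FROM sys.master_files AS mf
WHERE mf.type_desc = N'ROWS'
AND   mf.database_id > 4; /* skip system databases */
```

Treat the output as a starting point for the conversation, not a purchase order.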
And you know, you’ll probably wanna pad that a little bit if you’re going to the cloud, where storage gear and networking is basically electric porridge.
Thanks for reading!
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that and need to solve performance problems quickly.
2 thoughts on “How Do I Estimate How Much Memory My SQL Server Needs?”
Yeah, I’m stuck with Azure IaaS at the moment and it is not happytimes. 16GB should be fine for us for now but with Azure’s default disk setups it’s like trying to run stuff on a Raspberry Pi.
Oh lord, yeah, I hear that a lot.