Why MAX Data Types Are Usually A Bad Choice For SQL Server Columns

Easy Rider


When you’re trying to figure out how to store string data, it often seems easiest to just choose an extra long — even MAX — data type to avoid future truncation errors.

Even if you’re storing strings with a known, absolute length, developers may choose not to enforce that in the application, whether via a drop-down menu or some other form of validation.

And so to avoid errors when users try to put their oh-so-important data in their oh-so-expensive database, we get columns added to tables that can fit a galaxy of data in them, when we only need to store an ashtray worth of data.

While getting data into those columns is relatively easy — most application inserts are single rows — getting data out of those columns can be quite painful, whether it’s searching or just presenting in the select portion of a query.

Let’s look at a couple simple examples of how that happens.

Search Engine


Let’s take a query like this one:

SELECT TOP (20)
    p.Id,
    p.Title,
    p.Body
FROM dbo.Posts AS p
WHERE p.Body LIKE N'SQL Server%';

The Body column in the Posts table is nvarchar(max), but the same thing would happen with a varchar(max) column.

If you need a simple way to remember how to pronounce those data types, just remember to Pahk yah (n)vahcah in Hahvahd Yahd.

Moving on – while much has been written about leading wildcard searches (that start with a % sign), we don’t do that here. Also, in general, using CHARINDEX or PATINDEX instead of a leading wildcard LIKE search won’t buy you all that much (if anything at all).

Anyway, since you can’t put a MAX data type in the key of an index, part of the problem with them is that there’s no way to efficiently organize the data for searching. Included columns don’t do that, and so we end up with a query plan that looks something like this:

[Screenshot: SQL Server query plan (ouch in effect)]

We spend ~13.5 seconds scanning the clustered index on the Posts table, then about two minutes and twenty seven seconds (minus the original 13.5) applying the predicate looking for posts that start with SQL Server.

That’s a pretty long time to track down and return 19 rows.
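
If you’re wondering whether you could just index your way out of it, here’s a minimal sketch, assuming the usual Stack Overflow Posts schema (the index names are made up). The first statement fails outright, and the second one is allowed but doesn’t help the search.

/*This fails: MAX data types can't be index key columns*/
CREATE INDEX ix_Posts_Body
    ON dbo.Posts (Body);

/*This is allowed, but an included column just tags along.
  It doesn't organize Body for searching.*/
CREATE INDEX ix_Posts_OwnerUserId_Body
    ON dbo.Posts (OwnerUserId)
    INCLUDE (Body);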

Let’s change the query a little bit and look at how else big string columns can cause problems.

Memory Bank


Rather than search on the Body column, let’s select some values from it ordered by the Score column.

Since Score isn’t indexed, it’s not sorted in the database. That means SQL Server needs to ask for memory to put the data we’re selecting in the order we’re asking for.

SELECT TOP (200)
    p.Body
FROM dbo.Posts AS p
ORDER BY p.Score DESC;

The plan for this query asks for a 5GB memory grant:

[Screenshot: SQL Server query plan (quietly)]

I know what you’re thinking: the Body column probably has some pretty big data in it, and you’re right. In this case, it’s the right data type to use.

The bad news is that SQL Server makes the same memory grant estimate based on the size of the data we need to sort, whether or not the MAX data type is a good choice.

I talk more about that in this Q&A on Stack Exchange.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Using Views To Reduce Memory Grants In SQL Server

We All Have It


You know those tables, right? The ones where developers went and got lazy or didn’t know any better and decided every string column was going to be gigantic.

They may have read, of course, that SQL Server’s super-smart variable length data types only consume necessary space.

It’s free real estate.

Except it isn’t, especially not when it comes to query memory grants.

The bigger a string column’s defined byte length is, the bigger the optimizer’s memory grant for it will be.

Memory Grant Primer


In case you need some background, the short story version is:

  • All queries ask for some memory for general execution needs
  • Sorts, Hashes, and Optimized Nested Loops ask for additional memory grants
  • Memory grants are decided based on things like number of rows, width of rows, and concurrently executing operators
  • Memory grants are divided by DOP, not multiplied by DOP
  • By default, any query can ask for up to 25% of max server memory for a memory grant
  • Approximately 75% of max server memory is available for memory grants at one time

Needless to say, memory grants are very sensitive to misestimates by the optimizer. Going over can be especially painful, because that memory will most often get pulled from the buffer pool, and queries will end up going to disk more.

Underestimates often mean spills to disk, of course. Those are usually less painful, but can of course be a problem when they’re large enough. In particular, hash spills are worth paying extra attention to.

Memory grant feedback does supply some relief under modern query execution models. That’s a nice way of saying it’s probably not what you have going on.
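
If you want to see how those limits shake out on your own server, here’s a quick sketch against sys.dm_exec_query_resource_semaphores. The regular semaphore’s target is roughly the total memory available for grants, and waiter_count tells you if queries are queuing up for it.

SELECT
    deqrs.resource_semaphore_id,
    deqrs.target_memory_kb,
    deqrs.max_target_memory_kb,
    deqrs.total_memory_kb,
    deqrs.available_memory_kb,
    deqrs.granted_memory_kb,
    deqrs.grantee_count,
    deqrs.waiter_count
FROM sys.dm_exec_query_resource_semaphores AS deqrs;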

Query Noogies


Getting back to the point: It’s a real pain in the captain’s quarters to modify columns on big tables, even if it’s reducing the size.

SQL Server’s storage engine has to check page values to make sure you’re not gonna lose any data fidelity in the process. That’s a nice way of saying you’re not gonna truncate any strings.

But if you do something cute like run a MAX(LEN(StringCol)) and see what you’re up against, you can use a view on top of your table to assuage SQL Server’s concerns about such things.
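
Here’s what that check might look like against the Posts table from the earlier demos; swap in your own table and column.

SELECT
    longest_body = MAX(LEN(p.Body))
FROM dbo.Posts AS p;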

After all, functions are temporary. Data types are forever (usually).

An easy way to illustrate what I mean is to look at the details of these two queries:

SELECT TOP (1000)
    p.Body
FROM dbo.Posts AS p
ORDER BY p.Score DESC
OPTION(RECOMPILE);

SELECT TOP (1000)
    Body = 
        CONVERT
        (
            nvarchar(100), 
            p.Body
        )
FROM dbo.Posts AS p
ORDER BY p.Score DESC
OPTION(RECOMPILE);

Whether this works is dependent on the query plan, so let’s look at those.

Pink Belly Plans


You can ignore the execution times here. The Body column is not a good representation of an oversized column.

It’s defined as nvarchar(max), but (if I’m remembering my Stack lore correctly) is internally limited to 30k characters. Many questions and answers are longer than 100 characters anyway, but on to the plans!

[Screenshot: SQL Server query plan (janitor)]

In the plan where the Body column isn’t converted to a smaller string length, the optimizer asks for a 16GB memory grant, and in the second plan the grant is reduced to ~3.5GB.

This is dependent on the Compute Scalar occurring prior to the Top N Sort operator, of course. That’s where the convert function is applied to the Body column, and why the grant is reduced.

If you were to build a view on top of the Posts table with this conversion, you could point queries to the view instead. That would get you the memory grant reduction without the pain of altering the column, or moving the data into a new table with the correct definition.
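
A minimal sketch of that, reusing the conversion from above; the view name is made up, and the 100 character cap only makes sense if you’ve verified your data actually fits in it.

CREATE OR ALTER VIEW dbo.PostsAbridged
AS
SELECT
    p.Id,
    p.Score,
    Body = 
        CONVERT
        (
            nvarchar(100), 
            p.Body
        )
FROM dbo.Posts AS p;
GO

SELECT TOP (1000)
    pa.Body
FROM dbo.PostsAbridged AS pa
ORDER BY pa.Score DESC
OPTION(RECOMPILE);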

Thanks for reading!


CPU & RAM Don’t Lie: Query Metrics I Care About For Tuning

Discarded


There are metrics I care about, and metrics I don’t, when I’m looking for queries to tune.

Metrics I don’t care about:

  • Logical Reads
  • Costs

If a query does “a lot” of reads or has a high “cost”, I generally don’t care as long as it runs quickly. Doing consistent physical reads is a slightly different story, but would probably fall more under server tuning or fixing memory grants.

Metrics I do care about:

  • CPU (taking parallelism into consideration)
  • Duration (compared to CPU)
  • Memory Grants (particularly when they’re not being fully utilized)
  • Writes (especially if it’s just a select)
  • Executions (mostly to track down scalar UDFs)

CPU and Duration


These two metrics get lumped together because they need to be compared in order to figure out what’s going on. First, you need to figure out what the minimum runtime of a query is that you want to tune.

In general, as query execution time gets faster, getting it to be much faster gets more difficult.

  • Bringing a query from 1 second to 100 milliseconds might be a small matter
  • Bringing that same query from 100 milliseconds to 1 millisecond might take more time than it’s worth

I say that because unless someone is querying SQL Server directly, smaller durations tend to be less detectable to end users. By the time they hit a button, send the request, receive the data, and have the application render it etc. they’re probably not aware of a 99 millisecond difference.

Of course, not everything is end-user centric. Other internal operations, especially any loop processing, might benefit greatly from reductions on the smaller side of things.

  • If duration and CPU are acceptable, leave it alone
  • If either is unacceptable, tune the darn thing
  • If CPU is much higher than duration, you have a parallel plan, and tuning is optional
  • If duration is much higher than CPU, you have blocking or another contention issue, and the query you’re looking at probably isn’t the problem
  • If duration and CPU are roughly equivalent, you either have a functional serial plan or a really crappy parallel plan

I give these the highest priority because reducing these is what makes queries faster, and reduces the surface area (execution time) of a query where something crappy might happen, like blocking, or deadlocks, or other resource contention.
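
If you want a rough starting list from the plan cache, comparing CPU to duration is a decent first pass. This is just a sketch, not a replacement for proper monitoring; both time columns in the DMV are in microseconds.

SELECT TOP (20)
    query_text = dest.text,
    deqs.execution_count,
    avg_cpu_ms = 
        (deqs.total_worker_time * 1. / deqs.execution_count) / 1000.,
    avg_duration_ms = 
        (deqs.total_elapsed_time * 1. / deqs.execution_count) / 1000.
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.total_worker_time DESC;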

Memory Grants


Using these as a tuning metric can have a lot of positive effects, depending on what kind of shape the system is in.

Consider a few scenarios:

  • PAGEIOLATCH_XX waits are high because large memory grants steal significant buffer pool space
  • RESOURCE_SEMAPHORE waits are high because queries suck up available memory space and prevent other queries from using it
  • Queries are getting too low of a memory grant and spilling significantly, which can slow them down and cause tempdb contention under high concurrency

Fixing memory grant issues can take many forms:

  • Getting better cardinality estimates for better overall grant estimates
  • Indexing to influence operator choices away from memory consumers
  • Using more appropriate string lengths to reduce memory grants
  • Fixing parallel skew issues that leave some threads with inadequate memory
  • Rewriting the query to not ask for ordered data
  • Rewriting the query to ask for ordered data in smaller chunks
  • Rewriting the query to convert strings to better fitting byte lengths

That’s just some stuff I end up doing off the top of my head. There are probably more, but blog posts are only useful up to a certain length.

Like all other strings.

Writes and Selects


Modification queries are going to do writes. This seems intuitive and not at all shocking. If you have queries that are doing particularly large modifications, you could certainly look into tuning those, but it would be a standard exercise in query or index tuning.

Except that your index tuning adventure would more likely lead you to dropping unused and overlapping indexes to reduce the number of objects you need to write to than to adding an index.

But who knows. Boring anyway. I hear indexes tune themselves in the cloud.

When select queries do a large number of writes, then we’re talking about a much more interesting scenario.

  • Spills
  • Spools
  • Stats updates

Of course, stats updates are likely a pretty small write, but the read portion can certainly halt plan compilation for a good bit on big tables.

Spills and Spools are going to be the real target here. If it’s a spill, you may find yourself tracking back to the memory grant section up above.

Spools, though! What interesting little creatures. I wrote a longer post about them here:

Understand Your Plan: Operators That Write Data (Spools, Spools, Spools)

It has a bit of a link roundup of other posts on my site and others that talk about them, too.

But since we’re living in this now, let’s try to be present. Here’s the short story on spools that we might try to fix:

  • The Spools we typically care about are Table or Index
  • They can be eager or lazy
  • They’ll show up on the inner side of Nested Loops
  • SQL Server uses them as a temporary cache for data
  • They are a good indicator that something is amok with your query or indexes

For eager index spools, the story is pretty simple around creating a better index for SQL Server to use.

For lazy table spools, you have more options:

  • Give SQL Server unique data to work with
  • Get the optimizer to not choose nested loops
  • Use the NO_PERFORMANCE_SPOOL hint to test the query without spools

Of course, there are times where you’re better off with a spool than without. So don’t walk away feeling disheartened if that’s the case.
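
If you want to test that last option, the hint goes in the OPTION clause like any other. This is a made-up example just to show the placement; whether it helps depends entirely on the query.

SELECT
    p.Id,
    p.Title
FROM dbo.Posts AS p
JOIN dbo.Users AS u
    ON u.Id = p.OwnerUserId
WHERE u.Reputation > 100000
OPTION(NO_PERFORMANCE_SPOOL);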

Executions


These are on the opposite end of the spectrum from most of the queries I go after. If a query runs enough, and fast enough, to truly rack up a high number of executions, there’s probably not a ton of tuning you could do.

Sure, sometimes there’s an index you could add or a better predicate you could write, but I’d consider it more beneficial to get the query to not run so much.

That might result in:

  • Rewriting functions as inline table valued functions
  • Handing the queries off to app developers for caching

To learn how I rewrite functions, check out this video.

I know, you can’t rewrite every single function like this, but it’s a wonderful thing to do when you can.
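
For the general idea, here’s a minimal sketch: a scalar UDF that runs row by row, and the same logic as an inline table valued function you can CROSS APPLY instead. The function names and logic are made up for illustration.

/*A typical scalar UDF that runs once per row*/
CREATE OR ALTER FUNCTION dbo.TotalScore_Scalar
(
    @UserId integer
)
RETURNS integer
AS
BEGIN
    RETURN
    (
        SELECT 
            SUM(p.Score)
        FROM dbo.Posts AS p
        WHERE p.OwnerUserId = @UserId
    );
END;
GO

/*The same logic as an inline table valued function*/
CREATE OR ALTER FUNCTION dbo.TotalScore_Inline
(
    @UserId integer
)
RETURNS table
AS
RETURN
    SELECT 
        TotalScore = SUM(p.Score)
    FROM dbo.Posts AS p
    WHERE p.OwnerUserId = @UserId;
GO

/*Called with CROSS APPLY instead of in the select list*/
SELECT TOP (100)
    u.DisplayName,
    ts.TotalScore
FROM dbo.Users AS u
CROSS APPLY dbo.TotalScore_Inline(u.Id) AS ts
ORDER BY ts.TotalScore DESC;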

Anything Other Than


Again, metrics I don’t ever look at are logical reads or costs.

  • Doing reads doesn’t necessarily mean that queries are slow, or that there’s anything you can fix
  • Costs are a meme metric that should be removed from query plans in favor of operator times

Well, okay, maybe not completely removed, but they shouldn’t be front and center anymore.

There are many other more reliable metrics to consider that are also far more interesting.

Thanks for reading!


Common SQL Server Consulting Advice: Enabling Resource Governor To Fix Memory Grants

With Or Without You


One thing I really hate about Standard Edition is the lack of a way to globally reduce the allowed memory grant default.

Out of the box, any query can come along and ask for ~25% of your server’s max server memory setting, and SQL Server is willing to loan out ~75% at once across a bunch of queries.

That impacts Standard Edition users way more than Enterprise Edition users, because of the Standard Edition buffer pool limit of 128GB.

A lot of folks misinterpret that limit — I’ve had several exchanges with big name hardware vendors where they insist 128GB is the overall RAM limit, so you’re likely getting bad advice from everywhere — and they end up with a server that only has 128GB of RAM in it.

Big mistake. Bump that up to 192GB and set Max Server Memory to ~180GB or so.

But I digress. Or whatever the word is for getting back to the real point.

I forget, if I ever knew.

Control Top


For all you high rollers out there on Enterprise Edition, you have an easy way to fight against SQL Server’s bad memory grant habits.

Before we do that, it’s important to make a few notes here:

  • SQL Server introduced batch mode memory grant feedback in 2016
  • SQL Server introduced batch mode on row store in 2019

Depending on your compatibility level, you may not be taking advantage of those things, but in either case the feedback might be kicking in too late. See, it’s not a runtime decision, it’s a decision that takes place after a query runs.

By then, it’s already sucked up 25% of your memory and probably stolen a whole bunch of space from your precious buffer pool. A properly filled buffer pool is important so your queries don’t get dry-docked going out to slowpoke disk all the live long day.

A lot of the time, folks I work with will have a ton of queries asking for bunk memory grants that are way bigger than they should be.

You can use this query to examine your plan cache for those things.

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE (deqs.max_grant_kb - deqs.max_used_grant_kb) > 1024.
    AND   deqs.max_grant_kb > 5242880.
    ORDER BY 
        unused_grant DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY 
    u.unused_grant DESC
OPTION (RECOMPILE, MAXDOP 1);

If you have a bunch of those, and you want a quick fix until you can do more meaningful query and index tuning, you can use Resource Governor to reduce the 25% default to a lower number.

Scripted, For Your Pleasure


You can use this script to enable and reconfigure Resource Governor to use a lower memory grant percent.

/*The first time enables Resource Governor*/
ALTER RESOURCE GOVERNOR 
    RECONFIGURE;

/*This reduces the memory grant cap to your chosen percent*/
ALTER WORKLOAD GROUP 
    [default] 
WITH
(
    REQUEST_MAX_MEMORY_GRANT_PERCENT = ?
);

/*This completes the change*/
ALTER RESOURCE GOVERNOR 
    RECONFIGURE;

You’ll have to fill in the question mark yourself, of course. Without looking at your system, I have no idea what it should be.
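
Once you’ve run it, you can double-check that the new cap stuck with a quick look at the workload group DMV:

SELECT
    wg.name,
    wg.request_max_memory_grant_percent
FROM sys.dm_resource_governor_workload_groups AS wg;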

If you’d like help with that, hit the link below to schedule a sales call.

Thanks for reading!


Common SQL Server Consulting Advice: Setting Min Server Memory And Max Server Memory

The Max For The Minimum


I’ve written several hundred thousand times about how important memory is to SQL Server, and to most databases in general. But once you have it, how do you make the most of it?

Well, there’ll be a few posts in this series that cover different ways of doing that, but we’re gonna start with some basic ones that I see folks tinker with in all the wrong ways.

If you don’t know how to change these settings, here are some example commands. Note that the max server memory command is supplied with -1, which will throw an error.

Why? Because this is highly dependent on Local Factors©

EXEC sys.sp_configure
    @configname = 'show advanced options',
    @configvalue = 1;
RECONFIGURE;

EXEC sys.sp_configure
    @configname = 'min server memory (MB)',
    @configvalue = 0;
RECONFIGURE;

EXEC sys.sp_configure
    @configname = 'max server memory (MB)',
    @configvalue = -1;
RECONFIGURE;

Let’s figure out those local factors!

First, let’s get a stupid query to see what you have things set to:

SELECT
    x.name, 
    x.value_gb, 
    x.value_in_use_gb, 
    x.total_physical_memory_gb,
    percent_of_total_memory = 
        (x.value_in_use_gb / x.total_physical_memory_gb) * 100
FROM 
    (
    SELECT
        c.name,
        value_gb = 
            CONVERT
            (
                bigint,
                c.value
            ) / 1024,
        value_in_use_gb = 
            CONVERT
            (
                bigint,
                c.value_in_use
            ) / 1024,
        dosm.total_physical_memory_gb     
    FROM sys.configurations AS c
    CROSS JOIN
    (
        SELECT 
            total_physical_memory_gb = 
                CEILING
                (
                    dosm.total_physical_memory_kb / 1024. / 1024.
                )
        FROM sys.dm_os_sys_memory AS dosm
    ) dosm
    WHERE c.name IN 
    (
        N'min server memory (MB)',
        N'max server memory (MB)'
    )
) AS x;

This will tell you what you have min and max server memory set to, what the total physical memory in the server is, and what percent of the total memory each setting is.

Now that you have those numbers in mind, let’s talk about them.

Min Server Memory


This section can be nice and short: If you have this set to a number other than zero (which is the default, don’t play with me on this one), someone either changed the wrong setting, or took all their goofy pills.

Or both. One thing leads to another, as they say.

There are some weird cases where you might see 16, which shows up on some servers running 64bit versions of SQL Server.

If you see an 8 here, that means you’ve temporarily spawned in the wrong timeline and you need to speak to your handler about being sent back to when 32bit software was acceptable.

The worst case here is having min server memory set equal to max server memory, which prevents SQL Server from using dynamic memory, and can cause all sorts of weird issues. Bottom line: don’t do it.

If the same value is specified for both min server memory and max server memory, then once the memory allocated to the SQL Server Database Engine reaches that value, the SQL Server Database Engine stops dynamically freeing and acquiring memory for the buffer pool.

If you have weird stuff going on with your server, like the plan cache throwing up constantly, queries that are struggling with memory grants, or other oddities, you should check to make sure min server memory is not set to max server memory.

Max Server Memory


Okay, let’s make this creakin’ easy, too. If the percent_of_total_memory column is over 100% then we’re back to the goofy pills theory of database administration.

If it’s a number between 70 and 90%, you’re probably in okay shape. Any lower, and you’re probably doing something asinine like stacking SQL Server instances and we’re back to the goofy pills theory of database administration.

But look, there’s all sorts of stuff you have to think about when you set max server memory in some situations:

  • Is SSAS, SSIS, or SSRS installed?
  • Are you running a workload with column store and/or batch mode components?

I’ve never opened up any SS*S component, so I’m not going to tell you how much memory you should set aside for them. I also don’t want to get in the habit of giving the wrong advice, either. You should install these things on their own server where they can use their own local resources and be much less of a bother to your production workload.

The middle point is becoming more and more of an issue though. Column store indexes and Batch Mode query execution are… different about memory.

You’ll either need to over-provision memory for those workloads, or set max server memory to a lower number than you usually would to accommodate them.

Pacer


That’s recap backwards, in case you were wondering.

Hopefully this post gives you a useful query to look at memory settings, and how they compare to the memory your SQL Server has in it.

Tomorrow, we’ll talk about Lock Pages In Memory. It’s one of my favorite settings.

Thanks for reading!


Does My SQL Server Need More Memory?

Work It


Memory is S-Tier crucial for most workloads to run reliably fast. It’s where SQL Server caches data pages, and it’s what it gives to queries to process Sorts and Hashes (among other things, but these are most common).

Without it, those two things, and many other caches, would be forced to live on disk. Horrible, slow disk. Talk about a good way to make databases less popular, eh?

With no offense to the SAN administrators of the world, I consider it my sworn duty to have databases avoid your domain as much as possible.

In this post, we’ll talk about how to figure out if your SQL Server needs more memory, and if there’s anything you can do to make better use of memory at the same time.

After all, you could be doing just fine.

(You’re probably not.)

Tale Of The Wait Stats


You can look at wait stats related to memory and query performance by running sp_PressureDetector.

It’ll give you some details about wait stats that relate to CPU and memory pressure. You wanna pay attention to the memory and disk waits, here. I had to squish it a little, but if you’re unfamiliar you can use the “description” column to better understand which ones to pay attention to.

[Screenshot: SQL Server wait stats]

Some important metrics to note here:

  • How do wait times relate to server uptime?
  • How long on average do we wait on each of these?

This won’t tell the whole story, of course, but it is a reasonable data point to start with. If your workload isn’t a 24×7 slog, though, you might need to spend more time analyzing waits for queries as they run.
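
If you don’t have sp_PressureDetector handy, a bare-bones version of that check looks something like this sketch; it compares a few memory-and-disk-related waits to how long the server has been up.

SELECT
    dows.wait_type,
    dows.waiting_tasks_count,
    wait_time_hours = 
        dows.wait_time_ms / 1000. / 60. / 60.,
    avg_wait_ms = 
        dows.wait_time_ms * 1. / NULLIF(dows.waiting_tasks_count, 0),
    percent_of_uptime = 
        (dows.wait_time_ms / 1000.) * 100. /
            DATEDIFF(SECOND, dosi.sqlserver_start_time, SYSDATETIME())
FROM sys.dm_os_wait_stats AS dows
CROSS JOIN sys.dm_os_sys_info AS dosi
WHERE dows.wait_type IN 
(
    N'PAGEIOLATCH_SH', 
    N'PAGEIOLATCH_EX',
    N'RESOURCE_SEMAPHORE', 
    N'RESOURCE_SEMAPHORE_QUERY_COMPILE'
)
ORDER BY dows.wait_time_ms DESC;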

In this example, it’s my local SQL instance, so it hasn’t been doing much work since I restarted it. Sometimes, you gotta look at what currently running queries are waiting on.

For that, go grab sp_WhoIsActive. If you see queries constantly waiting on stuff like this, it might be a sign you need more memory, because you have to keep going out to disk to get what queries need to use.

[Screenshot: SQL Server wait stats (telling myself)]

It could also be a sign of other things, like queries and indexes that need tuning, but if it’s sustained like this then that’s not entirely likely.

It’s much more likely a memory deficiency, but it’s up to you to investigate further on your system.

How Is SQL Server Using Memory Now?


Current memory utilization can be a good way to find out if other things are using memory and taking valuable space away from your buffer pool.

A lot of folks out there don’t realize how many different things SQL Server has to share memory across.

And, hey, yeah, sp_PressureDetector will show you that, too. Here’s a “normal” run:

SQL Server’s buffer pool is uninfringed upon by other consumers! Great. But sometimes queries ask for memory grants, and that’s where things can get perilous.

[Screenshot: SQL Server memory clerks (i feel good)]

You may sometimes see Ye Olde First Memory Bank Of Motherboard loan out a swath to one or more queries:

[Screenshot: SQL Server memory clerks (dramarama)]

The difference here? The buffer pool is reduced by ~9GB to accommodate a query memory grant.

sp_PressureDetector will show you the queries doing that, too, along with query plans.

[Screenshot: sp_WhoIsActive output (everyone is gone)]

It’ll also show you memory available in resource pools for granting out to queries. On this server, Max Server Memory is set to 50GB.

If you’re shocked that SQL Server is willing to give out 37GB of that to query memory grants, you haven’t been hanging around SQL Server long enough.

And you wonder why I worry about memory!

Does SQL Server Need More Memory?


If your queries are constantly waiting on:

  • Reading pages from disk (PAGEIOLATCH_…)
  • Query memory grants (RESOURCE_SEMAPHORE)
  • Compiling query plans (RESOURCE_SEMAPHORE_QUERY_COMPILE)

Then there’s a pretty good chance that it does, especially if data just plain outpaces memory by a good margin (like 3:1 or 4:1 or more).

You also have some options for making better use of your current memory, too.

  • Check critical queries for indexing opportunities  (there may not always be a missing index request, but seasoned query tuners can spot ones the optimizer doesn’t)
  • Apply PAGE compression to existing row store indexes to make them smaller on disk and in memory
  • Check the plan cache for queries asking for large memory grants, but not using all of what’s granted to them

You can check the plan cache using a query like this. It’ll look for queries that ask for over 5GB of memory, and don’t use over 1GB of it.

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE (deqs.max_grant_kb - deqs.max_used_grant_kb) > 1024.
    AND   deqs.max_grant_kb > 5242880.
    ORDER BY unused_grant DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY u.unused_grant DESC
OPTION (RECOMPILE, MAXDOP 1);

This will get you the top (up to!) 100 plans in the cache that have an unused memory grant, ordered by the largest difference between grant and usage.

What you wanna pay attention to here:

  • How old the plan cache is: if it’s not very old, you’re not getting the full picture
  • How big the memory grants are: by default, the max is ~25% of max server memory

Controlling Memory Grants


If you’re looking for ways to control memory grants that don’t involve a bunch of query and index tuning, you have a few options:

  • Resource Governor: Enterprise Edition only, and usually applies to the whole workload
  • MIN_GRANT_PERCENT and MAX_GRANT_PERCENT query hints: You usually wanna use both to set a proper memory grant, just setting an upper level isn’t always helpful
  • Batch Mode Memory Grant Feedback: Requires Batch Mode/Columnstore, only helps queries between executions, usually takes a few tries to get right

For Resource Governor, you’ll wanna do some analysis using the query in the previous section to see what a generally safe upper limit for memory grants is. The more memory you have, and the higher your max server memory is, the more insane 25% is.

[Screenshot: SQL Server Resource Governor (signs and numbers)]

Again, just be cautious here. If you change this setting based on a not-very-old plan cache, you’re not gonna have a clear picture of which queries use memory, and how much they use. If you’re wondering why I’m not telling you to use Query Store for this, it’s because it only logs how much memory queries used, not how much they asked for. It’s pretty ridiculous.

After you make a change like this, or start using those query hints, you’ll wanna do some additional analysis to figure out if queries are spilling to disk. You can change the query above to something like this to look at those:

WITH 
    unused AS
(
    SELECT TOP (100)
        oldest_plan = 
            MIN(deqs.creation_time) OVER(),
        newest_plan = 
            MAX(deqs.creation_time) OVER(),
        deqs.statement_start_offset,
        deqs.statement_end_offset,
        deqs.plan_handle,
        deqs.execution_count,
        deqs.max_grant_kb,
        deqs.max_used_grant_kb,
        unused_grant = 
            deqs.max_grant_kb - deqs.max_used_grant_kb,
        deqs.min_spills,
        deqs.max_spills
    FROM sys.dm_exec_query_stats AS deqs
    WHERE deqs.min_spills > (128. * 1024.) 
    ORDER BY deqs.max_spills DESC
)
SELECT      
    plan_cache_age_hours = 
        DATEDIFF
        (
            HOUR,
            u.oldest_plan,
            u.newest_plan
        ),
    query_text = 
        (
            SELECT [processing-instruction(query)] =
                SUBSTRING
                (
                    dest.text, 
                    ( u.statement_start_offset / 2 ) + 1,
                    (
                        ( 
                            CASE u.statement_end_offset 
                                 WHEN -1 
                                 THEN DATALENGTH(dest.text) 
                                 ELSE u.statement_end_offset 
                            END - u.statement_start_offset 
                        ) / 2 
                    ) + 1
                )
                FOR XML PATH(''), 
                    TYPE
        ),
    deqp.query_plan,
    u.execution_count,
    u.max_grant_kb,
    u.max_used_grant_kb,
    u.min_spills,
    u.max_spills,
    u.unused_grant
FROM unused AS u
OUTER APPLY sys.dm_exec_sql_text(u.plan_handle) AS dest
OUTER APPLY sys.dm_exec_query_plan(u.plan_handle) AS deqp
ORDER BY u.max_spills DESC
OPTION (RECOMPILE, MAXDOP 1);

Small spills aren’t a big deal here, but you’ll definitely wanna pay attention to larger ones. This is set to find ones that are over 1GB, which is still pretty small, but could be meaningful.

If you notice a lot more queries spilling in a substantial way, you may have capped the high end of query memory grants too low.

Recap


Memory is something that I see people struggle to right-size, forecast, and understand the physics of in SQL Server. The worst part is that hardly anything in this post applies to Standard Edition, which is basically dead to me.

The main things to keep an eye on are:

  • Wait stats overall, and for running queries
  • Large unused memory grants in the plan cache
  • Size of data compared to size of memory

If you need help with this sort of thing, hit the link below to drop me a line about consulting.

Thanks for reading!


A Resource Governor Request: Cap Total Query Memory Grants (Without Capping Other Stuff)

Easy Peas


I know what you’re gonna say: You’re gonna say, Erik, you can do this by setting the Maximum Memory Percent for each and every resource pool other than the internal one.

And I’ll tell you something wacky: That limits more than just total query memory grants, and all I want is an easy and straightforward way to tell SQL Server that I don’t want it to give up huge swaths of my buffer pool to query memory grants.

While the Memory Grant Percent setting makes it really easy to cap the total memory grant a single query can ask for, nothing does a singular job of controlling how much total memory queries can ask for as a whole, without also stifling other caches that contribute to Stolen Server Memory, like the plan cache. A very big thank you to LMNOP(b|t) for helping me figure that out.

The other downside is that you’d have to set that cap for each pool, and that’s exhausting. Tiresome. Easy to get wrong. It’d be a whole lot easier and cleaner to set that globally, without also nerfing a bunch of other potentially useful caches.

Thanks for reading!


Signs You Need More Memory In Your SQL Server

Guessotron


Most of these signs have to do with wait stats. One could venture out and say that if you have way less memory than data, you need more memory, but… If the server is sitting around bored, who cares?

If we’re going to spend money on memory, let’s make sure it’ll get used. When I’m talking to people with performance problems that memory would solve, here are some of the top reasons.

You’re In The Cloud Where Storage Sucks

Okay, okay, storage can totally suck other places, too. I’ve seen some JBOD setups that would make you cry, and some of them weren’t in the cloud. Where you need to differentiate a little bit here is that memory isn’t going to help slow writes directly. If adding a bunch more memory frees up some network bandwidth for writes, by serving reads from the buffer pool instead, it might help indirectly.

Look, just avoid disk as much as possible and you’ll be happy.

You’re Using Column Store And/Or Batch Mode

Good column store compression can often rely on adequate memory, but you also need to account for the much larger memory grants that batch mode queries ask for. As more and more workloads move towards SQL Server 2019 and beyond, query memory needs are going to go up because Batch Mode On Row Store will become more common.

You’re Waiting On RESOURCE_SEMAPHORE A Lot

This wait shows up when a bunch of queries are contending for memory grants, but SQL Server has given out all it can. If you run into these a lot, it’s a pretty good sign you need more memory. Especially if you’ve already tuned queries and indexes a bunch, or you’re dealing with a vendor app where they refuse to fix anything.

Other things that might help? The MAX_GRANT_PERCENT hint or Resource Governor.
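
Here’s where that hint goes, reusing a query from earlier in this series; the 10 is just an example number, not a recommendation.

SELECT TOP (1000)
    p.Body
FROM dbo.Posts AS p
ORDER BY p.Score DESC
OPTION(MAX_GRANT_PERCENT = 10);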

You’re Waiting On RESOURCE_SEMAPHORE_QUERY_COMPILE A Lot

This is another “queue” wait, but it’s for query compilation rather than query execution. Having more memory can certainly help this quite a bit, but so can simplifying queries so that the amount of memory SQL Server has to throw at compiling them chills out a little. You can start by reconsidering those views nested 10 levels deep and the schema design that leads you to needing a 23 table join to construct one row.

You’re Waiting On PAGEIOLATCH_SH Or PAGEIOLATCH_EX A Lot

These waits show up when data pages your query needs aren’t already there. The more you see these, the more latency you’re adding to your workload by constantly shuffling out to disk to get them. Of course, there’s other stuff you can do, like clean up unused and overlapping indexes, compress your indexes, etc. But not everyone is comfortable with or able to do that.
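
If you do want to try the compression route, it’s worth estimating the savings before you commit to a rebuild. A rough sketch, with a made-up index name:

/*Estimate how much PAGE compression would save first*/
EXEC sys.sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'Posts',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

/*Then rebuild the index you care about with PAGE compression*/
ALTER INDEX ix_Posts_OwnerUserId
    ON dbo.Posts
    REBUILD WITH (DATA_COMPRESSION = PAGE);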

Thanks for reading!


When Query Spills Cause tempdb Contention And Performance Issues In SQL Server

Temptables


I was helping a client troubleshoot a scalability issue recently, where during one point in their workload tempdb went out of control.

What was puzzling them was that nothing in the stored procedure did anything with temporary objects, cursors, or any of the other obvious things that go to el baño público. And the query plan itself was a pretty simple little loopy thing.

It wasn’t until I grabbed a quick sample of actual execution plans with Extended Events that the problem became more obvious.

Despite the fact that the query was only passing around 7 rows, we were getting a one row estimate, and there was a spilling Sort operator.

Chance Of Cloudballs


There were a couple compounding issues here, of course, like that their database is hosted in the cloud.

It doesn’t matter which cloud you talk about, this is a problem they all have and I see people face constantly.

I/O sucks, up there.

It was a lot harder to get this to repro on my local hardware, because I have very nice drives.

Let’s take a look!

Bleak


If I run this query, it’ll spill a little bit. I have the MAXDOP 1 hint on there to artificially slow it down. Without that, the fast query and I/O make the contention harder to hit.

SELECT TOP (1000) 
    u.Id 
FROM dbo.Users AS u
ORDER BY u.Reputation
OPTION(MAXDOP 1);

[Screenshot: SQL Server query plan (just like me)]

I stuck it in a stored procedure to make calling it from ostress a little bit easier.

No Return


After running for a bit, eventually we hit GAM contention:

[Screenshot: SQL Server tempdb contention (tuesday)]

We can see all the classic signs of tempdb contention, arising from spills.

It’s worth noting that enabling in-memory tempdb seems to alleviate this issue for the most part. I’d show you a picture of that, but it’s not worth 1000 words.

Fixing The Spill


Normally, I wouldn’t bother fixing a spill of that size, but in this case it was worthwhile because it was harming concurrency.

The quick fix was to add a MIN_GRANT_PERCENT hint. The long-term fix is to add an index to support the Sort operation.

Good ol’ indexes, putting data in order.
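
Here’s roughly what both fixes looked like; the grant percent and the index name are placeholders rather than recommendations.

/*Short term: guarantee the spilling query a bigger minimum grant*/
SELECT TOP (1000) 
    u.Id 
FROM dbo.Users AS u
ORDER BY u.Reputation
OPTION(MAXDOP 1, MIN_GRANT_PERCENT = 5);

/*Long term: an index that hands the Sort pre-ordered data*/
CREATE INDEX ix_Users_Reputation
    ON dbo.Users (Reputation);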

There are many reasons why a spill may occur. Poor estimates are quite common, but the reason for that poor estimate has about a billion potential root causes.

Even if your query gets enough memory, sometimes you may still see spills because certain operators had to share memory.

It’s a wild world.

Thanks for reading!


How Do I Estimate How Much Memory My SQL Server Needs?

It’s Brand New


If you’re looking to scope out hardware for SQL Server, whether it’s in the cloud, or… not in the cloud, you’re gonna wanna get memory right.

Lots of people complain about SQL Server being a memory hog, without pausing to reflect on the fact that they’re data hogs.

You don’t like SQL Server taking up all that memory? Drop all those big databases.

Problem solved.

Considerate


There are a number of things you can look at on your current server that tie in to how much memory you’ll need in the future:

  • Wait Stats
  • Size of data
  • Current indexing
  • Data growth rate

Waits

I have wait stats there because the first thing you should look at to see if you have adequate memory is PAGEIOLATCH_** waits. If they’re more than 10% of your server’s uptime, you might be losing a whole bunch of query performance to getting the data you need into memory. Of course, from here you need to monitor when those waits are happening to figure out if they’re affecting critical user queries, or just happening overnight during maintenance.

Sizes

The size of your data is going to depend on how big your tables are, and how many indexes you have on them. If you have a lot of unused or duplicative indexes, you’re going to have a much bigger database than is necessary. That’s going to inflate the amount of memory you currently “need”, because all of those cruft indexes are going to be competing for space in the buffer pool.
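
A rough way to size that up is to compare how big each database’s data files are with how much of each database is currently sitting in the buffer pool. This is just a sketch, and the buffer pool side only reflects what’s cached right now:

/*Data file size per database*/
SELECT
    database_name = DB_NAME(mf.database_id),
    data_file_gb = 
        SUM(mf.size) * 8. / 1024. / 1024.
FROM sys.master_files AS mf
WHERE mf.type_desc = N'ROWS'
GROUP BY mf.database_id
ORDER BY data_file_gb DESC;

/*Buffer pool usage per database*/
SELECT
    database_name = DB_NAME(dobd.database_id),
    cached_gb = 
        COUNT_BIG(*) * 8. / 1024. / 1024.
FROM sys.dm_os_buffer_descriptors AS dobd
GROUP BY dobd.database_id
ORDER BY cached_gb DESC;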

Indexes

An easy first step to making the best possible use of the RAM you currently have is to make sure you have only the most beneficial indexes in place. Once you’ve got that done, you can go even further by looking at compression. Compressed indexes squish things down on disk and in the buffer pool.

Growths

Some databases have predictable growth patterns based on regular usage. It’s up to you to trend that stuff for the most part.

There’s a disk usage report under each database’s standard reports that you can get to with a right click, but it only has growths that are currently in the standard trace file. They could have aged out by the time you go looking for them.

Another idea would be to look at backup sizes over the past year or so to see how they’ve grown.
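
msdb keeps that history for you, as long as you haven’t cleaned it out. A sketch that trends full backup sizes by month over the last year:

SELECT
    bs.database_name,
    backup_month = 
        DATEFROMPARTS
        (
            YEAR(bs.backup_finish_date), 
            MONTH(bs.backup_finish_date), 
            1
        ),
    largest_backup_gb = 
        MAX(bs.backup_size) / 1024. / 1024. / 1024.
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D' /*full backups only*/
AND   bs.backup_finish_date >= DATEADD(YEAR, -1, SYSDATETIME())
GROUP BY 
    bs.database_name,
    DATEFROMPARTS
    (
        YEAR(bs.backup_finish_date), 
        MONTH(bs.backup_finish_date), 
        1
    )
ORDER BY 
    bs.database_name,
    backup_month;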

Other databases are unpredictable based on acquisitions, large customer onboarding, big sales, sudden success, etc.

Of course, if you haven’t tended to indexes or compression like I listed above, these trends may not hold up well compared to after doing them. This is something I’d do last, after taking care of the index stuff.

All This And More


An additional consideration that adds quite a bit of complication is query memory grants.

The difficulty here is twofold, because you may have query and index tuning to do to reduce the memory grants queries ask for, and when memory pressure arises on a server the plan cache (where most people wanna go look for queries asking for large memory grants) will be getting cleared out constantly. That can make tracking them hard.

I know, you’re thinking that Query Store is awesome for this, but it’s not. It only tracks memory that queries used, not what they were granted. A query can ask for a huge memory grant, only use a small fraction of it, and there’s no sign of that in Query Store. Brick and mortar, indeed.

Most people aren’t ready to truly speculate on hardware needs when I talk to them, but they can get there. If you want a basic guess? Try to get enough memory to cache 50-75% of your server’s data in memory. That should get you enough to cache what you need, deal with memory growth, and deal with memory grants.

And you know, you’ll probably wanna pad that a little bit if you’re going to the cloud, where storage gear and networking is basically electric porridge.

Thanks for reading!
