sp_QuickieStore: Find Your Worst Performing Queries During Working Hours




Thanks for watching!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

Catch the latest episode of the @sqldatapartners podcast, guest starring yours truly

Ziggy Played Guitar


I was recently honored to be a guest on the SQL Data Partners podcast. We talked about SQL Server performance tuning scripts, consulting, and responsible levels of meat.

Remember to rate and like and subscribe and all that jazz.

Thanks for listening!


Join me at DataTune in Nashville, March 8-9 2024

Spring Training


This March, I’ll be presenting my full day training session The Foundations Of SQL Server Performance Tuning.

All attendees will get free access for life to my SQL Server performance tuning training. That’s about 25 hours of great content.

Get your tickets here for this event, taking place March 8th-9th, 2024 at Belmont University – Massey Center, 1900 Belmont Blvd, Nashville, TN 37212.

Here’s what I’ll be presenting:

The Foundations Of SQL Server Performance Tuning

Session Abstract:

Whether you want to be the next great query tuning wizard, or you just need to learn how to start solving tough business problems at work, you need a solid understanding of not only what makes things fast, but also what makes them slow.

I work with consulting clients worldwide fixing complex SQL Server performance problems. I want to teach you how to do the same thing using the same troubleshooting tools and techniques I do.

I’m going to crack open my bag of tricks and show you exactly how I find which queries to tune, indexes to add, and changes to make. In this day long session, you’re going to learn about hardware, query rewrites that work, effective index design patterns, and more.

Before you get to the cutting edge, you need to have a good foundation. I’m going to teach you how to find and fix performance problems with confidence.

Event Details:

Get your tickets here for this event!


Indexing SQL Server Queries For Performance: Fixing Predicate Key Lookups

Make Things Fast


Our very lofty goal as query tuners is to make the time from request to result shorter. Sometimes that requires a bit of sacrifice on the server side:

  • Queries going parallel, using more CPU to reduce wall clock time
  • Queries using more memory to avoid spilling to disk
  • Queries using space in tempdb via a #temp table to store results
  • Queries having better indexes available to them for searching

Just about every query tuning technique requires sacrificing some resource on the server (and sometimes more than one resource) to achieve our very lofty goal.

The only exclusion is fixing some miserably-written query to bring it back to sensibility. That requires only a sacrifice on your side.

Indexes are not free rides, of course, though much of the cost attributed to them is overblown. I’m not here to give you license to add 100 indexes to a table, but I am giving you permission to judge indexes by quality (read vs write usage) over quantity.
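
If you want a rough way to do that judging, a sketch like this against sys.dm_db_index_usage_stats can help. The counters reset when SQL Server restarts, so treat the numbers as a sample, not gospel.

/* A rough read vs. write report for nonclustered indexes.
   Counters live in sys.dm_db_index_usage_stats and reset on restart. */
SELECT
    table_name = CONCAT(SCHEMA_NAME(o.schema_id), N'.', o.name),
    index_name = i.name,
    reads = ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0),
    writes = ISNULL(ius.user_updates, 0)
FROM sys.indexes AS i
JOIN sys.objects AS o
  ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats AS ius
  ON  ius.object_id = i.object_id
  AND ius.index_id = i.index_id
  AND ius.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
AND   i.type_desc = N'NONCLUSTERED'
ORDER BY writes DESC, reads;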

Anyway, let’s look at some of my favorite index design patterns when I’m tuning queries for clients.

Many of the issues are going to be framed by parameter sensitivity in order to exacerbate them. You may see similar bad judgement without parameter sensitivity, of course. Either the optimizer screws up some cardinality estimation, or you do something that screws up cardinality estimation for the optimizer. Common causes of that are local variables, table variables, non-SARGable predicates, overly complicated queries, etc.
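
To make the self-inflicted part concrete, here's a minimal sketch (not from the original post) using the same dbo.Posts table that shows up later. The literal version can use the statistics histogram; the local variable version gets a fixed guess; the ISNULL version can't seek at all.

/* Literal predicate: the optimizer can use the statistics histogram */
SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.Score >= 1000;

/* Local variable: the value isn't known at compile time,
   so inequalities fall back to a fixed ~30% row estimate */
DECLARE @Score integer = 1000;

SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.Score >= @Score;

/* Non-SARGable: wrapping the column in a function
   prevents an index seek on Score entirely */
SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE ISNULL(p.Score, 0) >= 1000;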

I don’t write queries that do stupid things, except to show you when things are stupid.

In this post, I’m going to cover fixing predicate key lookups! Because why not? We’re already here. Stuck together.

Forever.

Key Lookups In General


Key lookups represent a choice made by the optimizer between:

  • Clustered index scan
  • Nonclustered index seek/scan + Clustered index lookup

Lookups can be quite useful for avoiding very wide nonclustered indexes, but the optimizer is heavily biased against them. It does not like random I/O.

Typically lookups get chosen when a relatively small number of rows are expected to be retrieved from the nonclustered index.

Bad estimates in general, and self-inflicted bad estimates from the list above, can contribute to lookups being chosen inappropriately.

Also like I said above, parameter sensitivity is a very easy way to demonstrate the problem.

Before we go on though, let’s talk about the two things lookups can be assigned to do:

  • Retrieve columns from the clustered index that aren’t part of the nonclustered index definition
  • Evaluate predicates in the clustered index that aren’t part of the nonclustered index definition

Often, fixing lookups only to avoid retrieving columns is a last resort for me, because it can mean greatly expanding the number of columns included in a nonclustered index.

But fixing lookups that have predicates in them is quite appealing to me, because a good index should support locating and filtering as much data as possible for important queries.

Okay, now we can go on.

Predicate Key Lookups: Good


Let’s start with this index:

CREATE INDEX
    p
ON dbo.Posts
    (Score, CreationDate, PostTypeId)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);
GO

And let’s start with this procedure:

CREATE OR ALTER PROCEDURE
    dbo.PredicateKeyLookup
(
    @Score integer,
    @CreationDate datetime,
    @PostTypeId integer,
    @OwnerUserId integer
)
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
   
    SELECT TOP (100)
        p.*
    FROM dbo.Posts AS p
    WHERE p.Score >= @Score
    AND   p.CreationDate >= @CreationDate
    AND   p.PostTypeId = @PostTypeId
    AND   p.OwnerUserId = @OwnerUserId
    ORDER BY
        p.Score DESC,
        p.CreationDate DESC,
        p.Id DESC;
END;
GO

I know, selecting star is a bad practice, but I’m going to share something shocking with you (and I’ll blog about this later, too): Lookups are assigned the same optimizer cost unit, whether you select one column, or 1000 columns.

The thing to note right here, is that the stored procedure filters on all the columns in our index, plus it has an additional filter on the OwnerUserId column. That (along with the select list columns) is the work our lookup will have to do.

Starting with a query that selects a small number of rows:

EXEC dbo.PredicateKeyLookup
    @Score = 0,
    @CreationDate = '20131101',
    @PostTypeId = 1,
    @OwnerUserId = 39677;

The query plan looks like this, and is very fast. Yay. We smart.

[sql server query plan: easy lookup]

Zooming in on the lookup a bit, here’s what it’s doing:

[sql server query plan]

  1. The clustered primary key on the Posts table is Id, which the nonclustered index inherits as a final key column (the seek predicate), so this is how we match rows across the two indexes
  2. The clustered index has all these columns in it that are not part of the nonclustered index definition, so we can retrieve any rows matched via the seek predicate
  3. The clustered index has OwnerUserId in it as well, which is also not part of the nonclustered index definition, so we can additionally filter any rows matched via the seek predicate

Wordy, I know, for bullet points, but the goal of these posts is to describe things to developers who may have never encountered such things.

Predicate Key Lookups: Bad


If we change our execution parameters to ones that are not selective at all, we’ll use the same execution plan, but speed will get a bit worse.

EXEC dbo.PredicateKeyLookup
    @Score = 0,
    @CreationDate = '20080101',
    @PostTypeId = 2,
    @OwnerUserId = 22656;

To run down what changed:

  • CreationDate will fetch several more years of posts (2008 vs 2013)
  • We’re looking for answers (PostTypeId 2) from questions (PostTypeId 1)
  • The OwnerUserId is now a much more active user on the site

Now, rather than finishing in ~300ms, the query takes ~3.5 seconds.

[sql server query plan: degrading]

The majority of the time is spent looping and looking up, which is not what we want at all. Again, indexes should help us find and filter as much data as possible in one place.

You can think of each index as a separate data source. If you need two data sources on one table to filter things, you’re leaving yourself open to performance issues.

Predicate Key Lookups: Fixing


In this case, we can fix things well enough by changing the index to this:

CREATE INDEX
    p
ON dbo.Posts
    (Score, CreationDate, PostTypeId, OwnerUserId)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE, DROP_EXISTING = ON);
GO

Now, either execution, in either execution order, is fast (enough).

[sql server query plan: good enough plan]

Now, for sure, with enough domain knowledge, and enough understanding of the queries, we could do things differently.

  • If we know most/all queries will filter on OwnerUserId/PostTypeId
  • We’re allowed to create a completely new index to help this query, if it’s important enough:

CREATE INDEX
    p
ON dbo.Posts
    (OwnerUserId, PostTypeId, Score, CreationDate)
WITH
    (SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE, DROP_EXISTING = ON);
GO

In this index, OwnerUserId and PostTypeId are the leading columns. The way b-tree indexes work, equality predicates on leading column(s) maintain sorting of any following columns.

Since we’re ordering by Score and CreationDate, this is important (for this query, anyway). Now, both plans look like this.

[sql server query plan: faster!]

Predicate Key Lookups: Thoughts


Often, fixing a predicate key lookup by adding the filtering column to the nonclustered index your query is already using is good enough.

Other times, to squeeze every last bit of performance out of things, we may need to make additional adjustments. Changing existing indexes in a way that rearranges key column order can be perilous.

While the query you’re working on may work way better, others may not be so happy with your tinkering. Again, this is where you need additional domain knowledge.

If you find yourself wanting to really change things like that, you may be safer creating a new index instead.

Thanks for reading!


Indexing SQL Server Queries For Performance: Common Questions

Over And Over


Some of the best questions I get from clients, conference attendees, and random emails are about how to design indexes.

A lot of developers out there have a rather foggy picture of exactly how indexes work. They’ve all seen phone books and drawings of B-Tree indexes, but some common things still escape them.

In this post, I’m going to talk about a few things like I’m speaking to someone who has never created a table before.

Even if you’ve created lots of tables and indexes, you may find some interesting points in here, if you pay close attention.

The topics I’m going to go over are:

  • Heaps
  • Clustered indexes
  • Nonclustered indexes
  • Index design concepts

Hopefully it helps to clear some things up!

What’s A Heap?


A heap is a table that doesn’t have a clustered index on it. It may have a nonclustered primary key, nonclustered indexes, constraints of all variety, foreign keys, etc.

Heaps are not indexes. Heaps are unordered data structures.

There are good reasons for some tables to be heaps, but most tables shouldn’t be. A good example is a staging table for ETL loads.

Dumping a bunch of data into a heap typically works faster because data doesn’t need to be sorted to conform to index key ordering. More indexes also means more transaction log writes.

I don’t want to get into minimal logging, because that can be a really tricky thing to get in many circumstances. If you want to read more about that, you can follow this link: Minimal Logging with INSERT…SELECT into Heap Tables
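
As a sketch of the staging pattern (table and names hypothetical), this is the kind of place a heap makes sense. Given the right recovery model and the other conditions from that link, the TABLOCK hint makes the insert eligible for minimal logging.

/* Hypothetical staging heap: no clustered index, so no sort on load */
CREATE TABLE dbo.Stage_Sales
(
    SaleId bigint NOT NULL,
    SaleDate datetime NOT NULL,
    Amount decimal(18, 2) NOT NULL
);

/* TABLOCK makes this eligible for minimal logging,
   given the right recovery model and other conditions */
INSERT dbo.Stage_Sales WITH (TABLOCK)
    (SaleId, SaleDate, Amount)
SELECT
    s.SaleId,
    s.SaleDate,
    s.Amount
FROM dbo.Sales AS s;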

Heaps behave differently in some surprising ways, though, when you update or delete (not truncate) rows from them:

  • Updates move rows around and leave pointers that need to be followed in their place
  • Deletes may remove all rows from data pages but not deallocate them from the table (unless you use a TABLOCK hint, or the delete is escalated to use a table lock naturally)

This isn’t something that happens in tables with clustered indexes on them, and these behaviors can have surprising impacts on query performance.

If you update variable length data in your heaps a whole bunch, you can end up with a lot of row movement, which will make later scans of the heap do a lot more reads to follow all those pointers to the location of the new rows.

Same thing with deletes. When you scan through a heap (because there’s no indexed data you can seek to), you may end up reading lots of empty pages to satisfy a complete scan of the data.

I don’t have a good explanation for why heaps are designed that way for updates. Perhaps it’s an optimization for large updates to not automatically split pages, which adds a bit of logging overhead. For deletes, though, I believe it’s so pages are at the ready for new data being loaded in without having to allocate new pages. Since heaps have no order, you can stick data anywhere.

One additional note here is that even with a TABLOCK hint, deletes may not immediately deallocate pages if you’re using Read Committed Snapshot Isolation, or Snapshot Isolation.
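
If you want to check a heap for that kind of damage, sys.dm_db_index_physical_stats will show you forwarded records and mostly-empty pages. A sketch, reusing the hypothetical staging table from above:

/* Heaps are index_id 0; forwarded_record_count needs SAMPLED or DETAILED */
SELECT
    ips.page_count,
    ips.forwarded_record_count,
    ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
(
    DB_ID(),
    OBJECT_ID(N'dbo.Stage_Sales'), /* hypothetical heap from earlier */
    0,
    NULL,
    'DETAILED'
) AS ips;

/* With TABLOCK, the delete can actually deallocate the empty pages */
DELETE dbo.Stage_Sales WITH (TABLOCK);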

In general, there’s not a lot of sense to leaving tables as heaps in SQL Server, at least for OLTP workloads.

What’s A Clustered Index?


A clustered index logically orders the data in your table by the clustered index key column(s). Physical ordering is a separate issue.

I often hear people ask if clustered indexes are a copy of the table, and the answer is… kind of, but only in the way that nonclustered indexes are also copies of the data. Adding a nonclustered index to a heap likewise makes a copy of the heap’s data.

However, a clustered index is not a copy of a heap.

In other words, a single table cannot exist as a clustered and heaped version of itself simultaneously. You are either clustered or heaped.

A good clustered index (often on an identity column) does not necessarily have to provide optimized search access to the data stored in it. Often, the choice is to optimize the insert workload.

To pick a good clustered index, you should usually follow the NUDES acronym:

  • Narrow: Numbers, dates, not string-ish
  • Unique: Identities or sequences are good for this
  • Distinct: If not unique, then as distinct as possible
  • Ever: Increasing (append-only is the goal here)
  • Static: Definitely not a column you ever update

The goal is to append unique, static values to the end of the list, that take up very few bytes so that intermediate (not leaf) pages can be more densely packed.

If you have a non-surrogate (natural) key that accomplishes this, great. If not, surrogate keys (again, identity columns) are a good choice to check these boxes. Clustered indexes do not have to be unique, nor do they have to be primary keys, but they are most often tightly coupled in SQL Server relational designs.
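
Put together, a hypothetical table that checks the NUDES boxes looks something like this: a narrow, unique, ever-increasing, static surrogate key as the clustered primary key.

/* Hypothetical: identity surrogate key as the clustered primary key */
CREATE TABLE dbo.Orders
(
    OrderId bigint IDENTITY(1, 1) NOT NULL, /* narrow, unique, ever-increasing, static */
    CustomerId integer NOT NULL,
    OrderDate datetime2(0) NOT NULL,
    Amount decimal(18, 2) NOT NULL,
    CONSTRAINT PK_Orders
        PRIMARY KEY CLUSTERED (OrderId)
);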

In most OLTP workloads, lacking a natural key to cluster on is fine, because you can create nonclustered indexes to optimize searches, joins, and other relational activities that benefit from ordering (group by, order by, partition by, etc.). If your OLTP queries end up scanning clustered indexes a lot, it’s usually a good sign that you haven’t designed nonclustered indexes well.

This assumes they have a where clause on them, of course.

What’s A Nonclustered Index?


Nonclustered indexes are separate copies of the table data, whether it’s clustered or heaped. On disk, in memory, in the transaction log, with different statistics, column ordering, and data pages assigned to them.

We’ll talk about some design strategies next, but the ideal usage for nonclustered indexes is to optimize data access, so your query’s where clause can locate data quickly. There are many other considerations and uses (for instance, supporting foreign keys, or backing up unique constraints), but if you’re very new to indexing, focus on helping your queries get to the data they’re searching for quickly.

A common misconception is that just because a column is in the key of an index, it will help the query be more efficient, regardless of what position in key column order it is, but that’s absolutely not true.

I like to think of key column order as sort of like following a recipe. Let’s say you’re following one where the start is a big piece of ~whatever~, and the end is cooked and cubed pieces of ~whatever~.

The initial steps in the recipe are:

  • Defrost ~whatever~
  • Cube ~whatever~
  • Cook ~whatever~

If you don’t defrost ~whatever~ first, cubing it is going to take a long time (forget skipping to cooking it). If you don’t cube ~whatever~, cooking it is going to take a long time.

This is a bit like the way index key column order works (though not a perfect metaphor). If you don’t access the leading key, accessing any following keys is less efficient.

Each column is a gatekeeper to the next.
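
Here’s a minimal, hypothetical sketch of that gatekeeper idea: with a two-key index, filtering on the leading key gets a seek, while filtering on the second key alone has to scan, because that column is only sorted within each value of the first.

/* Hypothetical two-key index */
CREATE INDEX u
ON dbo.Users
    (DisplayName, Reputation);

/* Seek: the leading key is filtered, so the b-tree can be navigated */
SELECT u.Id
FROM dbo.Users AS u
WHERE u.DisplayName = N'Alice';

/* Scan: Reputation is only sorted within each DisplayName,
   so filtering on it alone has to look at every row */
SELECT u.Id
FROM dbo.Users AS u
WHERE u.Reputation = 1000;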

Nonclustered Index Design Patterns


Clustered indexes are the foundation and frame of a table. Nonclustered indexes are where we go for specific purposes, to find specific things.

If we didn’t have things organized in smaller chunks, we’d end up running around the whole place looking for toilet paper after we found a toilet in the dining room.

It’s a situation no one wants to encounter. Your queries are no different. The less running around looking for things they have to do, the happier they are.

In most cases, the fastest way to reduce the time they spend looking for things is to create indexes that match the goals of your where clause.

There is a persistent and rather idiotic strain of advice that the most selective column should always come first. That’s likely true if the key of the index is unique, and you’re searching equality predicates for a single row.

It’s just that most queries I see aren’t doing that — queries that do that (something like a primary key lookup) already perform well — the types of queries I deal with are usually dealing with a ton of rows.

There’s a second persistent piece of goofiness that inequality predicates should always be indexed for after equality predicates. That’s certainly not true when a small range (say the last hour of data) matches very few rows, but an equality predicate (say a bit column) would match tons of rows.
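
A related sketch (not from the original post, and hypothetical) with two range predicates shows why mechanical ordering rules fall apart: only the leading key can be used as a seek predicate, so whichever range comes second gets evaluated as a residual against every row the seek returns.

/* Hypothetical: Score >= 100 matches millions of rows,
   but the last hour of CreationDate matches very few */

/* Seeks Score >= 100 (millions of rows),
   then checks CreationDate on each one as a residual predicate */
CREATE INDEX p_score_first
ON dbo.Posts
    (Score, CreationDate);

/* Seeks the tiny CreationDate range,
   then checks Score as a cheap residual */
CREATE INDEX p_date_first
ON dbo.Posts
    (CreationDate, Score);

SELECT COUNT_BIG(*)
FROM dbo.Posts AS p
WHERE p.Score >= 100
AND   p.CreationDate >= DATEADD(HOUR, -1, SYSDATETIME());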

One important thing to remember about nonclustered indexes is that they inherit things from both clustered and heaped tables: identifiers.

  • If your table is a heap, there’s a row identifier (RID) that ends up in your nonclustered indexes
  • If your table is clustered, the clustered index key(s) end up in your nonclustered indexes
    • For non-unique nonclustered indexes, they end up as a final key column
    • For unique nonclustered indexes, they end up as an included column

Tomorrow, we’ll look at some index design patterns in practice.

Thanks for reading!


Some Questions I’ve Answered Recently On Database Administrators Stack Exchange

Good Citizen


Here are some interesting questions that I’ve answered on dba.stackexchange.com:

I should probably get a real job.

Thanks for reading!


Free SQL Server Troubleshooting Stored Procedures

Giveaway


I’ve been doing a lot of work on all of my free SQL Server troubleshooting stored procedures lately.

If you haven’t used them, or haven’t even heard of them, now’s a good time to talk about what they are, what they do, and some of the newer features and functionality.

I hate to be one of those “bug fixes and performance improvements” people, but I am really happy with the way a lot of these are working.

It’s not that I’m not constantly looking for ways to improve, it’s just that for a lot of them there aren’t really “new” things I can go find and show you, or new ways to look at old things that I think would be helpful.

New things tend to come with new versions of SQL Server. If you don’t like it, tell Microsoft to ship more often. Ha ha ha.

For a full overview of what all of them do and all that, head to the README file.

For everyone else, feel free to follow along.

Fresh Install


I used to get really annoyed with myself, because it took a lot of clicking to install or update all the scripts.

Now, thanks to GitHub magic (actions), and a world-renowned super genius (Drew Furgiuele b|t, who also has an awesome podcast that you should listen to), you can get all updates in a single script.

Because I’m forgetful, I wanted this to happen every time I merged into the main branch without me having to remember to rebuild the main file.

That’s what this whole thing does.

What a gem, that Drew.

sp_HealthParser


I’m going to start by reintroducing a couple of the newer procedures that I released leading up to my PASS Precon.

The first one is sp_HealthParser, which digs deep into the system health extended event for all sorts of gruesome details about what’s going on in your SQL Server.

I’d been meaning to do this for a long time, and finally had the time and patience to write all the necessary XQuery to do it. Once I started writing it, I started kicking myself for not doing it sooner.

There’s so much great information in there:

  • Queries with significant waits
  • Waits by count
  • Waits by duration
  • Potential I/O issues
  • CPU issues
  • Memory issues
  • System health (duh) issues
  • Blocking
  • Deadlocking

The really cool thing about the wait stats is that they get logged in a way that you can see them over time, and see when your system was busy.

The most common way I execute it is like so:

EXEC dbo.sp_HealthParser
    @warnings_only = 1,
    @skip_locks = 1;

There are a number of sections in there that have a column which will flip to WARNING when something happens that your SQL Server doesn’t like, and a lot of the time I wanna focus in on just that.

I also usually skip the blocking and deadlocking sections, in favor of setting up specific extended events to monitor them.

Also because the blocked process report that it captures is missing a lot of details that the real blocked process report has in it.

That being said, if you’re looking at a server for the first time, you may want to see what’s in there.

sp_LogHunter


Another newer procedure is sp_LogHunter. The idea here is that SQL Server’s error log is a horrible window into your server’s soul, but searching through it is a nightmare.

  • You can only search for a single thing at a time
  • The results usually exclude a lot of useful information around any hits for your search
  • Often using the GUI is incredibly slow and involves a lot of clicking

What I wanted was a stored procedure that looked for a whole bunch of the awful stuff I’ve seen in error logs over the years, and the results to get returned in chronological order, all together.

That’s what this does. There are a bunch of configuration options, but I usually just fire it off to look through the last 7 or so days of log data. But you can choose specific time windows, look for custom message entries, and more.

EXEC dbo.sp_LogHunter;

sp_HumanEvents


Extended Events are great, except for all the XQuery you have to write to get data back in something that isn’t the cursed-by-old-and-new-gods GUI.

One of the hardest stored procedures I’ve ever written, sp_HumanEvents, does all that for you. Not for every extended event, but for the ones I found myself using most often with clients.

  • Query performance
  • Wait stats
  • Compiles
  • Recompiles
  • Blocking

This thing can do a ton of different stuff, but the most common thing I do with it is to troubleshoot slow stored procedures that clients have me working on.

In SSMS, there’s no way to only return some query plans, and a lot of stored procedures I end up working on have a lot of queries in them (some very fast, some very slow), and I want to focus on the slower ones.

To get set up to do that, this is what I run:

EXEC sp_HumanEvents
    @event_type = 'query',                   
    @query_duration_ms = 1000,               
    @session_id = N'58',                    
    @keep_alive = 1;

That will create an extended event that monitors a single session for any queries that run over 1 second.

From there, all you have to do is watch live data, and then run the procedure to get a ton of detail about them.

sp_HumanEventsBlockViewer


For years, I’ve been working on sp_BlitzLock to troubleshoot deadlocks in SQL Server. One day, after going through a lot of yuck to do the same thing with the blocked process report, I decided to write sp_HumanEventsBlockViewer.

It was originally intended as a companion script to look at the blocked process report extended event that sp_HumanEvents sets up, but it quickly became its very own island unto itself.

Now, you do need to enable the blocked process report, and you do need to set up an extended event target to capture it, but once you do the amount of detail you can get about blocking problems on your server is incredible.
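
Roughly, that setup looks like the sketch below: raise the blocked process threshold, then create an event session to catch the report. This is a minimal version with a ring buffer target; the README has the full, supported one.

/* Emit a blocked process report for anything blocked over 5 seconds */
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
GO

/* Minimal event session; the name matches the call below */
CREATE EVENT SESSION
    blocked_process_report
ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION
    blocked_process_report
ON SERVER
    STATE = START;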

To see how to do that, check out the README file. Once that’s done, just point this baby at it and let’er rip. Not RIP 🪦

EXEC dbo.sp_HumanEventsBlockViewer
    @session_name = N'blocked_process_report';

sp_PressureDetector


sp_PressureDetector, at least according to Google’s search reports, is my most popular script.

This one got a lot of work before my PASS Precon. Not to add stuff to it really, but to make the results as clear as possible.

The idea behind this one is to show you resource pressure currently happening on your SQL Server, and when possible, queries that are causing it.

  • CPU
  • Memory
  • Disk
  • tempdb

It will also give you a lot of good information about the server itself, like how much data is stored on it, and configuration details that most often matter for hardware.

You don’t have to do much else with it aside from hit F5.

EXEC dbo.sp_PressureDetector;

sp_QuickieStore


An ode to the dismal shortcomings of the Query Store GUI, sp_QuickieStore comes to the rescue.

By default, it gives you the 10 worst queries from the last 7 days by average CPU. There are about 100 ways to search for, include, and exclude certain queries from the results.

Seriously. Just look at all the parameters for that. You can find what hurts, research a stored procedure, search across all databases with Query Store enabled, and way more.

The thing that I kept running into, though, is that the default results would often include a lot of queries that weren’t pertinent. Overnight problems and whatnot.

For years I’ve been saying “anyone can find a slow query”, because that’s really easy. But you need to find the queries that matter to you.

A little before Christmas, I decided I should live by my own rules, and added a few new parameters.

Here’s how they work:

EXEC dbo.sp_QuickieStore
    @workdays = 1,
    @work_start = '9am',
    @work_end = '5pm';

When you use these, it will only show you queries that happen on weekdays (respecting local DATEFIRST settings), with configurable time frames.

By default, it looks from 9am to 5pm, and those hours will automagickally convert from your local time zone to UTC (because that’s how Query Store stores dates). But you can adjust that to your needs.

If you want to find everything that happens every weekday morning, afternoon, or all day, you can do that now.
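
If you’re curious what offset it has to account for, a quick sanity check like this shows your server’s local-to-UTC gap:

/* Query Store stores times in UTC; this shows the local offset */
SELECT
    local_time = SYSDATETIME(),
    utc_time = SYSUTCDATETIME(),
    offset_hours = DATEDIFF(HOUR, SYSUTCDATETIME(), SYSDATETIME());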

Time Flies


The first commit I made to sp_PressureDetector was back in December of 2019. Looking at the code now is a touch sentimental, and more than a touch embarrassing.

My formatting game, along with the small amount of detail returned, isn’t really anything I’m psyched about. But I’m very proud of what these have all turned into.

I get decent traffic to my repo. There’s a dbatools cmdlet to install my scripts. I get nice emails from folks out there who have found and solved problems with them, and I’ve solved hundreds of client problems with them.

It may all sound a little braggy, and I’m totally okay with that. I’m super proud of them these days.

Thanks for reading!



The Art Of The SQL Server Stored Procedure: Wrap Up And Final Thoughts

Thanks For Reading!


Here’s the full list of posts from the series:

I will eventually revisit these to record some material on them, and perhaps expand on it.

But this is good enough for now.

Follow On


By far, the most commented-on post in the series is the one about formatting. It’s mostly people continuing to be wrong about it, but it was interesting to note.

The general idea of the series was to teach developers about the types of things I always seem to be fixing and adjusting, so that I can hopefully fix really interesting problems in the future.

Of course, that all depends on folks finding these and reading them. If that were the general sway of the world, I’d probably never have been in business in the first place.

So if you’re here and reading this, you’re special.

Thanks for reading!


The Art Of The SQL Server Stored Procedure: Cursors And Loops

I Don’t See Nothin’ Wrong


You will, for better or worse, run into occasions in your database career that necessitate the use of loops and cursors.

While I do spend a goodly amount of time reworking this sort of code to not use loops and cursors, there are plenty of reasonable uses for them.

In most cases, I would prefer to see developers write a loop manually, because there are many difficult-to-understand cursor options that change cursor behavior, and can really mess with query performance.

Those are just a couple examples from my site, without getting into all the many other variations in cursor types.

One very important distinction in cursor types is what set of data the queries in them will see.

As an example, static cursors make a copy of the results of the select, and then use that copy (like a temporary table) to work off of. Dynamic cursors read the underlying data on each iteration.

Especially in cases where your cursors may modify data, and you expect changes to the data to be reflected in future iterations of the cursor, static cursors are likely not a good choice.

Likewise, if you’re already putting data into a temporary table for the cursor to work off of, there is likely no benefit to a static cursor type because you’re just making another copy of the temporary table’s contents.
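
Here’s a minimal, hypothetical sketch of that visibility difference. With STATIC, the cursor keeps fetching rows its own query no longer matches, because it read a private copy at OPEN time; swap in DYNAMIC and the update is seen, so the remaining rows get skipped.

CREATE TABLE #work
(
    id integer PRIMARY KEY,
    processed bit NOT NULL DEFAULT 0
);

INSERT #work (id)
VALUES (1), (2), (3);

DECLARE @id integer;

DECLARE c CURSOR
    LOCAL
    STATIC /*snapshots the result set when opened; try DYNAMIC instead*/
FOR
SELECT w.id
FROM #work AS w
WHERE w.processed = 0;

OPEN c;
FETCH NEXT FROM c INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    /*Everything is marked processed on the first pass...*/
    UPDATE #work SET processed = 1;

    /*...but STATIC still returns ids 2 and 3 from its copy*/
    FETCH NEXT FROM c INTO @id;
END;

CLOSE c;
DEALLOCATE c;
DROP TABLE #work;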

As I’ve said before, it’s up to you to test different options to see which works correctly and performs the best for your needs.

Also as I’ve said before, never trust the defaults. When you just write DECLARE CURSOR with no options specified, SQL Server may make a lot of bad decisions.

Appropriate Uses


In general, I’d expect many readers to understand when a loop or cursor is the only (or best) way to accomplish what you need.

I use them in several of my stored procedures because they’re the only sensible way to get things done.

Notably, in sp_LogHunter:

DECLARE
    c
CURSOR
    LOCAL /*Nothing outside of this needs to reference it*/
    SCROLL /*I want the FETCH FIRST command available to me*/
    DYNAMIC /*I don't want to make another copy of my temp table*/
    READ_ONLY /*Okay, perhaps this is over communicating*/
FOR
SELECT
    command
FROM #search;

The only real additional point I’d make here is that, much like with dynamic SQL, if you’re going to use these options, you’re doing yourself (and anyone else who may need to run or manage your code) a great service by building verbose debugging options and error handling into it.

Knowing things like:

  • Starting row counts
  • Iteration count/progress
  • Current parameter/variable values
  • Rows in play per iteration
  • Iteration timing

are all quite useful landmarks in your running code. They help you find logical errors, and show you which parts you may be getting stuck on from a performance point of view.

Also from sp_LogHunter:

IF @debug = 1
BEGIN
    RAISERROR('log %i of %i', 0, 1, @l_log, @h_log) WITH NOWAIT; /*Which log file I'm working on, out of how many I need to process*/
    RAISERROR('search %i of %i', 0, 1, @l_count, @t_searches) WITH NOWAIT; /*Which search I'm working on, out of how many I need to run*/
    RAISERROR('@c: %s', 0, 1, @c) WITH NOWAIT; /*The current search command syntax*/
END;

These are all things that helped me while writing it for reasons listed above.

You, like me, may be surprised where your loops take you when they finally sally forth.

Thanks for reading!
