Blog

Pen Testing SQL Server Stored Procedures

The Old Joke Goes


A security engineer walks into a bar and then

  • Runs into a bar.
  • Crawls into a bar.
  • Dances into a bar.
  • Flies into a bar.
  • Jumps into a bar.

And orders:

  • a beer.
  • 2 beers.
  • 0 beers.
  • 99999999 beers.
  • a lizard in a beer glass.
  • -1 beer.

When you’re designing and testing SQL Server stored procedures (or views or functions or queries), you need to do the same thing.

Most of this isn’t a security concern, though it can be if you’re using Row Level Security, Dynamic Data Masking, or encrypted columns. Either way, you should try executing your code as other users to make sure access works the way you expect.

When I’m writing stored procedures for myself or for clients, here’s what I do.

Try To Reveal Bad Parameter Sniffing


Sometimes it’s easier than others, but here’s what you should try:

  • For equality predicates, run some count queries against those columns to find data skew
  • For range predicates (like dates) try small and large ranges to see if the plan changes
  • Try passing in NULL or blank values, especially for any string parameters
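The first two checks can be sketched like this. The table, column, and procedure names here are made up; swap in whatever your procedure actually filters on:

/* Find data skew in an equality predicate column */
SELECT
    u.DisplayName,
    c = COUNT_BIG(*)
FROM dbo.Users AS u
GROUP BY
    u.DisplayName
ORDER BY
    c DESC;

/* Compare plans for a small range and a large one */
EXEC dbo.SomeProc
    @start_date = '20240101',
    @end_date = '20240108';

EXEC dbo.SomeProc
    @start_date = '20150101',
    @end_date = '20240101';

If the skew query shows a handful of values covering most of the table, those are the parameter values most likely to produce a plan that’s miserable for everyone else.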

When you do this, grab and compare the execution plans. If you get wildly different plans, see what happens when you don’t recompile and plans get shared across executions.

If performance is bad, think about these things:

  • Can you improve indexing?
  • Do you need a temp table?
  • How many CTEs do you need to throw in the garbage?
  • Should you use dynamic SQL to generate different plans?
  • Maybe a recompile hint would be good for a specific query?
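If the recompile route wins, it can be scoped to the one statement that needs it rather than the whole procedure. A sketch, with a made-up query inside the procedure body:

SELECT
    c = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE u.Reputation >= @reputation
OPTION (RECOMPILE); /* only this statement gets a fresh plan each time */

That way the rest of the procedure still gets plan reuse, and you only pay the compile cost where sniffing actually hurts.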

This is at the top of my list for new code, because I don’t want it to get called in a weird way and go haywire. That’s not what people pay consultants for.

Speaking Of Dynamic SQL


If you’re not sure if yours is safe from SQL injection, now is the time to find out.

Especially for long-ish string parameters, try passing in some nefarious commands. In general, what you don’t want to see is dynamic SQL like this:

DECLARE
    @s nvarchar(MAX) = N'',
    @d nvarchar(40) = N'Jon Skeet';

SELECT
    @s += N'
SELECT
    c = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE u.DisplayName = ''' + @d + N''';'

EXEC sys.sp_executesql
    @s;

This is unsafe dynamic SQL, because it accepts user input and concatenates it into a string.

There are safe ways to accept user input, as long as either:

  • The user input dictates a static string to append to the dynamic SQL
  • The user input is parameterized within the dynamic SQL

Something like this is an example of taking user input and having it dictate a static string:

DECLARE
    @s nvarchar(MAX) = N'',
    @t nvarchar(40) = N'Votes';

IF @t = N'Users'
BEGIN
    SELECT
        @s += N'
    SELECT
        c = COUNT_BIG(*)
    FROM dbo.Users AS u;'
END;

IF @t = N'Votes'
BEGIN
    SELECT
        @s += N'
    SELECT
        c = COUNT_BIG(*)
    FROM dbo.Votes AS v;'
END;

EXEC sys.sp_executesql
    @s;

But this is a case where you should see what happens when you pass a lizard in a beer glass.
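Pass that code @t = N'Lizard' and neither IF branch fires, so @s stays an empty string and sp_executesql executes nothing, silently. I’d rather fail loudly on unexpected input, with a guard like this just before the EXEC:

IF @s = N''
BEGIN
    RAISERROR('Unexpected value passed in for @t: %s', 11, 1, @t);
    RETURN;
END;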

And of course, parameterized dynamic SQL looks like this:

DECLARE
    @s nvarchar(MAX) = N'',
    @d nvarchar(40) = N'Jon Skeet';

SELECT
    @s += N'
SELECT
    c = COUNT_BIG(*)
FROM dbo.Users AS u
WHERE u.DisplayName = @d;'

EXEC sys.sp_executesql
    @s,
    N'@d nvarchar(40)',
    @d;

Run It From The Application


In SQL Server, there are these things called ANSI settings, and they can really screw with performance and execution plans.

Even if you’re not using indexed views, computed columns, or filtered indexes, you may see oddball things if you’re testing in SSMS and running code from somewhere else.

This is what SSMS uses, and what SQL Server needs to effectively use those indexed views, computed columns, and filtered indexes.

+-------------------------+----------------+
|       SET options       | Required value |
+-------------------------+----------------+
| ANSI_NULLS              | ON             |
| ANSI_PADDING            | ON             |
| ANSI_WARNINGS           | ON             |
| ARITHABORT              | ON             |
| CONCAT_NULL_YIELDS_NULL | ON             |
| NUMERIC_ROUNDABORT      | OFF            |
| QUOTED_IDENTIFIER       | ON             |
+-------------------------+----------------+

You’ll need to check your application(s) to see what they’re using, and make adjustments where necessary.
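One way to do that, while the application is connected, is to peek at sys.dm_exec_sessions:

SELECT
    s.session_id,
    s.program_name,
    s.ansi_nulls,
    s.ansi_padding,
    s.ansi_warnings,
    s.arithabort,
    s.concat_null_yields_null,
    s.quoted_identifier
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
ORDER BY
    s.session_id;

Any session showing a 0 where the table above says ON is a session that can get a different (and differently cached) plan than the one you tested in SSMS.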

Bonus points if you’re using the Python or JDBC drivers, and you turn off those pesky implicit transactions.

This Is A Good Start


You may have other things to test, like if you’re offloading reads in an Availability Group, using security features, or other logical constructs.

If your code has a lot of variable assignment in it, you may want to override the lookups with static values to see what happens if some out of band values (especially NULLs) come around.
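By that I mean temporarily short-circuiting the lookup to force the weird value through. Everything here is a made-up stand-in:

DECLARE
    @latest_date datetime;

/* The normal lookup */
SELECT
    @latest_date = MAX(u.LastAccessDate)
FROM dbo.Users AS u;

/* Debug override: what does the rest of the
   procedure do when this comes back NULL? */
SELECT
    @latest_date = NULL;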

I do this a lot in the SQL Server Troubleshooting stored procedures that I write to make sure things go down the right path and the right things happen.

You should too.

No one expects the unexpected.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

You Should Use sp_HumanEvents To Make Extended Events Easy #tsqltuesday

Another One


This month, Grant has chosen the form of his destructor: the T-SQL Tuesday topic is Extended Events.

My relationship with Extended Events is complicated, for many reasons:

  • Awful documentation
  • Hardly any guidance on usage
  • Almost nothing useful about what type of target to use when
  • Everything stored in XML
  • Slow, unfriendly GUI in SSMS

But my need to use them while consulting outweighs my gripes and grievances about how Microsoft has chosen to write about, use, and present the data to you.

That’s where my stored procedure sp_HumanEvents comes in handy. It’s totally free, and open source. I built it because I needed to be able to get things going quickly without a lot of fuss and clicking around.

It will set up and pull data to help you track down issues in the following areas:

  • Blocking
  • Query Performance
  • Compiles
  • Recompiles
  • Wait stats

I chose to leave deadlocks out of it, because the system health extended event session captures a good deal of those by default. It has a very limited amount of data from a toned-down blocked process report in it too, but it’s missing a lot of good information.
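If you do need deadlock details, you can shred them out of the system health session files with something along these lines:

SELECT
    deadlock_xml = 
        CONVERT(xml, fx.event_data)
FROM sys.fn_xe_file_target_read_file
(
    'system_health*.xel',
    NULL,
    NULL,
    NULL
) AS fx
WHERE fx.object_name = N'xml_deadlock_report';

Each row is one deadlock graph, which you can click to open in SSMS or save off as a .xdl file.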

You can find the full documentation for it here, and also by using the @help parameter in the procedure itself.

My Favorite Martian


While it can do many things, the way I use it most often is to capture the long-running queries and plans from a stored procedure I have running in SSMS, so I can filter out the little query plans that don’t need my attention, and focus on the really bad parts.

To do that, just run this code:

EXEC sp_HumanEvents
    @event_type = 'query',                   
    @query_duration_ms = 5000,               
    @session_id = N'58',                    
    @keep_alive = 1;

That will set up an extended event session that captures the following events, focused on a single session id, and only collects details about queries that run for more than 5 seconds.

Don’t forget to replace the @session_id with whatever your session ID is running locally.

  • sqlserver.module_end    
  • sqlserver.rpc_completed  
  • sqlserver.sp_statement_completed
  • sqlserver.sql_statement_completed
  • sqlserver.query_post_execution_showplan   

Some of them are a bit duplicative, but this was really designed to be flexible for a lot of different situations.

Once that runs, look for a session called keeper_HumanEvents_query in your Extended Event sessions in SSMS.

From there, you can right click to Watch Live Data, and then start running whatever is in your query window.

Every time a query takes more than 5 seconds, you’ll see the statement and query plan entries show up in the data viewer.


Thanks for reading!


Why NOLOCK Is My Favorite SQL Server Query Hint

Because I’m A Consultant


And generally, the more NOLOCK hints I see, the more money I know I’m going to make.

It shows me four things right off the bat:

  • The developers need a lot of training
  • The code needs a lot of tuning
  • The indexes need a lot of adjusting
  • There are probably some serious bugs in the software

Perhaps the only other thing that signals just how badly someone needs a lot of help is hearing “we’re an Entity Framework only shop”.

Cha-ching.

Because No One Knows What It Does


With the utmost confidence, I’ve heard hundreds of developers say “it keeps my query from taking locks”.

Would that I could take that confidence and share a shred of it with everyone in the world, we could conquer space in a fortnight.

So I get to do fun things like explain to them how it still takes some locks, but mostly how it ignores locks taken by other queries, and that’s why it can:

  • See the same row twice with the same values
  • See the same row twice with different values
  • Miss rows entirely

And in highly transactional systems, that can cause real problems. Even in systems where the load isn’t anything insane, it can cause all sorts of bugs and race conditions.

And best of all, I get to hang around deleting those hints after switching over to Read Committed Snapshot Isolation.
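The switch itself is one command, though you should test it first: it changes read behavior, and row versioning adds overhead in tempdb. The database name is a placeholder, of course:

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE; /* kicks out in-flight transactions; schedule accordingly */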

Because I Get To Keep Writing About It


And no matter how much I do, no one listens. At all. Ever.

Week in and week out, I work with brand new people who have no idea that their results are full of incorrect, inconsistent data, and general bugginess.

I could schedule this same post every week for the next five years, and I’d still see the same amount of people with the same problems, over and over again.

Seriously. I love it. It’s the gift that keeps on giving.

Thanks for reading!


A Reintroduction To sp_PressureDetector


How I Set Up Debugging In SQL Server Stored Procedures


Future Proof Your SQL Server Transaction Log Files For Instant File Initialization?

Finger Snap


Look, I don’t blame you if you haven’t dug deep into what SQL Server 2022 has to offer just yet. It’s hard enough to keep up with all the problems fixed and caused by cumulative updates.

One thing you may want to pay attention to is how transaction log files are grown and VLFs are created, especially for new databases.

First, all new databases created on SQL Server 2022 have autogrowth set to 64MB. This allows for two things:

  1. Those growths can take advantage of instant file initialization
  2. That 64MB growth increment will create a single VLF

Now, you might be excited about log files getting instant file initialization. But it only works for those small growths. Perhaps only in Azure would a growth of 64MB not be instant anyway.

I don’t know anyone else who spends a lot of time waiting for database growths of 64MB to finish.

Transaction log files cannot be initialized instantaneously, however, starting with SQL Server 2022 (16.x), instant file initialization can benefit transaction log autogrowth events up to 64 MB. The default auto growth size increment for new databases is 64 MB. Transaction log file autogrowth events larger than 64 MB cannot benefit from instant file initialization.

Other changes to VLF creation are something along the lines of:

  • <= 64MB: 1 VLF
  • > 64MB and <= 1GB: 8 VLFs
  • > 1GB: 16 VLFs

If your log file autogrowth settings are larger than 1GB, either because you set them that way, or because your files are growing by a percentage, you may end up with some rather large VLFs.

To see what your current settings are, run this query:

SELECT
    database_name = 
        DB_NAME(mf.database_id),    
    logical_name = 
        mf.name,
    file_name = 
        mf.physical_name,
    size_gb = 
        (mf.size * 8.) / 1024 / 1024,
    max_size_gb =
        CONVERT
        (
            bigint,
            CASE
                WHEN mf.max_size = -1
                THEN 0
                ELSE (mf.max_size * 8.) / 1024 / 1024
            END
        ),
    autogrowth_mb = 
        CASE 
            WHEN mf.is_percent_growth = 1
            THEN RTRIM(mf.growth) + N'%'
            WHEN (mf.growth * 8 / 1024) < 1024
            THEN RTRIM((mf.growth * 8) / 1024) + ' MB'
            WHEN (mf.growth * 8 / 1024) >= 1024
            THEN RTRIM((mf.growth * 8) / 1024 / 1024) + ' GB'
         END,
    usage = 
        CASE
            WHEN mf.type = 0
            THEN 'data'
            WHEN mf.type = 1
            THEN 'log'
            WHEN mf.type = 2
            THEN 'filestream'
            WHEN mf.type = 3
            THEN 'nope'
            WHEN mf.type = 4
            THEN 'fulltext'
        END
FROM sys.master_files AS mf
WHERE mf.database_id > 4
AND   mf.type = 1
ORDER BY
    mf.database_id,
    mf.type,
    mf.file_id
OPTION(RECOMPILE);
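If you find percentage growths or increments over 1GB, resetting them looks like this, with placeholder database and logical file names:

ALTER DATABASE YourDatabase
MODIFY FILE
(
    NAME = YourDatabase_log,
    FILEGROWTH = 64MB
);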

 

If you’re planning on moving to SQL Server 2022, now’s the time to make adjustments so you’re not caught off guard.

Thanks for reading!


Why You Should Build Debugging Into Your SQL Server Stored Procedures

Quotable


SQL Server is pretty good at a lot of things, but there’s no great way to debug logic or performance issues in longer stored procedures.

Whenever I’m writing a stored procedure, especially when dynamic SQL is involved, I add in a debug parameter. Sometimes I’ll do something more granular, and add in specific parameters for debugging performance vs debugging other things, like:

  • Temp table contents
  • Parameter values
  • Variable values
  • Loop progress
  • Current dynamic statement
  • Current point in the stored procedure (sometimes with a time stamp)

This can really save your hide when you hit a reproducible issue. Proper error handling is part of proper debugging, too.

I generally detest PowerShell, but I love that some commands have a -WhatIf parameter to test what would happen. For normal SQL Server queries, you don’t get that.

But if you’re writing dynamic SQL that may execute database commands (backup, checkdb, settings changes, etc.), you should also include a parameter that prevents dynamic SQL execution so you can debug without worrying about wreaking havoc.
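A minimal sketch of that guard, with a made-up @execute parameter:

DECLARE
    @execute bit = 0, /* 0 = print only, 1 = actually run */
    @sql nvarchar(MAX) = N'/* some scary command built up above */';

IF @execute = 1
BEGIN
    EXEC sys.sp_executesql
        @sql;
END
ELSE
BEGIN
    PRINT @sql; /* show what would have run */
END;

Defaulting @execute to 0 means nothing destructive happens unless someone deliberately asks for it.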

Should You Use XACT_ABORT?


This depends a little bit. You probably want to use this in 99% of the stored procedures that you write that contain more than a single select statement.

The only case I can think of where you wouldn’t want to use this is in a set of informational queries where one of them failing is inconsequential.

A good example of this is sp_PressureDetector, where you want all the other results even if one of the individual queries fails for some reason.

This might make some results look weird, though. If populating a temp table fails, it ends up empty, and later on, when you select from it or join to it, the results may be disappointing.

PRINT vs RAISERROR


In most cases, RAISERROR is your best choice for debugging (I’m not talking about error handling best practices here).

Despite some frustrations with it, it’s the easiest way to debug most state conditions for parameters, variables, and overall procedure progress.

The main problem with it is that there’s a maximum error message length of 2,047. This can be less than ideal when you’re dealing with debugging long dynamic SQL statements.

Then you’re looking at using a stored procedure like Helper_LongPrint, or rolling your own PRINT/SUBSTRING combination (maybe even in a loop) to output the full query.

A simplified version would look something like this:

DECLARE 
    @s nvarchar(MAX) = REPLICATE('A', 4000) + REPLICATE('B', 4000),
    @len bigint = 0,
    @block bigint = 1;

SELECT
    @len = LEN(@s);

WHILE @block < @len
BEGIN
    PRINT SUBSTRING(@s, @block, 4000)
    
    SELECT
        @block += 4000
END;

Admittedly, I usually just brute force long printing in most of my stored procedures because I have a good sense of how long the output query will be.

If yours will vary, the looping option is the best choice.

Table Of Contents


If you populate temporary objects in your stored procedures, it’s a helpful debugging step to view the contents to make sure they’re correct, or… even there at all!

This is how I do it when I’m feeling really motivated:

  • Check to see if there’s anything in the temp table
  • Select the contents of the temp table
  • Add an extra column to the select list to tell me which temp table I’m selecting from
  • If it’s empty, select a different message to tell me that

CREATE TABLE 
    #
(
    _ sql_variant
);

DECLARE 
    @debug bit = 1;

IF @debug = 1 
BEGIN 
    IF EXISTS
    (
        SELECT 
            1/0 
        FROM # AS _
    ) 
    BEGIN 
        SELECT 
            table_name = '#', 
            _.* 
        FROM # AS _; 
    END
    ELSE
    BEGIN
        SELECT 
            info = '# is empty!';
    END
END;

This is especially useful when there are a lot of temp tables involved.

Depending on the procedure, sometimes I’ll put this right after the temp table is populated, other times I’ll put all of the temp tables used at the end.

Valuables


I often find myself declaring variables and setting them to the result of a select, and sometimes manipulating passed in parameters.

To make sure they’re ending up as I expect them to, I’ll usually stick something like this at the end of the procedure to validate them:

DECLARE
    @a int = 1,
    @b int = 2;

IF @debug = 1
BEGIN
    SELECT
        a = @a,
        b = @b;
END;

This can help you figure out where things are going right, wrong, sideways, pear shaped, or plaid.

If there’s a loop involved, I’ll stick them inside there to make sure we’re progressing, and I didn’t do anything stupid that got me stuck inside a loop of the infinite variety.

Inside a loop, I’ll use RAISERROR. You can see that in sp_LogHunter.
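The substitution parameters in RAISERROR make loop progress messages cheap. A minimal version:

DECLARE
    @i int = 1;

WHILE @i <= 3
BEGIN
    RAISERROR('Starting loop iteration %d', 0, 1, @i) WITH NOWAIT;

    /* loop work goes here */

    SELECT
        @i += 1;
END;

Severity 0 keeps it informational, and WITH NOWAIT pushes the message out as it happens instead of when the buffer feels like it.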

It’s a good time. A cursor inside a while loop.

Don’t tell anyone.

Progress Bar


Sometimes it’s helpful to know which point in a stored procedure, or some branched code you’ve hit.

This is admittedly fueled in part by self-gratification. Seeing things make progress successfully, do the right thing, and loops work and finish is quite satisfying.

That’s why I’ll stick little waypoints in my procedures like this:

IF @debug = 1
BEGIN
    RAISERROR('We did it!', 0, 1) WITH NOWAIT;
END;

Again, strongly prefer RAISERROR to PRINT here, because the WITH NOWAIT line just about guarantees the message will show up when it happens.

Print statements can get stuck waiting for buffers and other oddities before showing up.

But Is It Fast?


Stored procedures are often full of tiny little queries that don’t really matter, and in really long stored procedures, returning a plan to SSMS for every single query that runs is a recipe for crashsaster.

If I have no idea what’s slow, I’ll use sp_HumanEvents to troubleshoot performance.

But I usually have a sense of which parts can get slow, so I’ll add performance feedback to the list, too.

IF @debug = 1
BEGIN
    SELECT
        query = 'starting super important query now!',
        start_time = SYSDATETIME();
    SET STATISTICS XML ON;
END;

SELECT
    x = 'this is a super important query';

IF @debug = 1
BEGIN
    SELECT
        query = 'finished super important query now!',
        end_time = SYSDATETIME();
    SET STATISTICS XML OFF;
END;

Since there are usually multiple queries that I’m interested in, I find it good to add a select telling me which query the plan is for in the results, so that I know which one to click on.

Adding in the start and end times is a good way to quickly identify which returned plan(s) took the longest.

It’s A Lot Like Life


Adding in good debugging steps certainly adds more time and effort to the process, but it pays off when you have to figure out what’s going wrong.

You probably don’t need it in every single stored procedure, especially the simple ones, but the longer and more involved your code gets, the more critical this becomes.

A lot of this comes from my experiences working on open source scripts that other folks might have to troubleshoot someday. You’re not only doing yourself a favor, but you’re helping anyone else who needs to work on your code.

Thanks for reading!


Catch Me On Dear SQL DBA With Kendra Little!


Index Tuning Essentials: Fixing Blocking Demo

Away Days


It’s August, and that means one thing: Family Vacation. I’m taking this month off from regular blogging, and posting some of my paid beginner training content for you to enjoy, while I enjoy not blogging.

Thanks for watching!


Index Tuning Essentials: Fixing Blocking

Away Days


It’s August, and that means one thing: Family Vacation. I’m taking this month off from regular blogging, and posting some of my paid beginner training content for you to enjoy, while I enjoy not blogging.

Thanks for watching!
