SQL Server Performance Tuning and Monitoring
https://www.mssqltips.com/sqlservertutorial/276/sql-server-performance-tuning-and-monitoring-tutorial/
Overview
SQL Server is a great platform to get your database application up and running fast. The graphical interface of SQL Server Management Studio allows you to create tables, insert data, develop stored procedures, etc. in no time at all. Initially your application runs great in your production, test and development environments, but as use of the application increases and the size of your database grows you may start to notice some performance degradation or, worse yet, user complaints. This is where performance monitoring and tuning come into play. Usually the first signs of performance issues surface from user complaints. A screen that used to load immediately now takes several seconds. Or a report that used to take a few minutes to run now takes an hour. As mentioned, these issues usually arise from user complaints, but with a few steps and techniques you can monitor these issues and tune accordingly, so that your database applications are always running at peak performance.
In this tutorial we will cover some of the common issues with performance such as:
- deadlocks
- blocking
- missing and unused indexes
- I/O bottlenecks
- poor query plans
- statistics
- wait stats
- fragmentation
1. Performance Monitoring and Tuning Tools
Overview
In order to monitor and improve performance for your SQL Server environment you need to know what tools are available and how to use each of them. In this section we will look at the following tools to give you an introduction as to what they are used for and how you can use them to collect performance-related data.
- Dynamic Management Views (DMVs) and System Catalog Views
- Profiler and Server Side Traces
- Windows Performance Monitor
- Built in performance reports in SSMS
- Query Plans
- Database Tuning Advisor
2. Dynamic Management Views
Overview
With SQL Server 2005, Microsoft introduced Dynamic Management Views (DMVs), which allow you to get better insight into what is happening in SQL Server. Without these tools a lot of this information was unavailable or very difficult to obtain. DMVs are a great tool to help troubleshoot performance related issues and once you understand their power they will become a staple of your database administration.
Explanation
The DMVs were introduced in SQL Server 2005 and with each new release Microsoft has been adding additional DMVs to help troubleshoot issues. DMVs actually come in two flavors: DMVs (dynamic management views) and DMFs (dynamic management functions), and both are sometimes classified as DMOs (dynamic management objects). The DMVs act just like any other view, where you can select data from them, while the DMFs require values to be passed to the function, just like any other function.

The DMVs are broken down into the following categories:
- Change Data Capture Related Dynamic Management Views
- Change Tracking Related Dynamic Management Views
- Common Language Runtime Related Dynamic Management Views
- Database Mirroring Related Dynamic Management Views
- Database Related Dynamic Management Views
- Execution Related Dynamic Management Views and Functions
- Extended Events Dynamic Management Views
- Full-Text Search Related Dynamic Management Views
- Filestream-Related Dynamic Management Views (Transact-SQL)
- I/O Related Dynamic Management Views and Functions
- Index Related Dynamic Management Views and Functions
- Object Related Dynamic Management Views and Functions
- Query Notifications Related Dynamic Management Views
- Replication Related Dynamic Management Views
- Resource Governor Dynamic Management Views
- Service Broker Related Dynamic Management Views
- SQL Server Operating System Related Dynamic Management Views
- Transaction Related Dynamic Management Views and Functions
- Security Related Dynamic Management Views
Here are some of the more useful DMVs that you should familiarize yourself with:
- sys.dm_exec_cached_plans - Cached query plans available to SQL Server
- sys.dm_exec_sessions - Sessions in SQL Server
- sys.dm_exec_connections - Connections to SQL Server
- sys.dm_db_index_usage_stats - Seeks, scans, lookups per index
- sys.dm_io_virtual_file_stats - IO statistics for databases and log files
- sys.dm_tran_active_transactions - Transaction state for an instance of SQL Server
- sys.dm_exec_sql_text - Returns TSQL code
- sys.dm_exec_query_plan - Returns query plan
- sys.dm_os_wait_stats - Returns information about the resources SQL Server is waiting on
- sys.dm_os_performance_counters - Returns performance monitor counters related to SQL Server
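To illustrate how these are queried, here is a small sketch against sys.dm_os_wait_stats that returns the top waits by total wait time. The two excluded wait types are just common benign examples; the full list worth filtering out varies by system.

```sql
-- Top 10 waits by total wait time (cumulative since the last restart)
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP') -- benign waits; extend as needed
ORDER BY wait_time_ms DESC;
```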
Additional Information
Here are some additional articles about DMVs.
- Dynamic Management Views and Functions in SQL Server 2005
- Understanding Catalog Views in SQL Server 2005 and 2008
- Additional DMV tips
3. Profiler and Server Side Traces
Overview
SQL Server provides a great tool that allows you to see what statements are running on your SQL Server as well as collecting metrics such as duration, number of reads, number of writes, the machine that ran the query, etc. This tool is known as Profiler. Profiler is a GUI-based tool that runs a SQL Server trace to capture the metrics listed above as well as additional data. This data can then be used to determine whether your SQL Server performance issues are related to your TSQL code. Running a trace without using Profiler is known as a Server Side Trace: you create and start the trace using TSQL commands instead of having to use the GUI.
Explanation
Most people begin using Profiler to run a trace, because the GUI makes it pretty easy to get a trace set up and running. Once you understand the advantages of using a server side trace you will begin to use these more frequently, unless you are troubleshooting an issue that is occurring at that exact time.

Profiler
The Profiler tool can be launched in one of these ways:
- In SSMS, select Tools > SQL Server Profiler from the menu

- You can also launch Profiler from the Windows menu

- Connect to the SQL Server instance you want to trace

- A Trace Properties window will open and you can click Run to start the trace with the default settings


Selecting Events, Columns and Filters
Ideally you want to select only the Events you want to capture along with setting Filters to minimize the amount of data that is returned. If you just use the standard settings you usually end up with too much information and it is hard to determine what is going on.

Under the Events Selection tab you can select events, columns and set filters as shown below.

Events
A good starting place is to capture just these two events. These will show you all completed batches and the metrics related to each batch. A batch is basically a set of work, like a stored procedure, that contains multiple statements.
- Stored Procedures - RPC:Completed
- TSQL - SQL:BatchCompleted
Columns
As far as columns go, just select all columns; once you see the data that is captured you can reduce the number of columns you are capturing.

Filters
Filters allow you to further define what is captured. To set filters click on Column Filters. So if you only want to capture data for a specific process you can filter on SPID as an example. Another good starting point is to filter on Duration. I like to set the value to 500 to only show statements that take 500ms or longer. Again this is just a starting point.
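For reference, a server side trace that captures the same SQL:BatchCompleted event with a duration filter can be scripted with the sp_trace_* procedures. This is only a sketch: the event and column IDs (12 = SQL:BatchCompleted, 1 = TextData, 12 = SPID, 13 = Duration) come from the trace documentation, and the output file path is just an example.

```sql
-- Minimal server side trace: SQL:BatchCompleted, filtered on duration
DECLARE @TraceID int, @maxfilesize bigint = 50, @on bit = 1;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Temp\SampleTrace', @maxfilesize, NULL;

-- Event 12 = SQL:BatchCompleted; capture TextData (1), SPID (12), Duration (13)
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Filter: Duration >= 500ms (server side traces store duration in microseconds)
DECLARE @duration bigint = 500000;
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

-- Status 1 = start the trace
EXEC sp_trace_setstatus @TraceID, 1;
```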
Reading the Output
After you have collected the output you can see which statements are returned based on the events and filters that you have set. If you start with the two events above you can begin to take a look at the statements that have the longest duration; these are the statements that you should begin trying to tune to improve performance.

4. Performance Monitor
Overview
Windows Performance Monitor or PerfMon is another great tool to capture metrics for your entire server. So far we discussed DMVs and Profiler, which are great tools for finding out what is occurring within your SQL Server instance, but sometimes there are issues outside of SQL Server that may be causing performance problems. In this section we will take a look at PerfMon and how this tool can be used.

Explanation
The Performance Monitor tool allows you to capture and graph many aspects of the Windows server. There are counters for .NET, Disks, Memory, Processors, Network, etc. as well as several counters related to each instance of SQL Server on the box. If you have multiple instances running on one server, the counters are available for each instance so you can see what is occurring at the instance level.

Launching Performance Monitor
You can launch Performance Monitor a few different ways: select Start > Control Panel > Administrative Tools > Performance Monitor, or launch PerfMon.exe directly. Once this is running you will get an interface like the following:
Above is the default look and feel when you launch this tool. Here we can see there is one counter, "% Processor Time", that is being tracked. For this counter we can see the following items:

- Last - this is the last value that was captured for this counter
- Average - this is the average value for the duration
- Minimum - this is the minimum value for the duration
- Maximum - this is the maximum value for the duration
- Duration - this is the total collection time period and in this case it is 1:40 which is 1 minute and 40 seconds
Adding Counters and Changing Settings
To add a counter, right click on the big green plus button above the graph and a new window will open like the following:
From this window you can select additional counters such as Memory, Physical Disk and SQL Server specific counters. To add a counter, select the counter and click the Add button. Click OK when you are done to start capturing this data.

The other thing you will want to do is change the duration and frequency for collecting data. By default it is set to sample the data every 1 second for a duration of 100 seconds. To change this, right click on the graph and select Properties; on the General tab you can set the sampling settings, and there are several other properties you can modify in this window as well. Once you click OK these settings go into effect immediately.


Useful Counters
Once you start to explore all of the counters it can be overwhelming since there are so many to choose from, so here are a few counters that make sense to begin capturing. Also, once you start collecting, it can be difficult to tell whether you have an issue based on the values that are returned. Since there are no hard and fast rules for all counters, the best approach is to capture these values when your system is running fine, so you can create a baseline; then you can compare against that baseline when you start to capture data. You can find some information online about specific counters and threshold values for each counter.
- Memory
- Available MBytes
- Physical Disk
- Avg. Disk sec/Read
- Avg. Disk sec/Write
- Processor
- % Processor Time
- SQL Server: Buffer Manager
- Page Life Expectancy
- Buffer cache hit ratio
- SQL Server: SQL Statistics
- Batch Requests/sec
- SQL Compilations/sec
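Many of the SQL Server counters above can also be read from inside SQL Server through the sys.dm_os_performance_counters DMV, which is handy when you cannot get to the server desktop. A quick sketch (the counter names below are as they appear in the DMV):

```sql
-- Read a couple of the Buffer Manager / SQL Statistics counters from T-SQL
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy', N'Batch Requests/sec');
```

Note that per-second counters such as Batch Requests/sec are stored as cumulative values in this DMV, so you need to sample twice and take the delta to get a true rate.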
5. Built in Performance Reports in SSMS
Overview
Another way to get performance related information from SQL Server is to use the built-in performance reports. The reports were first introduced with SQL Server 2005 as an add-on, but are now standard with later versions. The reports provide useful information that can assist you in determining where your performance bottlenecks may be. The data for these reports is pulled from DMVs as well as the default trace that is always running.

Explanation
To access the reports, open SSMS, right click on the SQL Server instance name and select Reports > Standard Reports as shown below.
- Server Dashboard
- Scheduler Health
- Memory Consumption
- Activity - All Blocking Transactions
- Activity - Top Sessions
- Activity - Top Connections
- Top Connections by Block Transactions Count
- Top Transaction by Locks Count
- Performance - Batch Execution Statistics
- Performance - Object Execution Statistics
- Performance - Top Queries by Average CPU Time
- Performance - Top Queries by Average IO
- Performance - Top Queries by Total CPU Time
- Performance - Top Queries by Total IO
Take the time to explore these reports to determine which report best suits your performance monitoring needs.

6. Query Execution Plans
Overview
Another useful tool that SQL Server offers is the ability to see query execution plans. An execution plan shows you how SQL Server breaks down a query and also identifies where issues might exist within the plan. Once you have identified statements that take a long time to complete, you can then look at the execution plan to determine what needs to be tuned.

Explanation
The query execution plan describes the steps and the order used to access or modify data in the database. Once you have this information you can identify which parts of the query are slow.

SQL Server can create execution plans in two ways:
- Actual Execution Plan - (CTRL + M) - is created after execution of the query and contains the steps that were performed
- Estimated Execution Plan - (CTRL + L) - is created without executing the query and contains an approximate execution plan
In addition, execution plans can be presented in three formats:
- Text Plans
- Graphical Plans
- XML Plans
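Besides the SSMS keyboard shortcuts, plan output can also be controlled with SET options in a query window; a quick sketch:

```sql
-- Estimated plan as XML: while this is ON, statements are not executed;
-- SQL Server returns the plan instead
SET SHOWPLAN_XML ON;
GO
SELECT ProductKey, ProductSubcategoryKey
FROM AdventureWorksDW..DimProduct
WHERE ProductKey < 100;
GO
SET SHOWPLAN_XML OFF;
GO
```

SET STATISTICS PROFILE ON works similarly but executes the query and returns the actual plan alongside the results.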
Here is a simple query and its execution plan. To include the Actual Execution Plan press CTRL + M in the query window and then execute the T-SQL code.
-- query 1
SELECT ProductKey,ProductSubcategoryKey
FROM AdventureWorksDW..DimProduct
WHERE ProductKey<100
-- query 2
SELECT ProductKey,ProductSubcategoryKey
FROM AdventureWorksDW..DimProduct
WHERE Color<>'Silver'
Here we can see that query 1 is doing an
Index Scan and query 2 is doing a Clustered Index Scan. We can also see
that query 1 is 3% of the batch and query 2 is 97%. Also, we can see that
SQL Server is recommending that we add a new nonclustered index for query
2. So based on this output we know that query 2 is something that should
be addressed.
So you can see that once we have identified which queries are taking a long time using Profiler, we can then look at the query execution plan to determine what needs to be tuned to make the query perform better. As with most things, the more you use execution plans the easier it gets to identify the issue and what can be done to resolve it.
Note that not all execution plans are this simple and sometimes they are very difficult to read and interpret, so for additional information read this tutorial Graphical Query Plan Tutorial.
Additional Information
Here are some additional items related to Execution Plans.
- Graphical Query Plan Tutorial
- SQL Server Query Execution Plans in SQL Server Management Studio
- Differences Between SQL Server Query Plan Formats
- How to read SQL Server graphical query execution plans
- Understanding SQL Server Physical Joins
7. Database Engine Tuning Advisor
Overview
SQL Server also includes another performance tool called the Database Engine Tuning Advisor or DTA. This tool allows you to have SQL Server analyze one statement or a batch of statements that you captured by running Profiler or a server side trace. The tool will then go through each statement to determine where improvements can be made and presents you with options for improvement.

Explanation
The Database Engine Tuning Advisor is basically a tool that helps you figure out whether additional indexes or partitioning would be helpful. Here is a summary of the options:
- Adding indexes (clustered, non-clustered, and indexed views)
- Adding partitioning
- Adding statistics including multi-column statistics which are not created automatically even when you have the AUTO_CREATE_STATISTICS database option set to ON
Here is an example of a query and how we can use DTA to analyze the query and make recommendations. From within a query window right click and select the DTA option as shown.
Here are the available options for tuning. After you select the specific options, click on Start Analysis and this will run the DTA tool to identify any potential improvements.


Here we can see that DTA recommends adding a new index for table DimProduct.
The Database Engine Tuning Advisor can also be launched from within SSMS by clicking on Tools > Database Engine Tuning Advisor.

Additional Information
8. Performance Issues
Overview
There are several factors that can degrade SQL Server performance and in this section we will investigate some of the common areas that can affect performance. We will look at some of the tools that you can use to identify issues as well as review some possible remedies for these performance issues.

We will cover the following topics:
- Blocking
- Deadlocks
- I/O
- CPU
- Memory
- Role of statistics
- Query Tuning Bookmark Lookups
- Query Tuning Index Scans
9. Troubleshooting Blocking
Overview
In order for SQL Server to maintain data integrity for both reads and writes it uses locks, so that only one process has control of the data at any one time. There are several types of locks that can be used, such as Shared, Update, Exclusive, Intent, etc., and each of these has a different behavior and effect on your data.

When locks are held for a long period of time they cause blocking, which means one process has to wait for the other process to finish with the data and release the lock before the second process can continue. This is similar to deadlocking, where two processes are waiting on the same resource, but unlike deadlocking, blocking is resolved as soon as the first process releases the resource.
Explanation
As mentioned above, blocking is a result of two processes wanting to access the same data, where the second process needs to wait for the first process to release its lock. This is how SQL Server works all of the time, but usually you do not see blocking because the time that locks are held is usually very short.

It probably makes sense that locks are held when updating data, but locks are also used when reading data. When data is updated an Update lock is used, and when data is read a Shared lock is used. An Update lock creates an exclusive lock on the data for that process, while a Shared lock allows other processes that also use a Shared lock to access the data. Blocking occurs when two processes need incompatible locks on the same data at the same time.
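You can simulate blocking yourself with two query windows; a sketch (the table name here is just a placeholder for any small test table):

```sql
-- Session 1: update a row and leave the transaction open (the lock is held)
BEGIN TRANSACTION;
UPDATE dbo.TestTable SET Col1 = 'x' WHERE Id = 1;
-- ...do not COMMIT yet...

-- Session 2 (separate window): this SELECT is blocked until session 1 commits
SELECT Col1 FROM dbo.TestTable WHERE Id = 1;

-- Session 1: releasing the lock lets session 2 continue
COMMIT TRANSACTION;
```

While session 2 is waiting, the techniques below will all show the blocking.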
Here are various ways you can identify blocking for your SQL Server instance.
sp_who2
In a query window run this command:
sp_who2
This is the output that is returned.
Here we can see the BlkBy column that shows SPID 60 is blocked by SPID 59.
Activity Monitor
In SSMS, right click on the SQL Server instance name and select Activity Monitor. In the Processes section you will see information similar to below. Here we can see similar information as sp_who2, but we can also see the Wait Time, Wait Type and also the resource that SPID 60 is waiting for.
Report - All Blocking Transactions
Another option is to use the built in reports in SSMS. Right click on the SQL Server instance name and select Reports > Standard Reports > Activity - All Block Transactions.
Querying Dynamic Management Views
You can also use the DMVs to get information about blocking.

SELECT session_id, command, blocking_session_id, wait_type, wait_time, wait_resource, t.TEXT
FROM sys.dm_exec_requests
CROSS apply sys.dm_exec_sql_text(sql_handle) AS t
WHERE session_id > 50
AND blocking_session_id > 0
UNION
SELECT session_id, '', '', '', '', '', t.TEXT
FROM sys.dm_exec_connections
CROSS apply sys.dm_exec_sql_text(most_recent_sql_handle) AS t
WHERE session_id IN (SELECT blocking_session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id > 0)
Here is the output and we can see the
blocking information along with the TSQL commands that were issued.
10. Tracing a SQL Server Deadlock
Overview
A common issue with SQL Server is deadlocks. A deadlock occurs when two or more processes are waiting on the same resource and each process is waiting on the other process to complete before moving forward. When this situation occurs and there is no way for these processes to resolve the conflict, SQL Server will choose one of the processes as the deadlock victim and roll back that process, so the other process or processes can move forward.

By default when this occurs, your application may see or handle the error, but there is nothing that is captured in the SQL Server Error Log or the Windows Event Log to let you know this occurred. The error message that SQL Server sends back to the client is similar to the following:
Msg 1205, Level 13, State 51, Line 3
Transaction (Process ID xx) was deadlocked on {xxx} resources with another process
and has been chosen as the deadlock victim. Rerun the transaction.
In this tutorial we cover what steps you can take to capture deadlock information and some steps you can take to resolve the problem.

Explanation
Deadlock information can be captured in the SQL Server Error Log or by using Profiler / Server Side Trace.

Trace Flags
If you want to capture this information in the SQL Server Error Log you need to enable one or both of these trace flags.
- 1204 - provides information about the nodes involved in the deadlock
- 1222 - returns deadlock information in an XML format
To turn these on you can issue the following commands in a query window or you can add these as startup parameters. If these are turned on from a query window, the next time SQL Server starts these trace flags will not be active, so if you always want to capture this data the startup parameters is the best option.
DBCC TRACEON (1204, -1)
DBCC TRACEON (1222, -1)
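You can verify which trace flags are currently enabled with DBCC TRACESTATUS, and make them permanent with -T startup parameters (set via SQL Server Configuration Manager) instead of DBCC TRACEON:

```sql
-- Show all trace flags that are currently enabled globally
DBCC TRACESTATUS(-1);

-- Startup parameter equivalent: add these to the service startup options
-- -T1204 -T1222
```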
Here is sample output for each of the trace flags.
Trace Flag 1222 Output

Trace Flag 1204 Output

Profiler / Server Side Trace
Profiler works without the trace flags being turned on, and there are three events that can be captured for deadlocks. Each of these events is in the Locks event class.
- Deadlock graph - Occurs simultaneously with the Lock:Deadlock event class. The Deadlock Graph event class provides an XML description of the deadlock.
- Lock: Deadlock - Indicates that two concurrent transactions have deadlocked each other by trying to obtain incompatible locks on resources that the other transaction owns.
- Lock: Deadlock Chain - Is produced for each of the events leading up to the deadlock.
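On SQL Server 2008 and later, deadlock graphs are also captured automatically by the built-in system_health Extended Events session, so recent deadlocks can often be retrieved without setting up a trace at all. Here is a sketch query against the session's ring buffer target:
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (SELECT CAST(st.target_data AS XML) AS TargetData
      FROM sys.dm_xe_session_targets st
      INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
      WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer') AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)
Note that the ring buffer only holds a limited amount of recent data, so older deadlocks may have aged out.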
Event Output
In the below image, I have only captured the three events mentioned above.
Deadlock Graph Output
Below is the deadlock graph, which is the output for the Deadlock graph event. We can see on the left side that server process id 62 was selected as the deadlock victim. Also, if you hover over the oval with the X through it you can see the transaction that was running.
Finding Objects Involved in Deadlock
In all three outputs, I have highlighted the object IDs for the objects that are in contention. You can use the following query to find the object, substituting the highlighted ID for the partition_id value below.
SELECT OBJECT_SCHEMA_NAME([object_id]),
OBJECT_NAME([object_id])
FROM sys.partitions
WHERE partition_id = 289180401860608;
Saving Deadlock Graph Data in XML File
Since the deadlock graph data is stored in an XML format, you can save the XML events separately. When configuring the Trace Properties click on the Events Extraction Settings and enable this option as shown below.
11. Index Scans and Table Scans
Overview
There are several things you can do to improve performance by throwing more hardware at the problem, but usually you get the most benefit from tuning your queries. One common problem is the lack of indexes or incorrect indexes, which forces SQL Server to process more data to find the records that meet the query's criteria. These issues are known as Index Scans and Table Scans. In this section we will look at how to find these issues and how to resolve them.
Explanation
An index scan or table scan is when SQL Server has to scan the data or index pages to find the appropriate records. A scan is the opposite of a seek, where a seek uses the index to pinpoint the records that are needed to satisfy the query. The reason you want to find and fix your scans is that they generally require more I/O and also take longer to process. This is something you will notice with an application that grows over time: when it is first released performance is great, but over time as more data is added the index scans take longer and longer to complete. To find these issues you can start by running Profiler or setting up a server side trace and look for statements that have high read values. Once you have identified the statements you can look at the query plan to see if there are scans occurring.
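If you would rather not run a trace, the plan cache exposes similar read information. The following is a sketch against sys.dm_exec_query_stats that lists cached statements ordered by total logical reads, which is a good starting point for finding scan-heavy queries:
SELECT TOP 10
qs.total_logical_reads,
qs.execution_count,
SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC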
Here is a simple query that we can run. First use Ctrl+M to turn on the actual execution plan and then execute the query.
SELECT * FROM Person.Contact
Here we can see that this query is doing a Clustered Index Scan. Since this table has a clustered index and there is not a WHERE clause, SQL Server scans the entire clustered index to return all rows. So in this example there is nothing that can be done to improve this query.
SELECT * FROM Person.Contact2
Here we can see that this query is doing a Table Scan. When a table has a clustered index it will do a Clustered Index Scan, and when the table does not have a clustered index it will do a Table Scan. Since this table does not have a clustered index and there is not a WHERE clause, SQL Server scans the entire table to return all rows. So again in this example there is nothing that can be done to improve this query.
SELECT * FROM Person.Contact WHERE LastName =
'Russell'
Here we can see that we still get the Clustered Index Scan, but this time SQL Server is letting us know there is a missing index. If you right click on the query plan and select Missing Index Details you will get a new window with a script to create the missing index.
SELECT * FROM Person.Contact2 WHERE LastName =
'Russell'
We can see that we still have the Table Scan, but SQL Server doesn't offer any suggestions on how to fix this. Below is the suggestion this tool provides, and we can see that it recommends creating a new index, so you can see that using both tools can be beneficial.

Create New Index
So let's create the recommended index on Person.Contact and run the query again.
USE [AdventureWorks]
GO
CREATE NONCLUSTERED INDEX [IX_LastName]
ON [Person].[Contact] ([LastName])
GO
SELECT * FROM Person.Contact WHERE LastName = 'Russell'
Here we can see the query plan has changed, and instead of a Clustered Index Scan we now have an Index Seek, which is much better. We can also see that there is now a Key Lookup operation, which we will talk about in the next section.
Summary
By finding and fixing your Index Scans and Table Scans you can drastically improve performance, especially for larger tables. So take the time to identify where your scans may be occurring and create the necessary indexes to solve the problem. One thing you should be aware of is that too many indexes also causes issues, so make sure you keep a balance on how many indexes you create for a particular table.
12. Eliminating Bookmark (Key/RID) Lookups
Overview
When we were looking at the index scan and table scan section we were able to eliminate the scan, which was replaced with an index seek, but this also introduced a Key Lookup, which is something else you may want to eliminate to improve performance. A key lookup occurs when data is found in a non-clustered index, but additional data is needed from the clustered index to satisfy the query and therefore a lookup occurs. If the table does not have a clustered index then a RID Lookup occurs instead.
In this section we will look at how to find Key/RID Lookups and ways to eliminate them.
Explanation
The reason you would want to eliminate Key/RID Lookups is that they require an additional operation to find the data and may also require additional I/O. I/O is one of the biggest performance hits on a server, and any way you can eliminate or reduce I/O is a performance gain. So let's take a look at an example query and the query plan. Before we do this we first want to add the nonclustered index on LastName.
USE [AdventureWorks]
GO
CREATE NONCLUSTERED INDEX [IX_LastName]
ON [Person].[Contact] ([LastName])
GO
Now we can use Ctrl+M to turn on the actual execution plan and run the select.
SELECT * FROM Person.Contact WHERE LastName = 'Russell'
If we look at the execution plan we can see that we have an Index Seek using the new index, but we also have a Key Lookup on the clustered index. The reason for this is that the nonclustered index only contains the LastName column, but since we are doing a SELECT * the query has to get the other columns from the clustered index and therefore we have a Key Lookup. The other operator we have is the Nested Loops operator, which joins the results from the Index Seek and the Key Lookup.
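One way to measure the extra work the Key Lookup causes is to compare logical reads for the two queries with SET STATISTICS IO, assuming the IX_LastName index created above is in place:
SET STATISTICS IO ON
SELECT * FROM Person.Contact WHERE LastName = 'Russell'
SELECT LastName FROM Person.Contact WHERE LastName = 'Russell'
SET STATISTICS IO OFF
The first statement reports more logical reads, because every matching row requires an additional read from the clustered index.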
SELECT LastName FROM Person.Contact WHERE LastName = 'Russell'
Here we can see that we no longer have a Key Lookup and we also no longer have the Nested Loops operator.
SELECT * FROM Person.Contact WHERE LastName = 'Russell'
SELECT LastName FROM Person.Contact WHERE LastName = 'Russell'
Below we can see that the first statement takes 99% of the batch and the second statement takes 1%, so this is a big improvement.
The Person.Contact table has a clustered index on ContactID, so if we include this column in the query we can still do just an Index Seek.
SELECT ContactID, LastName FROM Person.Contact WHERE LastName = 'Russell'
Here we can see that we only need to do an Index Seek to include both of these columns.
SELECT FirstName, LastName FROM Person.Contact WHERE LastName = 'Russell'
Here the Key Lookup returns, because FirstName is not part of the index. Luckily there are a few options to handle this.
Creating a Covering Index
A covering index basically does what it implies: it covers the query by including all of the columns that are needed. So if our need is to always include FirstName and LastName we can modify our index as follows to include both LastName and FirstName.
DROP INDEX [IX_LastName] ON [Person].[Contact]
GO
CREATE NONCLUSTERED INDEX [IX_LastName]
ON [Person].[Contact] ([LastName], [FirstName])
GO
And if we look at the execution plan we can see that we eliminated the Key Lookup once again.
Creating an Index with Included Columns
Another option is to use the included columns feature for an index. This allows you to include additional columns so they are stored with the index, but are not part of the index tree. So this allows you to take advantage of the features of a covering index and reduces storage needs within the index tree. Another benefit is that you can include additional data types that cannot be part of a covering index. The syntax for the index with included columns is as follows:
DROP INDEX [IX_LastName] ON [Person].[Contact]
GO
CREATE NONCLUSTERED INDEX [IX_LastName]
ON [Person].[Contact] ([LastName])
INCLUDE ([FirstName])
GO
Here we can see the execution plan is the same for both options.
13. Discovering Unused Indexes
Overview
To ensure that data access can be as fast as possible, SQL Server, like other relational database systems, utilizes indexing to find data quickly. SQL Server has different types of indexes that can be created such as clustered indexes, non-clustered indexes, XML indexes and Full Text indexes. The benefit of having more indexes is that SQL Server can access the data quickly if an appropriate index exists. The downside to having too many indexes is that SQL Server has to maintain all of these indexes, which can slow things down, and indexes also require additional storage. So as you can see, indexing can both help and hurt performance.
In this section we will focus on how to identify indexes that exist, but are not being used and therefore can be dropped to improve performance and decrease storage requirements.
Explanation
When SQL Server 2005 was introduced it added Dynamic Management Views (DMVs) that allow you to get additional insight as to what is going on within SQL Server. One of these areas is the ability to see how indexes are being used. There are two DMVs that we will discuss. Note that these views store cumulative data, so when SQL Server is restarted the counters go back to zero; be aware of this when monitoring your index usage.
DMV - sys.dm_db_index_operational_stats
This DMV allows you to see insert, update and delete information for various aspects of an index. Basically this shows how much effort was used in maintaining the index based on data changes. If you query the view and return all columns, the output may be confusing, so the query below focuses on a few key columns. To learn more about the output for all columns you can check out Books Online.
SELECT OBJECT_NAME(A.[OBJECT_ID]) AS [OBJECT NAME],
I.[NAME] AS [INDEX NAME],
A.LEAF_INSERT_COUNT,
A.LEAF_UPDATE_COUNT,
A.LEAF_DELETE_COUNT
FROM SYS.DM_DB_INDEX_OPERATIONAL_STATS (db_id(),NULL,NULL,NULL ) A
INNER JOIN SYS.INDEXES AS I
ON I.[OBJECT_ID] = A.[OBJECT_ID]
AND I.INDEX_ID = A.INDEX_ID
WHERE OBJECTPROPERTY(A.[OBJECT_ID],'IsUserTable') = 1
Below we can see the number of Inserts,
Updates and Deletes that occurred for each index, so this shows how much work
SQL Server had to do to maintain the index. 
DMV - sys.dm_db_index_usage_stats
This DMV shows you how many times the index was used for user queries. Again there are several other columns that are returned if you query all columns, and you can refer to Books Online for more information.
SELECT OBJECT_NAME(S.[OBJECT_ID]) AS [OBJECT NAME],
I.[NAME] AS [INDEX NAME],
USER_SEEKS,
USER_SCANS,
USER_LOOKUPS,
USER_UPDATES
FROM SYS.DM_DB_INDEX_USAGE_STATS AS S
INNER JOIN SYS.INDEXES AS I ON I.[OBJECT_ID] = S.[OBJECT_ID] AND I.INDEX_ID = S.INDEX_ID
WHERE OBJECTPROPERTY(S.[OBJECT_ID],'IsUserTable') = 1
AND S.database_id = DB_ID()
Here we can see seeks, scans, lookups and updates.
- The seeks refer to how many times an index seek occurred for that index. A seek is the fastest way to access the data, so this is good.
- The scans refer to how many times an index scan occurred for that index. A scan is when multiple rows of data had to be searched to find the data. Scans are something you want to try to avoid.
- The lookups refer to how many times the query required data to be pulled from the clustered index or the heap (when the table does not have a clustered index). Lookups are also something you want to try to avoid.
- The updates refer to how many times the index was updated due to data changes, which should correspond to the first query above.
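Putting these counters together, a sketch query that lists indexes in the current database which have been maintained but never used to satisfy a query could look like this:
SELECT OBJECT_NAME(S.[OBJECT_ID]) AS [OBJECT NAME],
I.[NAME] AS [INDEX NAME],
USER_UPDATES
FROM SYS.DM_DB_INDEX_USAGE_STATS AS S
INNER JOIN SYS.INDEXES AS I ON I.[OBJECT_ID] = S.[OBJECT_ID] AND I.INDEX_ID = S.INDEX_ID
WHERE OBJECTPROPERTY(S.[OBJECT_ID],'IsUserTable') = 1
AND S.database_id = DB_ID()
AND USER_SEEKS = 0 AND USER_SCANS = 0 AND USER_LOOKUPS = 0
ORDER BY USER_UPDATES DESC
Note that an index that has never been touched at all will not appear in this DMV, so a LEFT JOIN from sys.indexes can catch those as well.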

Identifying Unused Indexes
So based on the output above you should focus on the output from the second query. If you see indexes where there are no seeks, scans or lookups, but there are updates, this means that SQL Server has not used the index to satisfy a query but still needs to maintain the index. Remember that the data from these DMVs is reset when SQL Server is restarted, so make sure you have collected data for a long enough period of time to determine which indexes may be good candidates to be dropped.
14. Investigating I/O Bottlenecks
Overview
SQL Server is usually a high I/O activity process, and in most cases the database is larger than the amount of memory installed on the server, so SQL Server has to pull data from disk to satisfy queries. In addition, since the data in databases is constantly changing, these changes need to be written to disk. Another process that can consume a lot of I/O is the TempDB database. The TempDB database is a temporary working area for SQL Server to do such things as sorting and grouping. The TempDB database also resides on disk, and depending on how many temporary objects are created this database could be busier than your user databases. Since I/O is such an important part of SQL Server performance you need to make sure your disk subsystem is not the bottleneck. In the old days this was much easier to do, since most servers had locally attached storage. These days most SQL Servers use SAN or NAS storage, and to further complicate things more and more SQL Servers are running in a virtualized environment.
Explanation
There are several different methods that can be used to track I/O performance, but as mentioned above, with SAN / NAS storage and virtualized SQL Server environments this is getting harder and harder to track, and the rules have changed as far as what should be tracked to determine if there is an I/O bottleneck. The advantage is that there are several tools available at both the storage level and the virtual level to aid in performance, but we will not cover these here. There are basically two options you have to monitor I/O bottlenecks, SQL Server DMVs and Performance Monitor counters. There are other tools as well, but these are two options that will exist in every SQL Server environment.
DMV - sys.dm_io_virtual_file_stats
This DMV will give you cumulative file stats for each database and each database file, including both the data and log files. Based on this data you can determine which file is the busiest from a read and/or write perspective. The output also includes I/O stall information for reads, writes and total. The I/O stall is the total time, in milliseconds, that users waited for I/O to be completed on the file. By looking at the I/O stall information you can see how much time was spent waiting for I/O to complete, and therefore how long users were waiting.
The data that is returned from this DMV is cumulative, which means that each time you restart SQL Server the counters are reset. Since the data is cumulative you can run this once and then run the query again in the future and compare the deltas for the two time periods. If the I/O stalls are high compared to the length of that time period then you may have an I/O bottleneck.
SELECT
cast(DB_Name(a.database_id) as varchar(128)) as Database_Name,
b.physical_name, *
FROM
sys.dm_io_virtual_file_stats(null, null) a
INNER JOIN sys.master_files b ON a.database_id = b.database_id and a.file_id = b.file_id
ORDER BY Database_Name
Here is partial output from the above command.
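To make the stall figures easier to interpret you can divide them by the number of I/O operations to get an average latency per read and per write. Here is a sketch against the same DMV:
SELECT DB_NAME(a.database_id) AS Database_Name,
b.physical_name,
a.io_stall_read_ms / NULLIF(a.num_of_reads, 0) AS avg_read_stall_ms,
a.io_stall_write_ms / NULLIF(a.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) a
INNER JOIN sys.master_files b ON a.database_id = b.database_id AND a.file_id = b.file_id
ORDER BY avg_read_stall_ms DESC
The NULLIF avoids a divide-by-zero error for files that have not had any reads or writes yet.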
Performance Monitor
Performance Monitor is a Windows tool that lets you capture statistics about SQL Server, memory usage, I/O usage, etc. This tool can be run interactively using the GUI, or you can set it up to collect information behind the scenes which can be reviewed at a later time. This tool is found in the Control Panel under Administrative Tools. There are several counters related to I/O and they are located under Physical Disk and Logical Disk. The Physical Disk performance object consists of counters that monitor hard or fixed disk drives on a computer. The Logical Disk performance object consists of counters that monitor logical partitions of hard or fixed disk drives. For the most part, they both contain the same counters. In most cases you will probably use the Physical Disk counters. Here is a partial list of the available counters.


- Avg. Disk sec/Read is the average time, in seconds, of a read of data from the disk.
- Avg. Disk sec/Write is the average time, in seconds, of a write of data to the disk.
Resource Monitor
Another tool that you can use is the Resource Monitor. This can be launched from Task Manager or from the Control Panel. Below you can see the Disk tab that shows current processes using disk, the active disk files and storage at the logical and physical level. The Response Time (ms) column is helpful to see how long it is taking to service the I/O requests.

Additional Information
I/O issues may not always be a problem with your disk subsystem. Just because you see a slowdown or I/O waits occurring, there may be other issues that you need to consider such as missing indexes, poorly written queries, fragmentation or out of date statistics. We will cover these topics as well in this tutorial.