Wednesday, February 13, 2008

Recovery models

Simple Recovery Model

This recovery model simplifies database maintenance by keeping the transaction log virtually maintenance free. In exchange, recoverability is limited: transaction log backups are not possible, so the database can be restored only to the most recent full or differential backup.

Bulk-Logged Recovery Model

A database in this recovery model uses minimal logging for bulk import operations: only space allocation and deallocation is logged for them. Point-of-failure recovery is still possible, but point-in-time recovery is not supported for a log backup that contains bulk-logged changes.

Full Recovery Model

SQL Server performs full transaction logging for any bulk load operations if a database is in Full Recovery model. Transaction log backups should be performed at regular intervals for maximum recoverability. This model provides the safest mode of operation for production systems.
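The model in effect can be inspected and changed with ALTER DATABASE; a quick sketch (the database name mydb is illustrative; the catalog view is SQL Server 2005 syntax):

```sql
-- Check the current recovery model (SQL Server 2005 catalog view)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mydb'

-- Switch between models
ALTER DATABASE mydb SET RECOVERY SIMPLE
ALTER DATABASE mydb SET RECOVERY BULK_LOGGED
ALTER DATABASE mydb SET RECOVERY FULL
```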




Complete/Full - Backs up the entire database.

Differential - Backs up only the extents modified since the last complete backup.

Transaction Log - Backs up the active portion and truncates the inactive portion of the transaction log.

File / Filegroup - Backs up individual files and filegroups within a database.
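Each of the four types maps to a BACKUP statement; a sketch with illustrative names and paths:

```sql
BACKUP DATABASE mydb TO DISK = 'C:\mydb_full.bak'                    -- complete/full
BACKUP DATABASE mydb TO DISK = 'C:\mydb_diff.bak' WITH DIFFERENTIAL  -- differential
BACKUP LOG mydb TO DISK = 'C:\mydb_log.trn'                          -- transaction log
BACKUP DATABASE mydb FILEGROUP = 'PRIMARY'
    TO DISK = 'C:\mydb_fg.bak'                                       -- filegroup
```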

T-SQL and DB tales

create database a
on
(
name = 'a_data',
filename = 'C:\a.mdf',
size = 2MB, --sizes without a suffix default to MB
maxsize = 10MB, --or UNLIMITED
filegrowth = 2 --grows by 2 MB
)
Log on
(
name = 'a_log',
filename = 'C:\a.ldf',
size = 1MB,
maxsize = 5MB,
filegrowth = 2
)



create database b
on
(
name = 'b_data',
filename = 'C:\b.mdf',
size = 2MB, --all default sizes are in MB
maxsize = 10MB,
filegrowth = 2
),
filegroup new
(
name = 'b_data2',
filename = 'C:\b.ndf',
size = 2MB, --all default sizes are in MB
maxsize = 10MB,
filegrowth = 2
)
Log on
(
name = 'b_log',
filename = 'C:\b.ldf',
size = 1MB,
maxsize = 5MB,
filegrowth = 2
)


backup database b to disk = 'C:\b.bak'

restore database c from disk = 'C:\b.bak'
with
move 'b_data' to 'C:\c.mdf',
move 'b_data2' to 'C:\c.ndf',
move 'b_log' to 'C:\c.ldf'

alter database c add filegroup new2

alter database c
add file
(
name = 'c_data3',
filename = 'C:\c3.ndf',
size = 2MB, --all default sizes are in MB
maxsize = 10MB,
filegrowth = 2
) to filegroup new2



backup database a to disk = 'c:\a.bak' with differential


restore database b from disk = 'c:\a.bak' with file = 1, norecovery,
move 'a_data' to 'D:\Data\b.mdf',
move 'a_log' to 'E:\Log\b.ldf'


restore database b from disk = 'c:\a.bak' with file = 2, norecovery,
move 'a_data' to 'D:\Data\b.mdf',
move 'a_log' to 'E:\Log\b.ldf'

restore database b with recovery

Server Startup parameters

Default parameters

-d master_file_path
The fully qualified path for the master database file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\master.mdf). If you do not provide this option, the existing registry parameters are used.

-e error_log_path
The fully qualified path for the error log file (typically, C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG). If you do not provide this option, the existing registry parameters are used.

-l master_log_path
The fully qualified path for the master database log file (typically C:\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\Data\mastlog.ldf).



-c
Shortens startup time when starting SQL Server from the command prompt. Typically, the SQL Server Database Engine starts as a service by calling the Service Control Manager. Because the SQL Server Database Engine does not start as a service when starting from the command prompt, use -c to skip this step.

-f
Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value (for example, over-committing memory) has prevented the server from starting.

-g memory_to_reserve
Specifies an integer number of megabytes (MB) of memory that SQL Server will leave available for memory allocations within the SQL Server process, but outside the SQL Server memory pool. The memory outside of the memory pool is the area used by SQL Server for loading items such as extended procedure .dll files, the OLE DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements. The default is 256 MB.

-h
Reserves virtual address space for Hot Add memory metadata when AWE is enabled with 32-bit SQL Server 2005. Required for Hot-Add memory with 32-bit AWE, but consumes about 500 MB of virtual address space and makes memory tuning more difficult. Not required for 64-bit SQL Server. Hot Add Memory is only available for Windows Server 2003, Enterprise and Datacenter editions.

-m
Starts an instance of SQL Server in single-user mode. When you start an instance of SQL Server in single-user mode, only a single user can connect, and the CHECKPOINT process is not started. CHECKPOINT guarantees that completed transactions are regularly written from the disk cache to the database device.

-n
Does not use the Windows application log to record SQL Server events. If you start an instance of SQL Server with -n, we recommend that you also use the -e startup option. Otherwise, SQL Server events are not logged.

-s
Allows you to start a named instance of SQL Server 2005. Without the -s parameter set, the default instance will try to start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the \mssql$Instance1\binn directory to start sqlservr.exe -s instance1.

-T trace#
Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect.

-x
Disables the keeping of CPU time and cache-hit ratio statistics. Allows maximum performance.

Attach/Detach

Detaching a Database

To detach a SQL Server 2000 database, you can use the sp_detach_db system stored procedure. This stored procedure can also run UPDATE STATISTICS on all tables before detaching. The syntax is as follows:

sp_detach_db [ @dbname = ] 'dbname'
[ , [ @skipchecks = ] 'skipchecks' ]

In the above, the parameters to the stored procedure are as follows:

* [@dbname =] 'dbname' is the database name. dbname is nvarchar(128) with a default of NULL.
* [@skipchecks =] 'skipchecks' determines whether UPDATE STATISTICS is run or skipped. skipchecks is nvarchar(10) with a default of NULL. If 'skipchecks' is true, UPDATE STATISTICS is skipped; if false, UPDATE STATISTICS is run.


The following example detaches the pubs database and runs UPDATE STATISTICS on all tables before detaching:

EXEC sp_detach_db 'pubs', 'false'

Attaching a Database

When you attach a database, you must specify at least the name and physical location of the primary data file. If one or more of the database files have changed location since the database was detached, you must specify the name and physical location of those files in addition to the primary file.

To attach a SQL Server 2000 database, you can use the sp_attach_db system stored procedure. The syntax is as follows:

sp_attach_db [ @dbname = ] 'dbname',
[ @filename1 = ] 'filename_n' [ ,...16 ]

In the above command:

* [@dbname =] 'dbname' is the database name. dbname is nvarchar(128) with a default of NULL.
* [@filename1 =] 'filename_n' is a database file name. filename_n is nvarchar(260) with a default of NULL. Up to 16 file names can be specified.


The following example attaches the pubs database, which consists of two files, pubs.mdf and pubs_log.ldf, in the C:\MSSQL\Data directory:

EXEC sp_attach_db @dbname = 'pubs',
@filename1 = 'C:\MSSQL\Data\pubs.mdf',
@filename2 = 'C:\MSSQL\Data\pubs_log.ldf'

Attaching a Single-File Database

A single-file database is a database that has only one data file. Such a database can be attached to an instance of SQL Server 2000 without its transaction log file; when the data file is attached, SQL Server creates a new transaction log file automatically.

To attach a single-file database, you can use the sp_attach_single_file_db system stored procedure. The syntax is as follows:

sp_attach_single_file_db [ @dbname = ] 'dbname'
, [ @physname = ] 'physical_name'

* [@dbname =] 'dbname' is the database name. dbname is nvarchar(128) with a default of NULL.
* [@physname =] 'physical_name' is the database file name. physical_name is nvarchar(260) with a default of NULL.


The following example attaches the single data file of the pubs database from the C:\MSSQL\Data directory:

EXEC sp_attach_single_file_db @dbname = 'pubs',
@physname = 'C:\MSSQL\Data\pubs.mdf'

Move system databases

Prerequisites
• Make a current backup of all databases, especially the master database, from their current location.
• You must have system administrator (sa) permissions.
• You must know the name and the current location of all data files and log files for the database.


Moving user databases
The following example moves a database that is named mydb. This database contains one data file, Mydb.mdf, and one log file, Mydblog.ldf. If the database that you are moving has more data files or log files, specify the files in a comma-delimited list in the sp_attach_db stored procedure. The sp_detach_db procedure does not change regardless of how many files the database contains because the sp_detach_db procedure does not list the files.
1. Detach the database as follows:

use master
go
sp_detach_db 'mydb'
go

2. Next, copy the data files and the log files from the current location (D:\Mssql7\Data) to the new location (E:\Sqldata).
3. Re-attach the database. Point to the files in the new location as follows:

use master
go
sp_attach_db 'mydb','E:\Sqldata\mydbdata.mdf','E:\Sqldata\mydblog.ldf'
go

Verify the change in file locations by using the sp_helpfile stored procedure:

use mydb
go
sp_helpfile
go

The filename column values should reflect the new locations.


Moving system databases - Make sure that the SQL Server Agent is not currently running.


Moving the model database

To move the model database, you must start SQL Server together with the -c option, the -m option, and trace flag 3608. Trace flag 3608 prevents SQL Server from recovering any database except the master database.

After you add the -c option, the -m option, and trace flag 3608, follow these steps:
1. Stop and then restart SQL Server.
2. Detach the model database by using the following commands:

use master
go
sp_detach_db 'model'
go

3. Move the Model.mdf and Modellog.ldf files from the D:\Mssql7\Data folder to the E:\Sqldata folder.
4. Reattach the model database by using the following commands:

use master
go
sp_attach_db 'model','E:\Sqldata\model.mdf','E:\Sqldata\modellog.ldf'
go

5. Remove -c -m -T3608 from the startup parameters in SQL Server Enterprise Manager or in SQL Server Configuration Manager.
6. Stop and then restart SQL Server. You can verify the change in file locations by using the sp_helpfile stored procedure. For example, use the following command:

use model
go
sp_helpfile
go


Moving the MSDB database

To move the MSDB database, you must start SQL Server together with the -c option, the -m option, and trace flag 3608. Trace flag 3608 prevents SQL Server from recovering any database except the master database. To add the -c option, the -m option, and trace flag 3608, follow the steps in the "Moving the model database" section. After you add them, follow these steps:
1. Stop, and then restart SQL Server.
2. Make sure that the SQL Server Agent service is not currently running.
3. Detach the msdb database as follows:

use master
go
sp_detach_db 'msdb'
go

4. Move the Msdbdata.mdf and Msdblog.ldf files from the current location (D:\Mssql8\Data) to the new location (E:\Mssql8\Data).
5. Remove -c -m -T3608 from the startup parameters box in Enterprise Manager.
6. Stop and then restart SQL Server.

Note If you try to reattach the msdb database by starting SQL Server together with the -c option, the -m option, and trace flag 3608, you may receive the following error message:
Server: Msg 615, Level 21, State 1, Line 1
Could not find database table ID 3, name 'model'.
7. Reattach the msdb database as follows:

use master
go
sp_attach_db 'msdb','E:\Mssql8\Data\msdbdata.mdf','E:\Mssql8\Data\msdblog.ldf'
go

Note If you use this procedure together with moving the model database, you are trying to detach the msdb database while you detach the model database. When you do this, you must reattach the model database first, and then reattach the msdb database. If you reattach the msdb database first, you receive the following error message when you try to reattach the model database:
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
In this case, you must detach the msdb database, reattach the model database, and then reattach the msdb database.

After you move the msdb database, you may receive the following error message:
Error 229: EXECUTE permission denied on object 'ObjectName', database 'master', owner 'dbo'.
This problem occurs because the ownership chain has been broken. The database owners for the msdb database and for the master database are not the same. In this case, the ownership of the msdb database had been changed. To work around this problem, run the following Transact-SQL statements. You can do this by using the Osql.exe command-line utility (SQL Server 7.0 and SQL Server 2000) or the Sqlcmd.exe command-line utility (SQL Server 2005):

USE MSDB
Go
EXEC sp_changedbowner 'sa'
Go


Moving the master database

1. Change the path for the master data files and the master log files in SQL Server Enterprise Manager.

Note You may also change the location of the error log here.
2. Right-click the SQL Server in Enterprise Manager and then click Properties.
3. Click Startup Parameters to see the following entries:

-dD:\MSSQL7\data\master.mdf
-eD:\MSSQL7\log\ErrorLog
-lD:\MSSQL7\data\mastlog.ldf

-d is the fully qualified path for the master database data file.

-e is the fully qualified path for the error log file.

-l is the fully qualified path for the master database log file.
4. Change these values as follows:
a. Remove the current entries for the Master.mdf and Mastlog.ldf files.
b. Add new entries specifying the new location:

-dE:\SQLDATA\master.mdf
-lE:\SQLDATA\mastlog.ldf

5. Stop SQL Server.
6. Copy the Master.mdf and Mastlog.ldf files to the new location (E:\Sqldata).
7. Restart SQL Server.



Moving the tempdb database

You can move tempdb files by using the ALTER DATABASE statement.
1. Determine the logical file names for the tempdb database by using sp_helpfile as follows:

use tempdb
go
sp_helpfile
go

The logical name for each file is contained in the name column. This example uses the default file names of tempdev and templog.
2. Use the ALTER DATABASE statement, specifying the logical file name as follows:

use master
go
Alter database tempdb modify file (name = tempdev, filename = 'E:\Sqldata\tempdb.mdf')
go
Alter database tempdb modify file (name = templog, filename = 'E:\Sqldata\templog.ldf')
go

You should receive the following messages that confirm the change:
Message 1
File 'tempdev' modified in sysaltfiles. Delete old file after restarting SQL Server.
Message 2
File 'templog' modified in sysaltfiles. Delete old file after restarting SQL Server.
3. Using sp_helpfile in tempdb will not confirm these changes until you restart SQL Server.
4. Stop and then restart SQL Server.

Few Q/A

How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?

One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
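A sketch of the junction-table approach (table and column names are illustrative):

```sql
CREATE TABLE Students (StudentID int PRIMARY KEY, Name varchar(50))
CREATE TABLE Courses  (CourseID  int PRIMARY KEY, Title varchar(50))

-- Junction table: composite primary key built from the keys of both sides
CREATE TABLE StudentCourses
(
    StudentID int REFERENCES Students (StudentID),
    CourseID  int REFERENCES Courses (CourseID),
    PRIMARY KEY (StudentID, CourseID)
)
```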

What's the difference between a primary key and a unique key?

Both primary key and unique key constraints enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas unique creates a nonclustered index. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL.
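A minimal sketch of the two default behaviors (table name is illustrative):

```sql
CREATE TABLE t
(
    id   int NOT NULL PRIMARY KEY,  -- clustered index by default; NULLs not allowed
    code varchar(10) NULL UNIQUE    -- nonclustered index by default; one NULL allowed
)
```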


Define candidate key, alternate key, composite key.

A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys.

A key formed by combining two or more columns is called a composite key.


What are defaults? Is there a column to which a default can't be bound?

A default is a value that is used for a column if no value is supplied for that column when inserting data. IDENTITY columns and timestamp columns can't have defaults bound to them.
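A sketch of DEFAULT constraints alongside an IDENTITY column (names are illustrative):

```sql
CREATE TABLE Orders
(
    OrderID   int IDENTITY(1,1) PRIMARY KEY,  -- IDENTITY: no default can be bound
    OrderDate datetime DEFAULT GETDATE(),     -- default used when the value is omitted
    Status    varchar(10) DEFAULT 'NEW'
)

-- No values supplied: OrderID comes from IDENTITY, the rest from their defaults
INSERT INTO Orders DEFAULT VALUES
```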


What is a transaction and what are ACID properties?

A transaction is a logical unit of work in which, all the steps must be performed or none. ACID stands for Atomicity, Consistency, Isolation, Durability. These are the properties of a transaction.

Atomicity
A transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them is performed.

Consistency
When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.

Isolation
Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either recognizes data in the state it was in before another concurrent transaction modified it, or it recognizes the data after the second transaction has completed, but it does not recognize an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.

Durability
After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.



What's the difference between DELETE TABLE and TRUNCATE TABLE commands?

DELETE is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it doesn't log the deletion of each row; instead it logs the deallocation of the data pages of the table, which makes it faster. TRUNCATE TABLE can still be rolled back if it is executed inside an explicit transaction.
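The difference can be seen side by side (table name mytable is illustrative):

```sql
DELETE FROM mytable WHERE id > 100  -- each row deletion logged; fires DELETE triggers
TRUNCATE TABLE mytable              -- logs page deallocation only; resets IDENTITY seed

-- TRUNCATE is still transactional inside an explicit transaction:
BEGIN TRAN
TRUNCATE TABLE mytable
ROLLBACK TRAN                       -- the rows come back
```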


What is an index? What are the types of indexes? How many clustered indexes can be created on a table? I create a separate index on each column of a table. what are the advantages and disadvantages of this approach?

Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve the data quicker.

Indexes are of two types: clustered and non-clustered. When you create a clustered index on a table, all the rows in the table are stored in the order of the clustered index key, so there can be only one clustered index per table. Non-clustered indexes have their own storage, separate from the table data. Both kinds are stored as B-tree structures; in a non-clustered index the leaf-level nodes hold the index key and its row locator. The row locator is either the RID or the clustered index key, depending on the absence or presence of a clustered index on the table.

If you create an index on each column of a table, it improves query performance, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (INSERT, UPDATE, DELETE) become slower, as every time data changes in the table all the indexes need to be updated. Another disadvantage is that indexes need disk space: the more indexes you have, the more disk space is used.
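For reference, the basic index statements (names are illustrative; INCLUDE is SQL Server 2005 syntax):

```sql
CREATE CLUSTERED INDEX ix_emp_id ON Employees (EmployeeID)
CREATE NONCLUSTERED INDEX ix_emp_lastname ON Employees (LastName)

-- A covering nonclustered index (SQL Server 2005): Salary is carried at the leaf level
CREATE NONCLUSTERED INDEX ix_emp_dept ON Employees (DeptID) INCLUDE (Salary)
```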


What are the steps you will take to improve performance of a poor performing query?

This is a very open ended question and there could be a lot of reasons behind the poor performance of a query. But some general issues that you could talk about would be: No indexes, table scans, missing or out of date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written query with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables.

Some of the tools/ways that help you troubleshooting performance problems are: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT /2000 Performance monitor, Graphical execution plan in Query Analyzer.


What is a deadlock and what is a live lock? How will you go about resolving deadlocks?

Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.

A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.


What are statistics, under what circumstances they go out of date, how do you update them?

Statistics determine the selectivity of the indexes. If an indexed column has unique values, the selectivity of that index is higher than that of an index on non-unique values. The query optimizer uses these statistics in determining whether to choose an index or not while executing a query.

Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version
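Statistics can be refreshed manually; a sketch (table and index names are illustrative):

```sql
UPDATE STATISTICS mytable                  -- all statistics on one table
UPDATE STATISTICS mytable ix_mytable_col1  -- a single index
UPDATE STATISTICS mytable WITH FULLSCAN    -- scan all rows instead of sampling
EXEC sp_updatestats                        -- every table in the current database
```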



• What are the temporary tables? What are temporary variables? Where they are stored? What is the purpose of Temporary tables? Can we use a temporary table that is created in one stored procedure in another stored procedure? What is the scope of temp tables?


Temporary tables are used to hold temporary data for the connection that creates the table. These tables get created in the tempdb system database.

There is one more interesting difference from a permanent table: a foreign key cannot be defined on a temporary table.

No; a temp table created in one stored procedure is dropped when that procedure ends, so it cannot be used from another stored procedure (although procedures called by the creating procedure can see it).

The temp table exists only for the connection that creates it. No other connection can see the same temp table.
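A quick sketch of the variants (names are illustrative):

```sql
CREATE TABLE #local (id int)    -- local temp table: visible only to the creating connection
CREATE TABLE ##global (id int)  -- global temp table: visible to all connections
                                -- until the creator disconnects

DECLARE @t TABLE (id int)       -- table variable: scoped to the batch, no statistics
```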

• What are the indexes? Types of indexes? What is the difference between the clustered and non-clustered index? When they are used?

An Index stores the location of the row being looked up. It speeds up data retrieval as the engine does not have to perform a complete ‘table scan’ to find the row being queried.

A clustered index physically sorts the table on the member columns. A non-clustered index stores the key value and a row locator indicating where to find the row.

A nice example to explain this difference is a book index that stores terms with page numbers and behaves like non-clustered index; while a phone book stores names and numbers – if the number is stored with the name – behaves like a clustered index.


• What are the advantages of Stored Procedures? When are they used?

1. Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.
2. Reduce network traffic
3. Provide better security to your data. If you use stored procedures exclusively, you can remove direct SELECT, INSERT, UPDATE, and DELETE rights from the tables and force developers to use stored procedures as the method for data access.
4. Encapsulate business logic or decision making processes

• What is Query Plan? Can you see a query plan? Where it is stored?

A query plan details how the SQL Query Optimizer will run the query. The optimizer plans which portion of the query to execute first and which index to use. Depending on the way the tables are set up, the nature of the query, the indexes available, etc. SQL Server will determine the optimal technique - be it to scan through the table from start to finish, to use one or more specific indexes or various other ways.

SET SHOWPLAN_TEXT ON – TSQL, Include Actual Execution Plan - SSMS

The query plan is stored in the SQL Server memory’s Procedure Cache.
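A usage sketch for SET SHOWPLAN_TEXT (the query against the pubs sample database is illustrative):

```sql
SET SHOWPLAN_TEXT ON
GO
-- The estimated plan is returned as text; the query itself is NOT executed
SELECT * FROM authors WHERE au_lname = 'Green'
GO
SET SHOWPLAN_TEXT OFF
GO
```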

• What is pre-compilation?

Whenever a stored procedure is executed for the first time, its execution plan is compiled and stored in the SQL Server’s procedure cache. For any subsequent run of the stored procedure, it doesn’t need to compile it again as it is already pre-compiled. This is unlike any embedded statement in the application that needs to be compiled each time it is fired.

• What is the SQL server Database structure? What are four types of databases?

The four system databases present on every instance are master, model, msdb, and tempdb; user databases are created alongside them.

• What is the use of tempdb database?


Stores temporary data (intermediate results of large sort operations and server’s internal operations) and database objects (temporary tables, stored procedures)

Maximum Capacity Specifications

Property SQL 2000 SQL 2005
Bytes per GROUP BY, ORDER BY 8,060 8,060
Bytes per index key 900 900
Bytes per foreign key 900 900
Bytes per primary key 900 900
Bytes per row 8,060 8,060
Clustered indexes per table 1 1
Columns per index key 16 16
Columns per foreign key 16 16
Columns per primary key 16 16
Columns per base table 1,024 1,024
Columns per SELECT statement 4,096 4,096
Columns per INSERT statement 1,024 1,024
Database size 1,048,516 TB 524,258 TB
Databases per instance of SQL Server 32,767 32,767
Filegroups per database 32,767 32,767
Files per database 32,767 32,767
File size (data) 32 TB 16 TB
File size (log) 32 TB 2 TB
Foreign key table references per table 253 253
Nested stored procedure levels 32 32
Nested subqueries 32 32
Nested trigger levels 32 32
Nonclustered indexes per table 249 249
Rows per table Limited by available storage


The sum of the number of all objects in a database cannot exceed 2,147,483,647

Tuesday, February 12, 2008

Normalize life

1NF Eliminate Repeating Groups -
Make a separate table for each set of related attributes, and give each table a primary key.

2NF Eliminate Redundant Data -
If an attribute depends on only part of a multi-valued key, remove it to a separate table.

3NF Eliminate Columns Not Dependent On Key -
If attributes do not contribute to a description of the key, remove them to a separate table.

BCNF Boyce-Codd Normal Form -
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.

4NF Isolate Independent Multiple Relationships -
No table may contain two or more 1:n or n:m relationships that are not directly related.

5NF Isolate Semantically Related Multiple Relationships -
There may be practical constrains on information that justify separating logically related many-to-many relationships.

ONF Optimal Normal Form -
a model limited to only simple (elemental) facts, as expressed in Object Role Model notation.

DKNF Domain-Key Normal Form -
a model free from all modification anomalies.


http://support.microsoft.com/kb/283878

Friday, February 1, 2008

Performance Counters

SQLServer: Buffer Manager - Buffer cache hit ratio
The percentage of pages that were found in memory. The higher the value, the better; preferred around the 90% mark. This includes the availability of both the procedure cache and the data cache in the system.

SQLServer: General Statistics - User Connections
The number of users currently connected to the SQL Server.

SQLServer:Access Methods - Full Scans / sec
A value greater than 1 or 2 indicates that we are having table / index page scans. We need to analyze how these can be avoided.

SQLServer:Access Methods - Table Lock Escalations/sec
This gives us the number of times per second that a table lock was requested. A high number warrants a revisit of the query and the indexes on the table.

Processor - %Processor Time
The percentage of time the processor spent executing a non-idle thread. It is calculated as 100 percent minus the percentage of time the processor was idle. This is an indicator of the overall CPU utilization in the system.

Processor - %Interrupt Time
The percentage of time the processor spent servicing hardware interrupts. This can be caused by any activity on the server machine, and should be close to 0 during the testing process.

Processor - Processor Queue Length
This counter indicates the number of threads that are waiting in the processor queue; that is, the number of threads waiting to be run by a processor. If this value is greater than the number of processors, we have a CPU bottleneck in the system.

Processor - Context Switches/sec
A context switch occurs when the OS or the application forces the processor to stop executing one thread and start executing another. This value should be as small as possible. Context switches are unavoidable on multi-processor machines; any value below 10,000 per second is generally fine.

PhysicalDisk - %Disk Read Time
Time spent by the disk on read operations. Disk-by-disk analysis can also be done to narrow down any read IO bottlenecks.

PhysicalDisk - %Disk Write Time
Time spent by the disk on write operations. Disk-by-disk analysis can also be done to narrow down any write IO bottlenecks.

PhysicalDisk - Avg. Disk Queue Length
Average number of read and write requests that were queued for the particular disk. A high value suggests an IO bottleneck.

Memory - Page Faults/sec
Total number of page faults handled per second. This value needs to be as small as possible.

Memory - Pages/sec
The number of pages read from or written to disk to resolve hard page faults. This is the sum of the Pages Input/sec and Pages Output/sec counters.

Lock-Unlock!

There are two scopes to this discussion - Lock modes & Lock granularity.

But before it, we begin with a prelude to Lock Escalation.

The process of converting many fine-grain locks into fewer coarse-grain locks, reducing system overhead while increasing the probability of concurrency contention, is known as Lock Escalation.

As the Database Engine acquires low-level locks, it also places intent locks on the objects containing the lower-level objects:

1. When locking rows or index key ranges, the Database Engine places an intent lock on the pages containing the rows or keys.

2. When locking pages, the Database Engine places an intent lock on the higher level objects containing the pages. In addition to placing an intent object (OBJECT) lock on the table, intent heap or B-tree (HOBT) locks are requested on:

a. The nonclustered index if the pages are nonclustered index pages.

b. The clustered index if the pages are clustered index pages. This includes data pages in tables that have a clustered index.

c. The heap of data pages if the pages are data pages in a table that does not have a clustered index.



Lock escalation is initiated at either of these times:

1. When a single Transact-SQL statement acquires 5,000 locks on a single table or index.

2. When the number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.


Assume that a single SELECT statement is a join that accesses three tables in this sequence: TableA, TableB, and TableC. The statement acquires 3,000 row locks in the clustered index for TableA and 5,000 row locks in the clustered index for TableB, but has not yet accessed TableC. When the Database Engine detects that the statement has acquired the 5,000 row locks in TableB, it attempts to escalate all locks held by the session in TableB. It also attempts to escalate all locks held by the session in TableA, but since the number of locks on TableA is < 5000, the escalation will not succeed. No lock escalation is attempted for TableC because it had not yet been accessed when the escalation occurred.
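One way to watch this happen is to count the locks held in the current database via the sys.dm_tran_locks DMV (SQL Server 2005 and later); a sketch:

```sql
-- Count locks in the current database by resource type and mode;
-- after escalation, thousands of KEY/RID locks collapse into one OBJECT lock
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC
```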

1. The simplest and safest way to prevent lock escalation is to keep transactions short and to reduce the lock footprint of expensive queries so that the lock escalation thresholds are not exceeded.

Break up large batch operations into several smaller operations.
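As a sketch of the batching idea (the table name mytable, its processed flag, and the 1,000-row batch size are all illustrative):

```sql
-- Delete in batches of 1,000 rows so each statement stays
-- well below the 5,000-lock escalation threshold
DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM mytable WHERE processed = 1
    SET @rows = @@ROWCOUNT
END
```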

2. Reduce the query's lock footprint by making the query as efficient as possible. Large scans or large numbers of Bookmark Lookups may increase the chance of lock escalation; additionally, it increases the chance of deadlocks, and generally adversely affects concurrency and performance. After you find the query that causes lock escalation, look for opportunities to create new indexes or to add columns to an existing index to remove index or table scans and to maximize the efficiency of index seeks.

3. Lock escalation cannot occur if a different SPID is currently holding an incompatible table lock. Lock escalation always escalates to a table lock, never to page locks. Additionally, if a lock escalation attempt fails because another SPID holds an incompatible TAB lock, the query that attempted escalation does not block while waiting for a TAB lock. Instead, it continues to acquire locks at its original, more granular level (row, key, or page), periodically making additional escalation attempts.

Therefore, one method to prevent lock escalation on a particular table is to acquire and hold a lock on a different connection that is not compatible with the escalated lock type. For example, assume that you must run a batch job that modifies a large number of rows in the mytable table and that has caused blocking because of lock escalation. If this job always completes in less than an hour, you might create a Transact-SQL job that contains the following code, and schedule the new job to start several minutes before the batch job's start time:

BEGIN TRAN
-- Touch no rows, but take UPDLOCK/HOLDLOCK so the session holds an IX lock on the table
SELECT * FROM mytable WITH (UPDLOCK, HOLDLOCK) WHERE 1 = 0
-- Hold the lock for one hour, then release it
WAITFOR DELAY '1:00:00'
COMMIT TRAN

This query acquires and holds an IX lock on mytable for one hour, which prevents lock escalation on the table during that time. The batch does not modify any data and does not block other queries.

You can also disable lock escalation by enabling trace flag 1211. However, this trace flag disables all lock escalation globally for the instance of SQL Server.

Lock escalation serves a very useful purpose in SQL Server by maximizing the efficiency of queries that are otherwise slowed down by the overhead of acquiring and releasing several thousands of locks. Lock escalation also helps to minimize the required memory to keep track of locks.


You can also reduce the amount of locking by using an isolation level that does not generate shared locks for read operations:

1. The READ COMMITTED isolation level when the READ_COMMITTED_SNAPSHOT database option is ON.

2. The SNAPSHOT isolation level.

3. The READ UNCOMMITTED isolation level. This can only be used for systems that can operate with dirty reads.
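As a sketch of how these options are enabled (the database name mydb is illustrative; ALTER DATABASE may wait for other connections to the database to finish):

```sql
-- Make READ COMMITTED reads use row versioning instead of shared locks
ALTER DATABASE mydb SET READ_COMMITTED_SNAPSHOT ON

-- Allow sessions to opt in to snapshot isolation
ALTER DATABASE mydb SET ALLOW_SNAPSHOT_ISOLATION ON
```

A session then opts in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT before starting its transaction.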


You can also use the PAGLOCK or TABLOCK table hints to have the Database Engine use page, heap, or index locks instead of row locks. Using this option, however, increases the problem of users blocking other users attempting to access the same data, and it should not be used in systems with more than a few concurrent users.


You can also use trace flags 1211 and 1224 to disable all or some lock escalations.
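For example, enabling one of them instance-wide (the -1 argument to DBCC TRACEON applies the flag globally; 1224 is usually the safer of the two because it still allows escalation under memory pressure):

```sql
-- Disable lock escalation based on the number of locks,
-- but keep escalation under memory pressure
DBCC TRACEON (1224, -1)
```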


Lock Modes

1. Shared (S)
Used for read operations that do not change or update data, such as a SELECT statement. No other transactions can modify the data while shared (S) locks exist on the resource.


2. Update (U)
Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.

Update (U) locks prevent a common form of deadlock. In a repeatable read or serializable transaction, the transaction reads data, acquiring a shared (S) lock on the resource (page or row), and then modifies the data, which requires lock conversion to an exclusive (X) lock. If two transactions acquire shared-mode locks on a resource and then attempt to update data concurrently, one transaction attempts the lock conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock conversion must wait because the exclusive lock for one transaction is not compatible with the shared-mode lock of the other transaction; a lock wait occurs. The second transaction attempts to acquire an exclusive (X) lock for its update. Because both transactions are converting to exclusive (X) locks, and they are each waiting for the other transaction to release its shared-mode lock, a deadlock occurs.

To avoid this potential deadlock problem, update (U) locks are used. Only one transaction can obtain an update (U) lock to a resource at a time. If a transaction modifies a resource, the update (U) lock is converted to an exclusive (X) lock.
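A minimal sketch of the read-then-modify pattern with an update lock taken up front (the accounts table and its columns are illustrative):

```sql
BEGIN TRAN
-- Take a U lock while reading, so two sessions cannot both hold
-- S locks on the row and then deadlock converting to X
SELECT balance FROM accounts WITH (UPDLOCK) WHERE id = 42
UPDATE accounts SET balance = balance - 100 WHERE id = 42
COMMIT TRAN
```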



3. Exclusive (X)
Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
With an exclusive (X) lock, no other transactions can modify data; read operations can take place only with the use of the NOLOCK hint or read uncommitted isolation level.


4. Intent
Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

Intent locks serve two purposes:

To prevent other transactions from modifying the higher-level resource in a way that would invalidate the lock at the lower level.


To improve the efficiency of the Database Engine in detecting lock conflicts at the higher level of granularity.


For example, a shared intent lock is requested at the table level before shared (S) locks are requested on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because the Database Engine examines intent locks only at the table level to determine if a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine if a transaction can lock the entire table.
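The hierarchy can be seen in sys.dm_tran_locks while a transaction holds row locks; a sketch (mytable and its columns are illustrative):

```sql
BEGIN TRAN
UPDATE mytable SET col1 = col1 WHERE id = 1
-- Expect an X lock on the KEY or RID, plus IX locks on the PAGE and OBJECT
SELECT resource_type, request_mode
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
COMMIT TRAN
```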



5. Schema
Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).

Schema modification (Sch-M) locks are used when performing a table data definition language (DDL) operation, such as adding a column or dropping a table. During the time the schema modification (Sch-M) lock is held, concurrent access to the table is prevented. This means that all operations outside of the schema modification (Sch-M) lock will be blocked until the lock is released.

Schema stability (Sch-S) locks are used when compiling queries. Schema stability (Sch-S) locks do not block any transactional locks, including exclusive (X) locks. Therefore, other transactions can continue to run while a query is being compiled, including transactions with exclusive (X) locks on a table. However, DDL operations cannot be performed on the table.
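For example, a DDL statement such as the following takes a schema modification (Sch-M) lock for its duration (mytable and the new column are illustrative):

```sql
-- Blocks all other access to the table until the ALTER completes
ALTER TABLE mytable ADD note VARCHAR(100) NULL
```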


6. Bulk Update (BU)
Used when bulk copying data into a table and the TABLOCK hint is specified.
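A sketch of a bulk load that requests bulk update (BU) locks (the table and file path are illustrative):

```sql
-- TABLOCK lets multiple clients bulk load the same table in parallel,
-- each taking a BU lock, while blocking other access to the table
BULK INSERT mytable
FROM 'C:\data.txt'
WITH (TABLOCK)
```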

7. Key-range
Protects the range of rows read by a query when using the serializable transaction isolation level. Ensures that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.
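A sketch of key-range locking in action (mytable and its id column are illustrative):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
-- Key-range locks on the index cover the range 10-20, so other sessions
-- cannot insert a row with id in that range until this transaction ends
SELECT * FROM mytable WHERE id BETWEEN 10 AND 20
COMMIT TRAN
```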


Now the granularity:

1. RID - A row identifier used to lock a single row within a heap.

2. KEY - A row lock within an index used to protect key ranges in serializable transactions.

3. PAGE - An 8-kilobyte (KB) page in a database, such as data or index pages.

4. EXTENT - A contiguous group of eight pages, such as data or index pages.

5. HOBT - A heap or B-tree. A lock protecting an index or the heap of data pages in a table that does not have a clustered index.

6. TABLE - The entire table, including all data and indexes.

7. FILE - A database file.

8. APPLICATION - An application-specified resource.

9. METADATA - Metadata locks.

10. ALLOCATION_UNIT - An allocation unit.

11. DATABASE - The entire database.