Thursday, December 2, 2010
Inserting into a table with single column and identity
CREATE TABLE T (A INT IDENTITY (1,1))
Hmmmm.. it is as ridiculously simple as:
INSERT T DEFAULT VALUES
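A quick check (a minimal sketch; it just reuses T from above):
INSERT T DEFAULT VALUES
INSERT T DEFAULT VALUES
SELECT SCOPE_IDENTITY() AS last_identity -- last identity value generated in this scope
SELECT A FROM T                          -- one row per insert: 1, 2, 3, ...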
Sunday, September 19, 2010
Table variable indexes
DECLARE @test TABLE
(
a INT PRIMARY KEY,
b varchar(50),
UNIQUE (b)
)
Table variables only get the indexes created by their PRIMARY KEY and UNIQUE constraints; you cannot add others with CREATE INDEX. Generally, temp tables perform better in situations where indexing beyond those constraints is needed, as sketched below.
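For comparison, a temp table accepts explicit indexes (a minimal sketch; #test and ix_test_b are illustrative names):
create table #test
(
a int primary key, -- backed by a clustered index
b varchar(50)
)
create index ix_test_b on #test (b) -- explicit nonclustered index; not possible on a table variable
drop table #test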
RAID for SQL Server
RAID 0 (disk striping)
This RAID level offers the highest throughput from a write perspective, but there is no redundancy built in. Since your backup files are your last line of defense and there is not a redundant set of data, I would not recommend this option.
RAID 1 (disk mirroring)
This level also offers a high level of I/O throughput as well as a redundant copy. The downside is that the size of the data partition using RAID 1 will be limited based on your physical disk size. One option would be to write your backup file to multiple RAID 1 arrays that are independent of each other and, possibly, to use different controllers or channels to further increase the I/O throughput.
RAID 5 (striping with parity)
With RAID 5 there is a write penalty because of the need to keep the parity in check. If you are really looking for faster backup processing, this RAID level does not make sense.
RAID 10 (mirroring with striping)
With RAID 10, you get the advantages of both RAID 0 and RAID 1. The downside is that it is expensive to implement, but the upside is that your I/O throughput should be very fast. Since RAID 10 stripes across all of the drives in the array, adding more drives to the array increases performance.
Recommended: Consider using independent pairs of RAID 1 volumes over RAID 10.
In addition to RAID levels, here are other things to take into consideration:
Controllers
In setting up your RAID arrays, try to isolate the different types of I/O traffic to different controller cards or channels on the controller cards. Also, the better the controller card, the better the performance.
Disk speeds
Disk drives come in multiple speeds. SCSI drives currently run as fast as 15K rpm and are faster than IDE drives. In most cases, enterprise-wide servers will be using SCSI technology, which is the better choice.
Disk size
You can now get drives that are several GBs in size. This is great if you want to store a lot of data that is not accessed very often, but for writing out a large amount of data as fast as possible, large disk size doesn't help. More drives in the array are more advantageous than a couple of very large drives.
Refer: http://blogs.zdnet.com/Ou/?p=484
Tempdb - separate RAID 0
Data/Log - RAID 10 (pricey), or independent RAID 1 arrays
Friday, September 10, 2010
Performance counters
Performance Object | Counter | Instance | Remarks
Memory | Pages/Sec | | Rate of hard page faults; Average value <>
Physical Disk | Average Disk Queue Length | | Average number of queued disk requests; Average value <>
Processor | % Processor Time | _Total | Average value <>
SQL Server: Access Methods | Freespace Scans/sec | | The areas where the count increases need to be monitored and proper indexes introduced to avoid such scans.
SQL Server: Access Methods | Full Scans/sec | | The areas where the count increases need to be monitored and indexes revisited.
SQL Server: Buffer Manager | Buffer Cache Hit Ratio | | % of requests served out of the buffer cache; Average value >= 90%
SQL Server: Locks | Lock Timeouts/sec | _Total | Should be 0
SQL Server: Locks | Lock Wait Time (ms) | _Total | Should be low
SQL Server: Memory Manager | Memory Grants Pending | | Number of processes waiting for a workspace memory grant; Average value = 0
SQL Server: SQL Statistics | SQL Re-Compilations/sec | | Should be 0 or near it
System | Processor Queue Length | | Number of requests outstanding on the processor; Average value <>
System | Context Switches/sec | | Rate of switching processors from one thread to another; Average value <>
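The SQL Server-specific counters can also be read from inside the server via sys.dm_os_performance_counters (a minimal sketch; the counter names in the filter are illustrative picks from the table above):
select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where counter_name in ('Buffer cache hit ratio', 'Memory Grants Pending', 'Full Scans/sec')
-- note: ratio counters must be divided by their matching 'base' counter to get a percentage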
Wednesday, August 18, 2010
Statistics test - when does it get updated?
-- table definition inferred from the inserts below (it is not shown in the original script)
create table t (a int, b varchar(10), c int)
declare @i int
select @i = 1
while @i < 10000
begin
insert t select @i, 'a' + convert(varchar(7),@i), @i + 100
select @i = @i + 1
end
select * from t
create clustered index ix_t_a on t(a)
create index ix_t_b on t(c)
dbcc show_statistics(t,'ix_t_a')
select * from sys.stats where object_id = object_id('t')
select * from t where c = 111
update statistics t with columns -- COLUMNS updates the column statistics only, not the index statistics
select name, auto_created, stats_date(object_id, stats_id) as update_date from sys.stats
where object_id = object_id('t')
name auto_created update_date
ix_t_a 0 2010-08-05 18:53:52.730
ix_t_b 0 2010-08-05 18:54:03.690
After the query on b below runs, an auto-created column statistic appears, because b has no index:
name auto_created update_date
_WA_Sys_00000002_09CAFFB2 1 2010-08-05 18:55:01.667
select * from t where b = 'a245'
alter index all on t rebuild
drop table t
Thursday, July 1, 2010
Performance story
http://www.sql-server-performance.com/articles/per/Identify_Missing_Indexes_Using_SQL_Server_DMVs_p1.aspx
http://blogs.msdn.com/b/queryoptteam/archive/2006/04/06/570176.aspx
http://blogs.msdn.com/b/heynen/archive/2008/06/28/best-practices-for-dynamic-management-views.aspx
Top 10 worst CPU consuming queries on the SQL Server (since the last time the server was restarted)
SELECT TOP 10
sql.text as sql
, qp.query_plan
, creation_time
, last_execution_time
, execution_count
, (total_worker_time / execution_count) as avg_cpu
, total_worker_time as total_cpu
, last_worker_time as last_cpu
, min_worker_time as min_cpu
, max_worker_time as max_cpu
, (total_physical_reads + total_logical_reads) as total_reads
, (max_physical_reads + max_logical_reads) as max_reads
, (total_physical_reads + total_logical_reads) / execution_count as avg_reads
, max_elapsed_time as max_duration
, total_elapsed_time as total_duration
, ((total_elapsed_time / execution_count)) / 1000000 as avg_duration_sec
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle) sql
CROSS APPLY sys.dm_exec_query_plan (plan_handle) qp
ORDER BY qs.total_worker_time DESC
Best practices and bottleneck identification
http://www.sqlserver-easy.com/content/view/33/44/
Performance dashboard query:
SELECT migs.avg_user_impact,
*
FROM sys.dm_db_missing_index_details AS mid
INNER JOIN sys.dm_db_missing_index_groups AS mig
ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs
ON mig.index_group_handle=migs.group_handle
ORDER BY migs.avg_user_impact desc
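To rank the suggestions, the group stats can be folded into a single number (a sketch; the weighting formula is a common convention, not something the DMVs provide):
select mid.statement, mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) as improvement_measure
from sys.dm_db_missing_index_details as mid
inner join sys.dm_db_missing_index_groups as mig on mig.index_handle = mid.index_handle
inner join sys.dm_db_missing_index_group_stats as migs on mig.index_group_handle = migs.group_handle
order by improvement_measure desc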
http://searchsqlserver.techtarget.com/resources/Microsoft-SQL-Server-Performance-Monitoring-and-Tuning?adg=301324&bucket=ETA
Finding the indexes not used:
select i.name as indexname,
object_name(s.[object_id]) as tablename, s.database_id, s.[object_id], s.user_seeks, last_user_seek, user_scans, last_user_scan,
user_lookups, last_user_lookup, user_updates, last_user_update
from sys.dm_db_index_usage_stats s
join sys.indexes i on s.object_id = i.object_id
and s.index_id = i.index_id
where OBJECTPROPERTY(s.OBJECT_ID,'IsUserTable') = 1
and s.database_id = db_id() -- object_ids are per-database, so restrict to the current database
and user_scans + user_lookups + user_seeks = 0
-- note: this only lists indexes that have a row in the DMV (touched since the last restart);
-- indexes never used at all have no row here and need a LEFT JOIN from sys.indexes to show up
Tuesday, March 30, 2010
Notes on Index Rebuild/Reorganize
ALTER INDEX ALL ON dbo.tTable REBUILD
GO
ALTER INDEX ALL ON dbo.tTable REORGANIZE
GO
The differences between the two:
1) Index rebuild re-creates the index internally and, when that has been achieved, drops the existing index. (Therefore the rebuild needs space for storing the new index.) Index reorganize is the process of physically re-organizing the leaf nodes of the index.
2) During index rebuild, the statistics are also re-computed - same as when a new index gets created. Reorganize, on the other hand, does not update the statistics. Reorganize essentially just swaps one page with another and thus does not require free space for this operation like rebuild does. In fact, reorganize can free up some pages as it does the reorg in two phases - compaction and defrag. A reorganize can remove almost all of the logical fragmentation, but it cannot necessarily fix extent fragmentation, in which the previous and the next extents are not physically contiguous.
3) Another point to note is that an index (clustered or non-clustered) cannot be built online if it contains LOB data (text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data type columns). The ALTER INDEX…REORGANIZE command shown above is the same as DBCC INDEXDEFRAG but there is one difference. ALTER INDEX…REORGANIZE has some additional features like large objects compaction (LOB_COMPACTION). And this is an online operation.
4) Regarding partitions of an index, if an index has multiple partitions, then you cannot rebuild a single partition online. You can reorganize a single index partition online. If you want to rebuild an index with multiple partitions in an online environment, you need to rebuild the entire index which means rebuilding all the partitions.
Reorganizing an index defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical order (left to right) of the leaf nodes. Having the pages in order improves index-scanning performance. The index is reorganized within the existing pages allocated to it; no new pages are allocated. If an index spans more than one file, the files are reorganized one at a time. Pages do not migrate between files.
Reorganizing also compacts the index pages. Any empty pages created by this compaction are removed, providing additional available disk space. Compaction is based on the fill factor value in the sys.indexes catalog view.
The reorganize process uses minimal system resources. Also, reorganizing is automatically performed online. The process does not hold long-term blocking locks; therefore, it will not block running queries or updates.
Reorganize an index when the index is not heavily fragmented (see the fragmentation guidelines later in these notes). If the index is heavily fragmented, you will achieve better results by rebuilding the index.
http://technet.microsoft.com/en-us/library/ms189858.aspx
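Putting the guidelines together, a decision query might look like this (a minimal sketch: the 5%/30% cut-offs follow the guidance above; schema qualification and name quoting are omitted):
select object_name(ps.object_id) as table_name, i.name as index_name,
ps.avg_fragmentation_in_percent,
case
when ps.avg_fragmentation_in_percent > 30 then 'ALTER INDEX ' + i.name + ' ON ' + object_name(ps.object_id) + ' REBUILD'
when ps.avg_fragmentation_in_percent > 5 then 'ALTER INDEX ' + i.name + ' ON ' + object_name(ps.object_id) + ' REORGANIZE'
else 'leave it alone'
end as suggested_action
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'LIMITED') as ps
join sys.indexes as i on ps.object_id = i.object_id and ps.index_id = i.index_id
where i.name is not null -- skip heaps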
----------------
further notes - to rework
Types of searches
* Full scans
* Binary searches
* Linear interpolation - fixed record size, monotonically increasing key
Singleton Lookup
* Index Seek / Clustered Index Seek
* Find record using index pages
Allocation Order Scan
* Table Scan or unordered Clustered/Index Scan
* Page splits during allocation order scans
* Table lock helps (TABLOCK, TABLOCKX) but is not required
Advanced scanning
* Also known as "Merry-go-round" scan
* Multiple scans on the same table
* The first scan starts; a second scan joins it in the middle and later scans the remainder
* Enterprise edition only
* Allocation order scans only
* See http://www.sqlmag.com/Article/ArticleID/49285/sql_server_49285.html
* See http://msdn2.microsoft.com/en-us/library/ms191475.aspx
Range Scan
* Index Scan / Clustered Index Scan
* Find first, then follow next page pointer
Readahead
* Pre-reading of pages in the leaf level during Range Scans
* 1, 8 or 32 pages (8KB, 64KB or 256KB) - only if pages are contiguous
* 128 pages (1024KB) only in Enterprise edition
* Logical fragmentation prevents optimal readahead
* See http://msdn.microsoft.com/en-us/library/ms191475.aspx
Logical Fragmentation
* Next page pointer does not point to next physical page
* DBCC SHOWCONTIG is deprecated now
* Look at avg_fragmentation_in_percent in sys.dm_db_index_physical_stats
* Not relevant if you have a small number of pages (less than 100?)
* Not relevant if all pages are already in the buffer pool (in cache)
* Logical IOs are not necessarily physical IOs
When to do something about it
* When to rebuild/reorganize? It always depends...
* Different for data warehouse or OLTP, for instance
* If you must have a number: 5-30% - ALTER INDEX REORGANIZE
* If you must have a number: >30% - ALTER INDEX REBUILD WITH (ONLINE = ON)*
* See http://msdn.microsoft.com/en-us/library/ms189858.aspx
* Many just rebuild everything every week (could be totally unnecessary)
Extent Fragmentation
* Extents are not contiguous
* Affects readahead, but not much (can read 8 but not 32 pages)
* In a heap, logical fragmentation is actually extent fragmentation
* With multiple files, round robin does not necessarily mean fragmentation
* Interesting KB: http://support.microsoft.com/kb/329526
Page Density
* How much space is actually in use in the page
* Affected by row size (5K fixed-length row leaves 3K unused in every page)
* Affected by page splits
* Affected by FILLFACTOR
* In the end, low page density leads to more IOs
* This is actually a kind of fragmentation
Page Splits
* Page is full and you need to add something (INSERT, UPDATE with larger data)
* Take half the rows to a new page, fix up all the pointers in both
* Could lead to page splits in non-leaf levels, all the way up to the root
* All fully logged, no matter what recovery model you are in
* Typically creates two low-density pages that are not contiguous
* Does not happen with a heap, only with a clustered index
* Does not happen with a key that is ever-increasing (like identity)
Things that cause fragmentation
* GUID as high order key (NEWSEQUENTIALID can help)
* Variable-length columns
* Misconfigured FILLFACTOR
* Wide rows (with 5K rows being the typical example)
* Clustered indexes that cause random insertion order (sales with customer,date)
FILLFACTOR
* Leave space on pages to avoid page splits in leaf level
* Applies only on rebuild, not when doing INSERT/UPDATE
* PAD_INDEX will do the same thing for non-leaf level
* For data warehouse, no need for FILLFACTOR
* For OLTP, it's hard to calculate the ideal factor, no quick answer
* Look at how many splits, how frequently you can rebuild
* Pick a value, watch how it goes. Then change factor or rebuild frequency
* If you must choose a number, try 70%
Symptoms of Fragmentation
* Longer run times
* More disk activity (SET STATISTICS IO ON)
* More logging activity
Looking at Fragmentation
* Query sys.dm_db_index_physical_stats (replaces DBCC SHOWCONTIG)
* Look at avg_fragmentation_in_percent for fragmentation
* Look at avg_page_space_used_in_percent for page density
* You can summarize the results, join with sys.indexes to show index names
* See sample query at http://www.sqlskills.com/blogs/paul/post/Can-GUID-cluster-keys-cause-non-clustered-index-fragmentation.aspx
Querying sys.dm_db_index_physical_stats
* It could be expensive, since it brings lots of pages into the buffer pool
* DETAILED (reads everything)
* SAMPLED (reads a 1% sample of the pages; uses DETAILED if the index has fewer than 10,000 pages)
* LIMITED (parent level of b-tree, same as DBCC SHOWCONTIG WITH FAST)
* Limited can't tell you page density, since it does not read the page level
Fixing fragmentation
* Three main choices
* Rebuild - ALTER INDEX ... REBUILD (replaces DBCC DBREINDEX)
* Recreate - CREATE INDEX WITH DROP_EXISTING
* Reorganize - ALTER INDEX ... REORGANIZE (replaces DBCC INDEXDEFRAG)
* DROP, CREATE (separately) not good due to possible issue with constraints
* You can also decide not to do anything at all
* You don't necessarily have to rebuild everything every day...
Rebuild
* Can be done online (unless you have LOB columns)
* Can use multiple CPUs (control with MAXDOP)
* Works single partition or all partitions
* Rebuilds index statistics (equivalent of a full scan)
* Does not update column statistics
* Can be minimally logged
* Atomic. If you interrupt, it rolls back.
* Online - short-hold Shared lock, short-hold SCHema_Mod lock
* Offline clustered - eXclusive lock
* Offline non-clustered - Shared lock
* Creates new before dropping old. Reads from the old index.
* Always rebuilds everything. Faster for largely fragmented indexes.
Online Index Rebuild
* 1) Preparation - New index is created and set to write-only
* very short Shared lock
* 2) Build - Data is inserted from source.
* Two copies of the data being updated
* Scans know they shouldn't use the other - "anti-matter" record
* 3) Final - Index metadata is updated.
* short-hold SCHema_Mod lock
* Old data is dropped
* Details at http://msdn.microsoft.com/en-us/library/ms191261.aspx
Recreate
* Basically same as rebuild
* Can move to a new location
* Can change the schema
* Good way to do a shrink without using shrink
Reorganize
* Is always online (even if you have LOB columns)
* Always single threaded
* Works single partition or all partitions
* Table IX lock - blocks only TABLOCK requests and escalated locks
* Addresses only existing fragmentation
* Faster for lightly fragmented indexes
* Does not use much extra space
Reorganize Phase 1 - Page compaction
* Moves data from neighbor pages to get closer to FILLFACTOR
* Leaf level only. Works on a sliding window (a few pages at a time)
* Tries eXclusive lock on the pages involved. If can't, moves on.
* If it can empty pages, it will deallocate them.
* It will also remove ghost records.
Reorganize Phase 2 - Page defragment
* Make logical order the same as the allocation order
* Leaf level only. Shuffles pages around without using extra space.
* Uses one extra free page (page type 19 - unlinked reorg page)
* Locks enough pages to get the operation done without blocking others.
* Reminded me of the "Towers of Hanoi" problem :-)
Wednesday, March 17, 2010
Statistics basic
select * from sys.stats where object_id = object_id('brkprd')
To retrieve detailed information about an index or statistic
dbcc show_statistics ('brkprd','IX_Brkprd_AcctIdIsNonZeroBP')
More on indexes
By default, a primary key constraint creates a clustered index.
In a nonclustered index, the lowest level of the index does not contain the data page of the table. Instead, it contains the information that allows SQL Server to navigate to the data pages it needs. For tables that have a clustered index, the leaf node of the nonclustered index contains the clustered index keys. For example, on a Customers table clustered on Customer_ID, the leaf node of a nonclustered index would contain the Customer_ID key.
If the underlying table does not have a clustered index (this data structure is known as a heap), the leaf node of the nonclustered index contains a row locator to the heap data pages.
Friday, March 12, 2010
Sorting Number Strings
create table t (a varchar(10))
insert t select '1a1'
insert t select '4*0'
insert t select '1aw0'
insert t select 's20'
insert t select 't02'
insert t select '11r1'
insert t select '11v4g7@4/'
/* scratch tests and expected numeric-aware ordering:
1
02
4
10
11
20
111
select * from t order by len(a),a
a98a3455
select * from t order by a
select a,dbo.fn_StringNumSort(a) from t order by dbo.fn_StringNumSort(a)
select dbo.fn_StringNumSort('102a')
*/
Alter Function dbo.fn_StringNumSort
(
@mStringVar Varchar(max)
)
Returns Varchar(max)
As
Begin
--1. Shred chars and numbers
--2. Concat 10000000000 to numbers so that the total length stays 11 - Substring('10000000000', 1, 11-Len(Set of numbers))
--3. Cases - abc13ab, a1bc13a1b1, 1, a, 02, 1, 10
-- declare @mStringVar varchar(max)
-- select @mStringVar = '02'
declare @result varchar(max), @VarLen int, @StartPointer int, @EndPointer int;
select @result = '', @VarLen = Len(@mStringVar), @StartPointer = 1, @EndPointer = 1;
While @EndPointer <= @VarLen
Begin
--48 to 57 is the ASCII for numbers 0-9; find the first position of number
If ASCII(Substring(@mStringVar,@EndPointer,1)) >= 48 And ASCII(Substring(@mStringVar,@EndPointer,1)) <= 57 AND @EndPointer <= @VarLen
Begin
-- select 'num', @StartPointer, @EndPointer, ASCII(Substring(@mStringVar,@EndPointer,1))
--While there are numbers, move the pointer ahead till it reaches the next char
While ASCII(Substring(@mStringVar,@EndPointer,1)) >= 48 AND ASCII(Substring(@mStringVar,@EndPointer,1)) <= 57 AND @EndPointer <= @VarLen
select @EndPointer = @EndPointer + 1
--Handle if there is single number or multiple number - single number will return 0 for (@EndPointer-@StartPointer)
If @StartPointer < @EndPointer
select @result =
CASE LEN(@result)
When 0 Then Substring('10000000000', 1, 11-Len(Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer))) + Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer)
-- Else @result + '-' + Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer)
Else @result + Substring('10000000000', 1, 11-Len(Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer))) + Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer)
END
Else If @StartPointer = @EndPointer
select @result =
CASE LEN(@result)
When 0 Then Substring('10000000000', 1, 11-Len(Substring(@mStringVar,@StartPointer,1))) + Substring(@mStringVar,@StartPointer,1)
-- Else @result + '-' + Substring(@mStringVar,@StartPointer,1)
Else @result + Substring('10000000000', 1, 11-Len(Substring(@mStringVar,@StartPointer,1))) + Substring(@mStringVar,@StartPointer,1)
END
--Mark the start of the chars
select @StartPointer = @EndPointer
-- select @StartPointer, @EndPointer
-- select @result
End
Else
Begin
-- select 'char', @StartPointer, @EndPointer, ASCII(Substring(@mStringVar,@EndPointer,1)),Substring(@mStringVar,@EndPointer,1)
--While there are chars, move the pointer ahead till it reaches the next number
While (ASCII(Substring(@mStringVar,@EndPointer,1)) < 48 Or ASCII(Substring(@mStringVar,@EndPointer,1)) > 57) AND @EndPointer <= @VarLen
select @EndPointer = @EndPointer + 1
--Handle if there is single number or multiple chars - single chars will return 0 for (@EndPointer-@StartPointer)
If @StartPointer < @EndPointer
select @result =
CASE LEN(@result)
When 0 Then Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer)
Else @result + Substring(@mStringVar,@StartPointer,@EndPointer-@StartPointer)
END
Else If @StartPointer = @EndPointer
select @result =
CASE LEN(@result)
When 0 Then Substring(@mStringVar,@StartPointer,1)
Else @result + Substring(@mStringVar,@StartPointer,1)
END
--Mark the start of the numbers
select @StartPointer = @EndPointer
-- select @StartPointer, @EndPointer
-- select @result
End
End
Return (@result)
End
Monday, January 25, 2010
SQL 2008 What's new
Policy-Based Management enables the efficient management of multiple SQL Server instances from a single location. Easily create policies that control security, database options, object naming conventions, and other settings at a highly granular level. Policies can evaluate servers for compliance with a set of predefined conditions and prevent undesirable changes being made to servers.
Data compression
Data compression reduces the amount of storage space needed to store tables and indexes, which enables more efficient storage of data. Data compression does not require changes to be made to applications in order to be enabled.
Transparent Data Encryption
Transparent Data Encryption enables data to be stored securely by encrypting the database files. If the disks that contain database files become compromised, data in those files is protected because that data can only be decrypted by an authorized agent. SQL Server performs the encryption and decryption directly, so the process is entirely transparent to connecting applications. Applications can continue to read and write data to and from the database as they normally would. Backup copies of encrypted database files are also automatically encrypted.
Add Hot CPUs and Hot memory
Hot-add CPUs, a feature available with the 64-bit edition SQL Server Enterprise, allows CPUs to be dynamically added to servers as needed, without the need to shut down the server or limit client connections. Hot-add memory enables memory to be added in the same way.
Change Data Capture
Use Change Data Capture (CDC) to track changes to the data in your tables. CDC uses a SQL Server Agent job to capture insert, update and delete activity. This information is stored in a relational table, from where it can be accessed by data consumers such as SQL Server 2008 Integration Services. Use CDC in conjunction with Integration Services to incrementally populate data warehouses, enabling you to produce more frequent reports that contain up-to-date information. It also allows sync-enabled mobile and desktop applications to perform efficient data synchronization between client and server, without requiring changes to the database.
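Enabling CDC is a two-step process (a minimal sketch; the database and table names are illustrative):
use MyDatabase
exec sys.sp_cdc_enable_db -- enable CDC at the database level
exec sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'Orders', -- illustrative table
@role_name = null         -- no gating role
-- changes are then captured into cdc.dbo_Orders_CT by a SQL Server Agent job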
LINQ
Language Integrated Query (LINQ) is a .NET Framework version 3.5 feature that provides developers with a common syntax to query any data source from client applications. Using LINQ to SQL or LINQ to Entities, developers can select, insert, update, and delete data that is stored in SQL Server 2008 databases using any .NET programming language such as C# and VB.NET.
FILESTREAM data
FILESTREAM enables binary large object (BLOB) data to be stored in the Microsoft Windows NTFS file system instead of in a database file. Data that is stored using FILESTREAM behaves like any other data type and can be manipulated using T-SQL select, insert, update and delete statements. Unlike traditional BLOB storage, FILESTREAM data remains logically part of the database while being stored efficiently outside the database in the NTFS file system. FILESTREAM data participates in all SQL Server transactions and backup operations, along with the rest of the database.
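A FILESTREAM column is declared as varbinary(max) with the FILESTREAM attribute and the table needs a ROWGUIDCOL column (a sketch; it assumes the instance and database already have FILESTREAM enabled and a FILESTREAM filegroup configured):
create table dbo.Documents
(
doc_id uniqueidentifier rowguidcol not null unique default newid(),
doc varbinary(max) filestream null
)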
DATE / TIME Data Types
SQL Server 2008 introduces several new date- and time-based data types. DATETIME2 extends the supported date range and can store date and time data to a precision of 100 nanoseconds. The new DATE and TIME data types enable you to store date and time data separately. The new DATETIMEOFFSET data type introduces time zone support by storing the date, the time, and an offset such as 'plus 5 hours'.
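A quick sketch of the new types (the values are illustrative):
declare @d date = '2010-01-18'
declare @t time(7) = '13:45:30.1234567'
declare @d2 datetime2(7) = '2010-01-18 13:45:30.1234567'
declare @o datetimeoffset = '2010-01-18 13:45:30.1234567 +05:00'
select @d as d, @t as t, @d2 as d2, @o as o, switchoffset(@o, '-08:00') as o_pacific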
Spatial data with GEOGRAPHY and GEOMETRY data types
New GEOGRAPHY and GEOMETRY data types allow spatial data to be stored directly in a SQL Server 2008 database. Use these spatial data types to work with location-based data that describes physical locations, such as longitude and latitude.
GEOGRAPHY enables you to represent three-dimensional geodetic data such as GPS applications use. GEOMETRY enables you to represent two-dimensional planar data such as points on maps. Spatial data types help you to answer questions like ‘How many of our stores are located within 20 miles of Seattle?’
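For example, a distance check with the GEOGRAPHY type (a minimal sketch; the coordinates are illustrative and 4326 is the WGS 84 SRID):
declare @seattle geography = geography::STGeomFromText('POINT(-122.335 47.608)', 4326)
declare @store geography = geography::STGeomFromText('POINT(-122.200 47.610)', 4326)
select @seattle.STDistance(@store) / 1609.344 as distance_in_miles -- STDistance returns meters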
SQL 2005 What's New
Database Mirroring
Extend log shipping capabilities with the database mirroring solution. You will be able to use database mirroring to enhance availability of your SQL Server systems by setting up automatic failover to a standby server.
Online Restore
With SQL Server 2005, database administrators are able to perform a restore operation while an instance of SQL Server is running. Online restore improves the availability of SQL Server because only the data being restored is unavailable; the rest of the database remains online and available.
Dedicated Administrator Connection
SQL Server 2005 provides a dedicated administrator connection that administrators can use to access a running server even if the server is locked or otherwise unavailable. This capability enables administrators to troubleshoot problems on a server by executing diagnostic functions or Transact-SQL statements.
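From the command line, the DAC is opened with sqlcmd's -A switch (the server name is illustrative); in a Management Studio query window, prefix the server name with ADMIN: instead.
sqlcmd -S MyServer -E -A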
Snapshot Isolation
Snapshot Isolation (SI) level is provided at the database level. With SI, users can access the last committed row using a transactionally consistent view of the database. This capability provides greater scalability.
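Snapshot isolation must first be allowed at the database level (a minimal sketch; the database name is illustrative):
alter database MyDatabase set allow_snapshot_isolation on
-- then, per session:
set transaction isolation level snapshot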
Data Partitioning
Data partitioning is enhanced with native table and index partitioning that enables efficient manageability of large tables and indexes.
Development:
Hosted Common Language Runtime
With SQL Server 2005 developers can create database objects using familiar languages such as Microsoft Visual C# .NET and Microsoft Visual Basic .NET. Developers can also create two new objects—user-defined types and aggregates.
Native XML Support
Native XML data can be stored, queried, and indexed in a SQL Server database—allowing developers to build new classes of connected applications around Web services and across any platform or device.
Security Enhancements
The security model in SQL Server 2005 separates users from objects, provides fine-grained access, and enables greater control of data access. Additionally, all system tables are implemented as views, providing more control over database system objects.
Transact-SQL Enhancements
SQL Server 2005 provides new language capabilities for developing scalable database applications. These enhancements include TRY/CATCH error handling, recursive query capabilities, the PIVOT and APPLY relational operators, ROW_NUMBER and other row-ranking functions, and more.
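Two of these in one small sketch, TRY/CATCH error handling around a ROW_NUMBER query (the query itself is illustrative):
begin try
select row_number() over (order by name) as rn, name
from sys.objects
end try
begin catch
select error_number() as err, error_message() as msg
end catch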
Reliable Messaging for Asynchronous Applications
Service Broker is a robust messaging infrastructure that provides reliable transactional delivery of critical messages between servers—with the scalable high-performance that is expected with asynchronous queuing.
Visual Studio Integration
Tight integration with Microsoft Visual Studio and the .NET Framework streamlines development and debugging of data-driven applications. Developers can build database objects, such as stored procedures, using any .NET language and can seamlessly debug across .NET and Transact-SQL (TSQL) languages.
Business Intelligence:
Integration Services (SSIS)
SQL Server Integration Services (SSIS) is a next generation data integration platform that can integrate data from any source. SSIS provides a scalable and extensible platform that empowers development teams to build, manage, and deploy integration solutions to meet unique integration needs.
Reporting Services
SQL Server Reporting Services is a comprehensive, server-based reporting solution designed to help you author, manage, and deliver both paper-based and interactive Web-based reports.
Monday, January 18, 2010
Fixed Server Roles
Members of the dbcreator fixed server role can create, alter, drop, and restore any database.
The diskadmin fixed server role is used for managing disk files.
Members of the processadmin fixed server role can terminate processes that are running in an instance of SQL Server.
Members of the securityadmin fixed server role manage logins and their properties. They can GRANT, DENY, and REVOKE server-level permissions. They can also GRANT, DENY, and REVOKE database-level permissions. Additionally, they can reset passwords for SQL Server logins.
Members of the serveradmin fixed server role can change server-wide configuration options and shut down the server.
Members of the setupadmin fixed server role can add and remove linked servers, and also execute some system stored procedures.
Members of the sysadmin fixed server role can perform any activity in the server. By default, all members of the Windows BUILTIN\Administrators group (the local administrators group) are members of the sysadmin fixed server role.
For details on role-level permission, execute sp_srvrolepermission rolename, e.g., sp_srvrolepermission processadmin
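Role membership is managed with sp_addsrvrolemember / sp_dropsrvrolemember (the login name below is illustrative):
exec sp_addsrvrolemember 'CORP\JaneDoe', 'dbcreator'
exec sp_dropsrvrolemember 'CORP\JaneDoe', 'dbcreator'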
Trap InputBuffer for all processes
declare @sql varchar(50), @spid varchar(10) -- @spid holds the session id as a string for dbcc inputbuffer
Create Table #inputbuffer ([EventType] nvarchar(30),[Parameters] int,[EventInfo] nvarchar(4000),[SPID] varchar(10))
declare activesp cursor fast_forward for
select spid from master.dbo.sysprocesses where dbid > 1
open activesp
fetch next from activesp into @spid
while @@fetch_status = 0
begin
set @sql = 'dbcc inputbuffer(' + @spid + ')'
insert into #inputbuffer([EventType], [Parameters], [EventInfo])
exec (@sql)
update #inputbuffer set spid = @spid where spid is null
fetch next from activesp into @spid
end
close activesp
deallocate activesp
select * from #inputbuffer
drop table #inputbuffer
GO
Sunday, January 17, 2010
Re-organize or Rebuild index?
select object_name(A.object_id) as table_name, I.name as index_name, -- select list reconstructed; the original post showed only the from/join
A.avg_fragmentation_in_percent, A.page_count, A.record_count
from sys.dm_db_index_physical_stats(db_id(),NULL,NULL,NULL,'detailed') as A -- detailed shows the record_count
inner join sys.indexes as I
On A.object_id = I.object_id And A.index_id = I.index_id
avg_fragmentation_in_percent:
<=30 - Reorganize
>30 - Rebuild
Database Recovery phases
1. Data Copy: Involves copying all data, log, and index pages from the backup file/media to the database files.
2. Redo/Roll-forward: Involves applying the logged transactions to the data copied from the backup to roll forward that data to the recovery point. This includes uncommitted data too; therefore, the database is in an inconsistent state.
3. Undo: Involves rolling back the uncommitted transactions, bringing the database in a consistent state.
Finally, the Recovery brings the database online for users.
When the NORECOVERY option is used, the restore skips the Undo phase: uncommitted transactions are not rolled back, leaving the database in a restoring (inconsistent) state so that further differential or log backups can still be applied.
Saturday, January 16, 2010
Some exercises on backup/recovery
create database test
on
(
name = 'test_data',
filename = 'D:\Data\test.mdf',
size = 2,
maxsize = unlimited,
filegrowth = 10%
)
log on
(
name = 'test_log',
filename = 'E:\Log\test.ldf',
size = 1,
maxsize = unlimited,
filegrowth = 10
)
alter database test set recovery simple
alter database test set recovery bulk_logged
use test
create table t1 (c int)
backup database test to disk = 'F:\testfull2.bak' with init
backup database test to disk = 'F:\testdiff2.bak' with differential,init
backup log test to disk = 'F:\testlog3.bak'
insert t1 select 1
insert t1 select 2
select * from t1
restore headeronly from disk = 'F:\testdiff1.bak'
restore headeronly from disk = 'F:\testdiff2.bak'
restore headeronly from disk = 'F:\testfull1.bak'
restore headeronly from disk = 'F:\testfull2.bak'
restore headeronly from disk = 'F:\testlog1.bak'
restore headeronly from disk = 'F:\testlog2.bak'
restore headeronly from disk = 'F:\testlog3.bak'
drop database test
use master
restore database test from disk = 'F:\testfull2.bak' with norecovery,
move 'test_data' to 'F:\Data\test_data.mdf',
move 'test_log' to 'F:\Log\test_log.ldf'
restore database test from disk = 'F:\testdiff2.bak' with norecovery
sp_helpdb test
restore log test from disk = 'F:\testlog3.bak' with norecovery
restore database test with recovery
drop database test
select * from t1
sp_detach_db test
create database test on (filename = 'F:\Data\test_data.mdf') log on (filename = 'F:\Log\test_log.ldf') for attach
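A related exercise is restoring to a point in time with STOPAT (a sketch; the timestamp is illustrative and assumes an unbroken log chain up to that point):
restore database test from disk = 'F:\testfull2.bak' with norecovery
restore log test from disk = 'F:\testlog3.bak' with stopat = '2010-01-16 10:30:00', recovery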
Friday, January 15, 2010
Activity views
select * from sys.dm_exec_sessions
select * from sys.dm_exec_requests
More on Backups
Full database backup results in:
1. Backs up any activity that took place during the backup.
2. Backs up any uncommitted transactions in the transaction log. (When the database is recovered from this backup, the uncommitted transactions are rolled back.)
Transaction log backup results in:
1. Backs up the log from the last successful [Backup Log] statement to the end of the current log.
2. Truncates the log up to the beginning of the active portion of the log and discards the information in the inactive portion.
Active portion of the transaction log starts at the point of the oldest open transaction and continues to the end of the log.
A Tail log backup includes the portion of the log that has not been backed up (active portion) - this backup does not truncate the log and is used when the data files are damaged but the log is undamaged.
Differential backup results in:
1. Backs up the parts of the database that have changed since the last full database backup.
2. Backs up only the extents that changed since the last base backup.
3. Backs up the activity that took place during the differential backup and any uncommitted transactions as well - this is similar to full backup.
Simplifying Database Mail
1. Add the SQL Server Agent service account to the msdb database users and then to the DatabaseMailUserRole.
2. Configure Database Mail to create a default mail profile, keeping 'Anonymous Authentication' for the SMTP account.
3. Map this mail profile to SQL Server Agent (Properties -> Alert System).
4. Restart Agent.
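Once configured, mail can be sent with sp_send_dbmail (a minimal sketch; the profile name and recipient are illustrative):
exec msdb.dbo.sp_send_dbmail
@profile_name = 'DefaultProfile',
@recipients = 'dba@example.com',
@subject = 'Test from Database Mail',
@body = 'It works.'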
Saturday, January 9, 2010
Tail log
SQL Server 2005 and later versions usually require that you take a tail-log backup before you start to restore a database. The tail-log backup prevents work loss and keeps the log chain intact. When you are recovering a database to the point of a failure, the tail-log backup is the last backup of interest in the recovery plan. If you cannot back up the tail of the log, you can recover a database only to the end of the last backup that was created before the failure.
Not all restore scenarios require a tail-log backup. You do not have to have a tail-log backup if the recovery point is contained in an earlier log backup, or if you are moving or replacing (overwriting) the database and do not need to restore it to a point of time after the most recent backup. Also, if the log files are damaged and a tail-log backup cannot be created, you must restore the database without using a tail-log backup. Any transactions committed after the latest log backup are lost.
If the database is online, before starting a restore sequence, back up the tail of the log using WITH NORECOVERY whenever the next action you plan to perform on the database is a restore operation:
BACKUP LOG database_name TO <backup_device> WITH NORECOVERY
If the database is offline and does not start:
Try to take a tail-log backup. Because no transactions can occur at this time, using WITH NORECOVERY is optional. If the database is damaged, use either WITH CONTINUE_AFTER_ERROR or WITH NO_TRUNCATE.
BACKUP LOG database_name TO <backup_device> WITH { CONTINUE_AFTER_ERROR | NO_TRUNCATE }
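A concrete sketch against the test database from the earlier exercises (the file name is illustrative):
-- database still online, about to restore over it: capture the tail and leave the db in restoring state
backup log test to disk = 'F:\test_tail.trn' with norecovery
-- database damaged and will not start: capture whatever log is readable
backup log test to disk = 'F:\test_tail.trn' with continue_after_error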