Difference between HIGH AVAILABILITY and DISASTER RECOVERY
High Availability and Disaster Recovery (DR) are often confused or assumed to be the same thing. A high-availability solution does not mean that you are prepared for a disaster. High availability covers hardware or system-related failures, whereas disaster recovery covers catastrophic failures caused by environmental factors. Although some high-availability options may help when designing a DR strategy, they are not the be-all and end-all solution.
The goal of high availability is to provide an uninterrupted user experience with zero data loss. According to Microsoft’s SQL Server Books Online, “A high-availability solution masks the effects of a hardware or software failure and maintains the availability of applications so that the perceived downtime for users is minimized”.
Disaster recovery is generally part of a larger process known as Business Continuity Planning (BCP), which plans for any IT or non-IT eventuality. A server disaster recovery plan helps in taking proper preventive, detective, and corrective measures to mitigate any server-related disaster. For more details, see the very good book "The Shortcut Guide to Untangling the Differences Between High Availability and Disaster Recovery" by Richard Siddaway. In my experience, the best and quickest way to remember the difference is:
High Availability is @ LAN and Disaster Recovery is @ WAN
Temporary Stored Procedures
Posted: June 28, 2013 in Database Administrator. Tags: # procedure, SQL Server Temporary procedure, SQL Server temporary stored procedure, SQL SERVER Temporary Stored Procedures, Stored procedure in tempdb, Temporary stored procedures
Temporary stored procedures are like normal stored procedures but, as their name suggests, have a short-term existence. There are two types of temporary stored procedures, local and global; they are analogous to temporary tables and are created with the # and ## prefixes added to the procedure name. The symbol # denotes a local temporary stored procedure, while ## denotes a global temporary stored procedure. These procedures do not exist after SQL Server is shut down.
Temporary stored procedures are useful when connecting to earlier versions of SQL Server that do not support the reuse of execution plans for Transact-SQL statements or batches. Any connection can execute a global temporary stored procedure. A global temporary stored procedure exists until the connection used by the user who created the procedure is closed and any currently executing versions of the procedure by any other connections are completed. Once the connection that was used to create the procedure is closed, no further execution of the global temporary stored procedure is allowed. Only those connections that have already started executing the stored procedure are allowed to complete. If a stored procedure not prefixed with # or ## is created directly in the tempdb database, the stored procedure is automatically deleted when SQL Server is shut down because tempdb is re-created every time SQL Server is started. Procedures created directly in tempdb exist even after the creating connection is terminated.
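As a quick illustration, here is a minimal sketch of both prefixes in action (the procedure names are hypothetical):
-- Local temporary procedure: visible only to the creating connection
CREATE PROCEDURE #usp_LocalDemo
AS
    SELECT GETDATE() AS CurrentTime;
GO
-- Global temporary procedure: any connection can execute it while the
-- creating connection remains open
CREATE PROCEDURE ##usp_GlobalDemo
AS
    SELECT @@SERVERNAME AS ServerName;
GO
EXEC #usp_LocalDemo;
EXEC ##usp_GlobalDemo;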
Converting Milliseconds/Nanoseconds date string values to Date/Datetime
Posted: June 27, 2013 in Database Administrator. Tags: Msg 241 Level 16 State 1 Line 1, Conversion failed when converting date and/or time from character string, Converting a date string's milliseconds to Date/Datetime, Converting Milliseconds date string values to Date/Datetime
Data type conversion is a very frequent and common practice when manipulating or reproducing data, and we regularly use the conversion functions CONVERT and CAST in our daily routine.
Sometimes we need to convert a datetime value to a string, or a date string to a datetime.
Let's see an example of datetime-to-character-string conversion below.
Suppose we need to convert a date string to DATE/DATETIME; usually we use CONVERT or CAST.
But if the milliseconds part has more than 3 digits, it gives this error message:
Msg 241, Level 16, State 1, Line 1
Conversion failed when converting date and/or time from character string.
To resolve this, the DATETIME2 data type was introduced in SQL Server 2008 onwards, and the beauty of DATETIME2 is that it supports fractional-second precision up to 7 digits. Let's see some examples.
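The original post showed these examples as screenshots; here is a hedged reconstruction of the idea using illustrative literals:
-- Fails: DATETIME accepts at most 3 fractional-second digits
SELECT CAST('2013-06-27 10:15:30.1234567' AS DATETIME);
-- Msg 241: Conversion failed when converting date and/or time from character string.

-- Works: DATETIME2 supports fractional-second precision up to 7 digits
SELECT CAST('2013-06-27 10:15:30.1234567' AS DATETIME2(7));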
SQL Server 2014 – Code Name : Hekaton
Posted: June 26, 2013 in Database Administrator. Tags: aspirational goal, brent ozar, Column Store, ColumnStore, data warehousing and business intelligence, Hekaton, In-memory OLTP, software, SQL Codename - Hekaton, SQL Server 2014, SQL Server 2014 Features, SSD, technology, Whats new in SQL Server 2014, xVelocity ColumnStore
SQL Server 2014 CTP1 was released on 26-Jun-2013, highly focused on data warehousing and business intelligence (BI) enhancements made possible through new in-memory capabilities built into the core Relational Database Management System (RDBMS). As memory prices have fallen dramatically, 64-bit architectures have become more common, and usage of multicore servers has increased, Microsoft has sought to tailor SQL Server to take advantage of these trends.
Hekaton is a Greek term for “factor of 100.” The aspirational goal of the team was to see 100 times performance acceleration levels. Hekaton also is a giant mythical creature, as well as a Dominican thrash-metal band, for what it’s worth.
In-Memory OLTP (formerly known by the code name "Hekaton") is a new database engine component, fully integrated into SQL Server. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP allows OLTP workloads to achieve remarkable improvements in performance and reductions in processing time. Tables can be declared as 'memory optimized' to take advantage of In-Memory OLTP's capabilities. In-Memory OLTP tables are fully transactional and can be accessed using Transact-SQL. Transact-SQL stored procedures can be compiled into machine code for further performance improvements if all the tables they reference are In-Memory OLTP tables. The engine is designed for high concurrency, and blocking is minimal. "Memory-optimized tables are stored completely differently than disk-based tables and these new data structures allow the data to be accessed and processed much more efficiently."
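For a flavor of the syntax, here is a minimal sketch; the table and column names are made up, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:
-- Declare a table as memory optimized; it remains fully transactional
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000) NULL,
    LastTouch DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);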
New buffer pool extension support for non-volatile memory such as solid-state drives (SSDs) will increase performance by extending the SQL Server in-memory buffer pool to SSDs for faster paging.
Here is a very good explanation in Mr. Brent Ozar's article on this: http://www.brentozar.com/archive/2013/06/almost-everything-you-need-to-know-about-the-next-version-of-sql-server/
The new xVelocity ColumnStore feature provides in-memory capabilities for data warehousing workloads that result in dramatic improvements in query performance, load speed, and scan rate, while significantly reducing resource utilization (I/O, disk, and memory footprint). The new ColumnStore complements the existing xVelocity ColumnStore index, providing higher compression, richer query support, and updateability, giving us even faster load speed, query performance, and concurrency at an even lower price per terabyte.
Extending Memory to SSDs: Seamlessly and transparently integrates solid-state storage into SQL Server by using SSDs as an extension to the database buffer pool, allowing more in-memory processing and reducing disk IO.
Enhanced High Availability: New AlwaysOn features. Availability Groups now support up to 8 secondary replicas that remain available for reads at all times, even in the presence of network failures. Failover Cluster Instances now support Windows Cluster Shared Volumes, improving the utilization of shared storage and increasing failover resiliency. Finally, various supportability enhancements make AlwaysOn easier to use.
Improved Online Database Operations: includes single-partition online index rebuild and managing lock priority for table partition switch, greatly increasing enterprise application availability by reducing the downtime impact of maintenance.
For more details please refer to the SQL_Server_Hekaton_CTP1_White_Paper, and for the SQL Server 2014 CTP1 software click http://technet.microsoft.com/en-US/evalcenter/dn205290.aspx (the Microsoft SQL Server 2014 CTP1 release is only available for the x64 architecture).
I appreciate your time; keep posting your comments. I will be very happy to review and reply to your comments/questions/doubts as soon as I can.
Checkpoint
Posted: June 24, 2013 in Database Administrator. Tags: Checkpoint, Database Checkpoint, dirty pages, dirty read/write, SQL Server Checkpoint, when checkpoint happens, when checkpoint occurs, why checkpoint in sql server
CHECKPOINT: As per MSDN/BOL, a checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk, and also records information about the transaction log. For performance reasons, the Database Engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Checkpoint is the SQL engine system process that writes all dirty pages to disk for the current database. The benefit of the checkpoint process is to minimize time during a later recovery by creating a point at which all dirty pages are guaranteed to have been written to disk.
When does a CHECKPOINT happen?
- A CHECKPOINT statement is explicitly executed; a checkpoint occurs in the current database for the connection.
- A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the bulk-logged recovery model.
- Database files have been added or removed by using ALTER DATABASE.
- An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
- A database backup is taken; before a backup, the Database Engine performs a checkpoint so that all changes to database pages (dirty pages) are contained in the backup.
- The server is stopped by any of the following methods (note that SHUTDOWN WITH NOWAIT does not execute a checkpoint): the SHUTDOWN statement, or stopping the SQL Server service through SQL Server Configuration Manager, SSMS, net stop mssqlserver, or Control Panel -> Services -> SQL Server service.
- The recovery interval server configuration is reached; this happens when the active portion of the log exceeds the size that the server could recover in the amount of time defined by the recovery interval setting.
- The transaction log becomes 70% full and the database is in log-truncation mode. (A database is in log-truncation mode when it is in the simple recovery model and after a BACKUP statement has been executed.)
- An activity requiring a database shutdown is performed; for example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
Long-running uncommitted transactions increase recovery time for all types of checkpoints.
The time required to perform a checkpoint depends directly on the number of dirty pages that the checkpoint must write.
We can monitor checkpoint I/O activity using Performance Monitor by looking at the "Checkpoint pages/sec" counter in the SQL Server:Buffer Manager object; the same counter can also be read from T-SQL, as sketched below.
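A minimal sketch of reading the counter from T-SQL (assuming a default instance; on a named instance the object name includes the instance name):
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Checkpoint pages/sec';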
Tracking data/log file size increment for a database
Posted: June 24, 2013 in Database Administrator. Tags: backupfile, backupset, Data file growth history, Database Growth History, DB Growth History, Log file growth history, SQL Server Database Size growth, SQL Server Database Size increment, Tracking data/log file size increment for a database
Here is a script to find a database file's growth details (either the data file, MDF, or the log file, LDF). For this we can use two tables in msdb: backupfile, which contains one row for each data or log file of a database, and backupset, which contains a row for each backup set.
The script is as below.
SELECT BS.database_name
,BF.logical_name
,file_type
,BF.file_size/(1024*1024) FileSize_MB
,BS.backup_finish_date
,BF.physical_name
FROM msdb.dbo.backupfile BF
INNER JOIN msdb.dbo.backupset BS ON BS.backup_set_id = BF.backup_set_id
AND BS.database_name = 'VirendraTest' -- Database Name
WHERE logical_name = 'VirendraTest' -- DB's Logical Name
ORDER BY BS.backup_finish_date DESC
SQL Server Management Studio Tip: Get a Customized New Query Window
Posted: June 10, 2013 in Database Administrator. Tags: Customized new query windows, Customizing SSMS, Modified new query windows, SQL Server's SSMS new query windows, SSMS new query windows, SSMS tips
Here is an SSMS tip with which we can customize our new query windows.
For SQL Server 2008 (64-bit)
1) Find the file SQLFile.sql, which is usually located at C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql\SQLFile.sql, and modify the file as you wish.
For SQL Server 2008 (32-bit)
1) Find the file SQLFile.sql, which is usually located at C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql\SQLFile.sql, and modify the file as you wish.
For SQL Server 2012 (64-bit)
1) Find the file SQLFile.sql, which is usually located at C:\Program Files\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql, and modify the file as you wish.
For SQL Server 2012 (32-bit)
1) Find the file SQLFile.sql, which is usually located at C:\Program Files\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql, and modify the file as you wish.
Let's see my modified SQLFile.sql file.
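The original post showed the modified file as a screenshot; what follows is only an illustrative template of the kind of header one might put there, not the author's actual file:
-- Custom new-query-window template (illustrative)
-- Author:      <your name>
-- Date:        <date>
-- Description: <what this query does>
SET NOCOUNT ON;

SELECT @@SERVERNAME AS ServerName, DB_NAME() AS DatabaseName, GETDATE() AS OpenedAt;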
Source : Mr. Amit Bansal’s Blog
SSRS Tutorial – Part 1
Posted: June 10, 2013 in Database Administrator. Tags: SSRS, SSRS Tutorial, SSRS Tutorial Part 1, Step by step SSRS, Step by Step SQL Server Reporting Services, SSRS Training
Hi,
Here is the SSRS tutorial. SSRS Tutorial – Part 1 ( Click here to download or Right Click -> Save target as..)
Thanks to the various resources available on the internet & BOL.
Please share your views on this.
Finding SQL Server Job details having no operator for notification
Posted: May 7, 2013 in Database Administrator. Tags: Finding SQL Server Job details having no operator for notification, Job details, job notification, Job Operator, Operator, SQL server job, SQL Server notification details, Sysjob, Sysoperator
As a DBA, it sometimes happens that you forget to set a notification on a job; here is a very simple script to find out which jobs have a notification operator and which do not.
Select SJ.Name Job_Name,
Case SJ.Enabled when 1 then 'Enabled' else 'Disabled' end Enable_Status,
SJ.description Job_Description,
SUSER_SNAME(owner_sid) Job_Owner,
SJ.date_created Job_Created_Date,
SJ.date_modified Job_Modified_Date,
SP.Name Operator_Name,
SP.email_address Emails,
SP.last_email_date Last_Email_Date,
SP.last_email_time Last_Email_Time
from msdb..sysjobs SJ
LEFT JOIN msdb..sysoperators SP on SP.ID = SJ.notify_email_operator_id
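To list only the jobs that have no notification operator at all, a small variation (my addition, not in the original post) filters on the failed join:
-- Jobs with no e-mail operator configured
Select SJ.Name Job_Name
from msdb..sysjobs SJ
LEFT JOIN msdb..sysoperators SP on SP.ID = SJ.notify_email_operator_id
WHERE SP.ID IS NULL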
Finding Database Access details for currently logged user
Posted: May 6, 2013 in Database Administrator. Tags: Currently logged user db details, Finding DB Access details, Finding DB User, How to know Database Access Details, SQL Server Database Access Details
Sometimes it’s happened, a developer try to access a database but
he/she could not get succeed and after few R&D he/she came to know
that he/she has no access right for that particular DB. For same, a very
useful SQL Server’s function HAS_DBACCESS can
be used to get the list of all Databases having access details of
currently logged user. Here is a simple script to get the details as
Select Name, case HAS_DBACCESS(name) when 0 then 'No Access' else 'Access' end AS DB_Access_Status from sys.databases
Note: if any database is in RECOVERY mode, it will show as No Access.
SQL Server’s Error: System assertion check has failed
Posted: January 16, 2013 in Database Administrator. Tags: A system assertion check has failed, assertion, DBCC CHECKDB, Msg 3624 Level 20 State 1 Line 1, SQL Server Error: System assertion check has failed
Yesterday one of my team members came to me with this error:
SPID: XX
Process ID: XXXX
Msg 3624, Level 20, State 1, Line 1
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
When we ran DBCC CHECKDB, the results were:
DBCC CHECKDB WITH NO_INFOMSGS reported no problems.
DBCC CHECKDB reported 0 allocation errors and 0 consistency errors.
After analyzing the queries he was using, we found that 4-5 joins were involved, and in two tables the compared columns had different data types, INT and BIGINT. After changing the INT column to BIGINT, our problem was resolved.
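As a hedged illustration of the shape of the fix (the table and column names here are hypothetical, not the actual schema involved):
-- Before: join between mismatched types, e.g. Orders.OrderId BIGINT vs OrderDetails.OrderId INT
-- SELECT o.OrderId, d.Qty
-- FROM dbo.Orders o
-- JOIN dbo.OrderDetails d ON o.OrderId = d.OrderId;

-- Fix applied: align the compared columns to the same data type
ALTER TABLE dbo.OrderDetails ALTER COLUMN OrderId BIGINT NOT NULL;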
The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.
Here’s an excerpt:
Here’s an excerpt:
600 people reached the top of Mt. Everest in 2012. This blog got about 2,100 views in 2012. If every person who reached the top of Mt. Everest viewed this blog, it would have taken 4 years to get that many views. Click here to see the complete report.
Changing Instance Name
Posted: December 28, 2012 in Database Administrator. Tags: Change sql server instance name, Changing a name of SQL SERVER instance, Changing Instance Name, How to change Instance Name, How to change SQL Server Instance Name, SQL Server Instance Name
As per my personal observation/suggestion, it is much better to reinstall the server again with the new name, then detach the DBs from the OLD instance and attach them to the NEW instance, because an instance name is associated with so many things: performance counters, local groups for service start and file ACLs, service names for SQL and related services (Agent, Full-Text), SQL Browser visibility, service master key encryption, various full-text settings, registry keys, the 'local' linked server, etc. Nevertheless, we can change the name by following the steps below.
-- For a default instance
sp_dropserver 'old_name'
go
sp_addserver 'new_name', 'local'
go
-- For a named instance
sp_dropserver 'ServerName\Old_Instance_Name'
go
sp_addserver 'ServerName\New_Instance_Name', 'local'
go
Verify the SQL Server instance configuration by running the queries below:
sp_helpserver
Select @@SERVERNAME
Then restart SQL Server with the following commands at a command prompt:
net stop MSSQLServerServiceName
net start MSSQLServerServiceName
Finding Space Used,Space left on Data and Log files
Posted: December 26, 2012 in Database Administrator. Tags: Finding space availability on MDF and LDF file, Finding Space Used, MDF and LDF file space details, Space left on Data and Log files, Space left on MDF/LDF
Here is a script with which we can easily find the space used and space left on the MDF and LDF files.
Select DB_NAME() as [DATABASE NAME],
fileid as FILEID,
CASE WHEN groupid = 0 then 'LOG FILE' else 'DATA FILE' END as FILE_TYPE,
Name as LOGICAL_NAME,
Filename as PHYSICAL_PATH,
Convert(int, round((sysfiles.size*1.000)/128.000, 0)) as FILE_SIZE_MB,
Convert(int, round(fileproperty(sysfiles.name, 'SpaceUsed')/128.000, 0)) as SPACE_USED_MB,
Convert(int, round((sysfiles.size - fileproperty(sysfiles.name, 'SpaceUsed'))/128.000, 0)) as SPACE_LEFT_MB
From sysfiles;
SSMS Error : Saving changes is not permitted
Posted: December 26, 2012 in Database Administrator. Tags: Save (Not Permitted) Dialog Box, Saving changes is not permitted, Saving Changes Is Not Permitted On SQL Server 2008 Management Studio, SQL Server Error: Saving changes is not permitted, SSMS Error
In case you get the following error:
Saving changes is not permitted. The changes that you have made require the following tables to be dropped and re-created. You have either made changes to a table that can’t be re-created or enabled the option Prevent saving changes that require the table to be re-created.
This message occurs when the Prevent saving changes that require table re-creation option is enabled. To resolve it, click the Tools menu, click Options, expand Designers, and then click Table and Database Designers; clear the Prevent saving changes that require the table to be re-created check box. Changes that require the table to be re-created include:
- Adding a new column to the middle of the table
- Dropping a column
- Changing column nullability
- Changing the order of the columns
- Changing the data type of a column
Difference between @@SERVERNAME and SERVERPROPERTY(‘SERVERNAME’)
Posted: December 26, 2012 in Database Administrator. Tags: @@SERVERNAME, Difference between @@SERVERNAME and SERVERPROPERTY('SERVERNAME'), SERVERPROPERTY, sp_addserver, sp_dropserver, sp_helpserver
It is generally understood that @@SERVERNAME and SERVERPROPERTY('SERVERNAME') return the same value. But once I got different values from the two, meaning both
Select @@SERVERNAME
Select SERVERPROPERTY('SERVERNAME')
were returning different names. I found the answer in BOL:
Although the @@SERVERNAME function and the SERVERNAME property of SERVERPROPERTY may return strings with similar formats, the information can be different. The SERVERNAME property automatically reports changes in the network name of the computer. In contrast, @@SERVERNAME does not report such changes; @@SERVERNAME reports changes made to the local server name using the sp_addserver or sp_dropserver stored procedure.
To resolve the issue, just follow the steps below (a restart of the SQL Server service is required before @@SERVERNAME reflects the change):
-- To see the server name
sp_helpserver
-- Remove the server from the list of known servers on the local instance of SQL Server
sp_dropserver 'WRONG_SERVER_NAME', null
-- Add the server to the local instance of SQL Server
sp_addserver 'REAL_SERVER_NAME', 'local'
SQL Server Upgrade Strategy
Posted: December 26, 2012 in Database Administrator. Tags: In-Place Upgrade, Side by side upgrade, SQL Server upgrade from 2000 to 2008, SQL Server upgrade from 2005 to 2008, SQL Server Upgrade Strategy, Upgrading SQL Server
A successful upgrade to SQL Server 2008 R2/2012 should be smooth and trouble-free. To achieve that smooth transition, we must plan sufficiently for the upgrade and match the plan to the complexity of the database application; otherwise, we risk costly and stressful errors and upgrade problems. Like all IT projects, planning for every contingency/eventuality and then testing our plan gives us confidence that we will succeed. Any oversight increases the chances of running into difficulties that can derail and delay the upgrade.
Upgrade scenarios will be as complex as our underlying applications and instances of SQL Server. Some scenarios within an environment might be simple, others complex. Start planning by analyzing the upgrade requirements, including reviewing upgrade strategies, understanding the SQL Server hardware and software requirements for the specific version, and discovering any blocking problems caused by backward-compatibility issues.
There are two upgrade scenarios: the in-place upgrade and the side-by-side upgrade.
In-Place Upgrade: Using an in-place upgrade strategy, the SQL Server 2008 R2 Setup program directly replaces an instance of SQL Server 2000 or SQL Server 2005 with a new instance of SQL Server 2008 R2 on the same x86 or x64 platform. This kind of upgrade is called "in-place" because the upgraded instance of SQL Server 2000 or SQL Server 2005 is actually replaced by the new instance of SQL Server 2008 R2. You do not have to copy database-related data from the older instance to SQL Server 2008 R2, because the old data files are automatically converted to the new format. When the process is complete, the old instance of SQL Server 2000 or SQL Server 2005 is removed from the server, with only the backups that you retained being able to restore it to its previous state.
Note: If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method.
Side by side upgrade : In a side-by-side upgrade, instead of directly replacing the older instance of SQL Server, required database and component data is transferred from an instance of SQL Server 2000 or SQL Server 2005 to a separate instance of SQL Server 2008 R2. It is called a “side-by-side” method because the new instance of SQL Server 2008 R2 runs alongside the legacy instance of SQL Server 2000 or SQL Server 2005, on the same server or on a different server.
There are two important options when you use the side-by-side upgrade method:
- You can transfer data and components to an instance of SQL Server 2008 R2 that is located on a different physical server or on a different virtual machine, or
- You can transfer data and components to an instance of SQL Server 2008 R2 on the same physical server.
Both options let you run the new instance of SQL Server 2008 R2 alongside the legacy instance of SQL Server 2000 or SQL Server 2005. Typically, after the upgraded instance is accepted and moved into production, you can remove the older instance.
A side-by-side upgrade to a new server offers the best of both worlds: you can take advantage of a new and potentially more powerful server and platform, but the legacy server remains as a fallback if you encounter a problem. This method can also potentially reduce upgrade downtime by letting you have the new server and instances tested, up, and running without affecting the current server and its workloads. You can test and address hardware or software problems encountered in bringing the new server online without downtime on the legacy system. Although you would have to find a way to export data out of the new system to go back to the old system, rolling back to the legacy system would still be less time-consuming than a full SQL Server reinstall and database restore, which a failed in-place upgrade would require. The downside of a side-by-side upgrade is that increased manual intervention is required, so it might take more preparation time by an upgrade/operations team. However, the benefits of this degree of control can frequently be worth the additional effort.
Source: SQL Server 2008 Upgrade Technical Reference Guide
Microsoft SQL Server 2012–SQL Server Data Tools (SSDT)
Posted: December 25, 2012 in Database Administrator. Tags: Juneau, Microsoft SQL Server 2012 - SQL Server Data Tools (SSDT), SQL Server Data Tools, SQL Server Developer Tools, SQL Server Juneau, SSDT
SQL Server 2012 - SQL Server Data Tools is available as a free component of the SQL Server platform for all SQL Server users. It provides an integrated environment for database developers to carry out all their database design work for any SQL Server platform (both on and off premises) within Visual Studio. Database developers can use the SQL Server Object Explorer in Visual Studio to easily create or edit database objects and data, or execute queries.
Developers will also appreciate the familiar VS tools we bring to database development, specifically code navigation, IntelliSense, language support that parallels what is available for C# and VB, platform-specific validation, debugging, and declarative editing in the T-SQL editor, as well as a visual Table Designer for both database projects and online database instances.
SQL Server Data Tools (SSDT) is the final name for the product formerly known as SQL Server Developer Tools, Code-Named “Juneau”. SSDT provides a modern database development experience for the SQL Server and SQL Azure Database Developer. As the supported SQL Azure development platform, SSDT will be regularly updated online to ensure that it keeps pace with the latest SQL Azure features.
Free E-Book Gallery for Microsoft Technologies
Posted: December 24, 2012 in Database Administrator. Tags: Download free ebooks, E-Book Gallery for Microsoft Technologies, ebook download, free e-books, free ebooks, Free Microsoft Books, Free MS books, free SQL Server 2012 book download, MS books download, SQL Server 2012 ebooks
After lots of googling, I found a very nice link to a large collection of free Microsoft eBooks, including SharePoint, Visual Studio, Windows Phone, Windows 8, Office 365, Office 2010, SQL Server 2012, Azure, and more. Here is the link.
another link is http://social.technet.microsoft.com/wiki/contents/articles/11608.e-book-gallery-for-microsoft-technologies.aspx
Happy Reading!
Connectivity Error – A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
Posted: December 24, 2012 in Database Administrator. Tags: A transport-level error has occurred when sending the request to the server, An existing connection was forcibly closed by the remote host, Connectivity Error, Msg 10054, SQL Server restarted error
It’s a connectivity problem with a previously opened session in SQL Server, Sometimes a user can get SQL server error as
Msg 10054, Level 20, State 0, Line 0
A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
Some possible causes are as below
- The server has been restarted; this closes the existing connections.
- Someone has killed the SPID that was being used.
- A network failure occurred.
- The SQL Server services were restarted.
SEQUENCE v/s IDENTITY
Posted: December 24, 2012 in Database Administrator. Tags: Diff between SEQUENCE and IDENTITY, Difference between IDENTITY and SEQUENCE, Difference between SEQUENCE and IDENTITY, IDENTITY, SEQUENCE, Sequence Object, Sequence Object in SQL Server 2012, SEQUENCE v/s IDENTITY, SQL Server 2012 - SEQUENCE
As we know, SEQUENCE is a new feature introduced in SQL Server 2012. A SEQUENCE works similarly to an IDENTITY value, but where the IDENTITY value is scoped to a specific column in a specific table, the sequence object is scoped to the entire database and controlled by application code. This allows us to synchronize seed values across multiple tables that reference one another in a parent-child relationship. Or, with a little bit of code, we can control whether the next value is consumed or saved for the next INSERT should the current transaction be rolled back, whereas an IDENTITY value is lost and leaves a gap when an INSERT is rolled back. Here are some key differences:
| Sequence | Identity |
| --- | --- |
| A SQL Server sequence object generates a series of numbers just like an identity column, but a sequence object is not limited to a single table. | IDENTITY is specific to a single table. |
| The status of a sequence object can be viewed by querying the catalog view sys.sequences: SELECT name, start_value, minimum_value, maximum_value, current_value FROM sys.sequences | The system function @@IDENTITY can be used for the last-inserted identity value. |
| A sequence is a standalone object. Example: CREATE SEQUENCE MySequenceName AS INT START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 1000 NO CYCLE NO CACHE | Identity is a property of a table column. Example: CREATE TABLE TblIdentityChk (ID INT IDENTITY(1,1), CUSTNAME VARCHAR(50)) |
| You can obtain the new value before using it in an INSERT statement. | You cannot obtain the new value in your application before using it. |
| You do not need to insert a row to get a new ID; you can read it directly. Example: SELECT NEXT VALUE FOR MySequenceName | To get a new ID from an identity column you must insert first and then read the new ID. Example: INSERT INTO TblIdentityChk VALUES ('TEST CUSTOMER'); SELECT @@IDENTITY AS 'Identity'; or SELECT SCOPE_IDENTITY() AS 'Identity' |
| You can add or remove a default constraint defined for a column with an expression that generates a new sequence value (an extension to the standard). | You cannot add or remove the identity property on an existing column. |
| You can generate new values in an UPDATE statement. Example: UPDATE TableName SET IDD = NEXT VALUE FOR MySequenceName | You cannot generate new values in an UPDATE statement, only in INSERT statements. |
| You can simply add one property to make a sequence cycle. Example: ALTER SEQUENCE MySequenceName CYCLE | You cannot cycle an identity column; that is, you cannot restart the counter after a particular interval. |
| The semantics of defining ordering in a multi-row insert are very clear using an OVER clause (an extension to the standard), and are even allowed in SELECT INTO statements. | The semantics of defining ordering in a multi-row insert are confusing, and in SELECT INTO statements are actually not guaranteed. |
| Sequence values can be cached by setting the cache property, which also improves performance. Example: ALTER SEQUENCE [dbo].[Sequence_ID] CACHE 3 | You cannot cache the identity column property. |
| The sequence is not table-dependent, so you can mix sequence-generated and explicit values. Example: INSERT INTO TblBooks (ID, BookName) VALUES (NEXT VALUE FOR MySequenceName, 'MICROSOFT SQL SERVER 2012'); then INSERT INTO TblBooks (ID, BookName) VALUES (2, 'MICROSOFT SQL SERVER 2012') | You cannot remove the identity column from the table directly. |
| You can define minimum and maximum values, whether to allow cycling, and a cache size option for performance. Example: ALTER SEQUENCE MySequenceName MAXVALUE 2000 | You cannot define minimum and maximum values, cycling, or caching options. |
| You can obtain a whole range of new sequence values in one shot using the stored procedure sp_sequence_get_range (an extension to the standard), letting the application assign the individual values for increased performance. | You cannot obtain a whole range of new identity values in one shot. |
| You can reseed as well as change the step size. Example: ALTER SEQUENCE MySequenceName RESTART WITH 7 INCREMENT BY 2 | You can reseed but cannot change the step size. Example: DBCC CHECKIDENT (TblIdentityChk, RESEED, 4) |
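Pulling the left-hand column together, here is a minimal end-to-end sketch (assuming none of these objects already exist in the database):
-- Create the sequence and a table that uses it
CREATE SEQUENCE dbo.MySequenceName AS INT START WITH 1 INCREMENT BY 1;
CREATE TABLE dbo.TblBooks (ID INT NOT NULL, BookName VARCHAR(100));

-- Draw a value from the sequence during INSERT...
INSERT INTO dbo.TblBooks (ID, BookName)
VALUES (NEXT VALUE FOR dbo.MySequenceName, 'MICROSOFT SQL SERVER 2012');

-- ...or supply an explicit ID, since the sequence is not tied to the table
INSERT INTO dbo.TblBooks (ID, BookName)
VALUES (2, 'MICROSOFT SQL SERVER 2012');

-- Inspect the sequence state
SELECT name, current_value FROM sys.sequences WHERE name = 'MySequenceName';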
Row Numbering in SQL Select Query
Posted: December 24, 2012 in Database Administrator. Tags: Cannot add identity column using the SELECT INTO statement, inherits the identity property, Msg 177, Msg 8108, Row Numbers in SQL Query, Row Numbers in SQL Select Query, SELECT statement with auto generated row id, SQL Server, The IDENTITY function can only be used when the SELECT statement has an INTO clause, Using Identity, Using IDENTITY in a SELECT
Sometimes we have to do row numbering; for this we can use the IDENTITY function with a SELECT statement. However, we can't use it in a plain SELECT command; it is always used with the SELECT … INTO … form:
SELECT Col1, Col2, ..., Coln INTO TableName FROM SourceTableName
If we try it in a plain SELECT command, such as
Select identity(int,1,1) IDD, EMPNAME, DEPT, BASICSALARY from Employee
it gives the error:
Msg 177, Level 15, State 1, Line 1
The IDENTITY function can only be used when the SELECT statement has an INTO clause.
So if we want to do it, we have to write the SELECT statement as
Select IDENTITY(int,1,1) IDD, NAME, BASICSALARY into #TempEmp from Employee
SELECT * from #TempEmp
The IDENTITY function takes three mandatory parameters: the data type of the identity column, the starting value, and the increment. With these, we can customize the new column according to our requirements. For example, an integer starting from 1 and incremented by 1 for each row can be specified as:
IDENTITY(INT,1,1)
Note: Suppose an identity column already exists in the source table; that column should not be in the SELECT statement, otherwise it gives an error. Say that in the Employee table, EMPID is an IDENTITY column, and we run:
Select IDENTITY(int,1,1) IDD, EMPID, NAME, BASICSALARY into #TempEmp from Employee
Select * from #TempEmp
This throws the error:
Msg 8108, Level 16, State 1, Line 1
Cannot add identity column, using the SELECT INTO statement, to table '#TempEmp', which already has column 'EMPID' that inherits the identity property.
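As an aside (my addition, not part of the original post): from SQL Server 2005 onwards, the ROW_NUMBER() window function numbers rows in a plain SELECT, with no temp table or INTO clause required. The column names below follow the Employee examples above:
-- Row numbering without IDENTITY or SELECT INTO
SELECT ROW_NUMBER() OVER (ORDER BY NAME) AS IDD,
       NAME, BASICSALARY
FROM Employee;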
Ignoring all Constraints
Posted: December 24, 2012 in Database Administrator. Tags: Bulk Insert, Bulk operation ignoring Constraints, Bulk Update ignoring trigger, Disable All Constraints, Disable Constraints with INSERT and UPDATE Statements, Disable Foreign Key Constraints with INSERT and UPDATE Statements, Ignoring all Constraints, Ignoring Trigger
Sometimes we have to INSERT/UPDATE some bulk data, and a particular constraint works as a barrier, preventing the operation as per the constraint's behavior. We can set the constraints on a particular table/column to not be checked temporarily, then re-enable the constraints:
ALTER TABLE TableName NOCHECK CONSTRAINT ConstraintName
Then re-enable constraints using:
ALTER TABLE TableName CHECK CONSTRAINT ConstraintName
If we need to disable all constraints on all tables of the current database, we can use:
-- Disable all constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
-- Enable all constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
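One caution worth adding (my note, not from the original post): re-enabling with plain CHECK CONSTRAINT leaves the constraints marked as not trusted, because the existing rows are not re-validated. To re-validate the data and restore the trusted flag, use WITH CHECK:
-- Re-enable and re-validate all rows, restoring the trusted flag
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'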
Best Practices for Bulk Importing Data
Posted: December 24, 2012 in Database Administrator. Tags: Best Practices, Best Practices for Bulk Importing Data, Bulk Import, Bulk Import Data, IGNORE_TRIGGERS, INSERT INTO, Insert Performance, KEEPDEFAULTS, KEEPIDENTITY, TABLOCK, Using OPENROWSET, Using OPENROWSET and BULK to Bulk Import Data
- Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging
INSERT INTO <target_table> SELECT <columns> FROM <source_table> is the best way to efficiently transfer a large number of rows from one table, such as a staging table, to another table with minimal logging. Minimal logging can improve the performance of the statement and minimize the amount of transaction log space filled during the transaction. Minimal logging for this statement has the following requirements:
• The recovery model of the database is set to simple or bulk-logged.
• The target table is an empty or nonempty heap.
• The target table is not used in replication.
• The TABLOCK hint is specified for the target table.
Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be minimally logged. Unlike the BULK INSERT statement, which holds a less restrictive Bulk Update lock, INSERT INTO…SELECT with the TABLOCK hint holds an exclusive (X) lock on the table. This means that we cannot insert rows using parallel insert operations.
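A minimal sketch of the pattern, assuming the requirements above are met (simple or bulk-logged recovery, heap target not used in replication); the table names are illustrative:
-- Minimally logged transfer from a staging table into a heap
INSERT INTO dbo.Sales WITH (TABLOCK)
SELECT SaleId, Amount, SaleDate
FROM staging.Sales;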
- Using OPENROWSET and BULK to Bulk Import Data
The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with the INSERT statement:
• The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model of the database must be set to simple or bulk-logged, and the target table cannot be used in replication.
• The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint checking.
• The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
• The KEEPDEFAULTS hint allows the insertion of a table column's default value, if any, instead of NULL when the data record lacks a value for the column.
• The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity column in the target table.
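A hedged sketch combining these hints; the file paths, format file, and table name are illustrative assumptions, not from the original post:
-- Bulk import from a data file with bulk-load optimizations
INSERT INTO dbo.Orders WITH (TABLOCK, IGNORE_TRIGGERS, KEEPDEFAULTS)
SELECT src.*
FROM OPENROWSET(BULK 'C:\Data\orders.dat',
                FORMATFILE = 'C:\Data\orders.fmt') AS src;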
Discontinued Database Engine Functionality in SQL Server 2012
Posted: December 23, 2012 in Database Administrator. Tags: 80 compatibility levels, Discontinued Database Engine Functionality in SQL Server 2012, FASTFIRSTROW, Features not in SQL 2012, Functionality not in SQL Server 2012, PWDCOMPARE, sp_configure, sp_dboption, sp_dropalias, VIA Protocol
Here are Database Engine features that are no longer available in SQL Server 2012.
Category | Discontinued feature | Replacement
Backup and Restore | BACKUP { DATABASE | LOG } WITH PASSWORD and BACKUP { DATABASE | LOG } WITH MEDIAPASSWORD are discontinued. RESTORE { DATABASE | LOG } WITH [MEDIA]PASSWORD continues to be deprecated. | None
Backup and Restore | RESTORE { DATABASE | LOG } … WITH DBO_ONLY | RESTORE { DATABASE | LOG } … WITH RESTRICTED_USER
Compatibility level | 80 compatibility levels | Databases must be set to at least compatibility level 90.
Configuration Options | sp_configure 'user instance timeout' and 'user instances enabled' | Use the Local Database feature. For more information, see SqlLocalDB Utility.
Connection protocols | Support for the VIA protocol is discontinued. | Use TCP instead.
Database objects | WITH APPEND clause on triggers | Re-create the whole trigger.
Database options | sp_dboption | ALTER DATABASE
 | SQL Mail | Use Database Mail. For more information, see Database Mail and Use Database Mail Instead of SQL Mail.
Memory Management | 32-bit Address Windowing Extensions (AWE) and 32-bit Hot Add memory support | Use a 64-bit operating system.
Metadata | DATABASEPROPERTY | DATABASEPROPERTYEX
Programmability | SQL Server Distributed Management Objects (SQL-DMO) | SQL Server Management Objects (SMO)
Query hints | FASTFIRSTROW hint | OPTION (FAST n)
Remote servers | The ability for users to create new remote servers by using sp_addserver is discontinued. sp_addserver with the 'local' option remains available. Remote servers preserved during upgrade or created by replication can be used. | Replace remote servers by using linked servers.
Security | sp_dropalias | Replace aliases with a combination of user accounts and database roles. Use sp_dropalias to remove aliases in upgraded databases.
Security | The version parameter of PWDCOMPARE representing a value from a login earlier than SQL Server 2000 is discontinued. | None
Service Broker programmability in SMO | The Microsoft.SqlServer.Management.Smo.Broker.BrokerPriority class no longer implements the Microsoft.SqlServer.Management.Smo.IObjectPermission interface. |
SET options | SET DISABLE_DEF_CNST_CHK | None
System tables | sys.database_principal_aliases | Use roles instead of aliases.
Transact-SQL | RAISERROR in the format RAISERROR integer 'string' is discontinued. | Rewrite the statement using the current RAISERROR(…) syntax.
Transact-SQL syntax | COMPUTE / COMPUTE BY | Use ROLLUP.
Transact-SQL syntax | Use of *= and =* | Use ANSI join syntax. For more information, see FROM (Transact-SQL).
XEvents | databases_data_file_size_changed, databases_log_file_size_changed, databases_log_file_used_size_changed | Replaced by the database_file_size_change event.
XEvents | locks_lock_timeouts_greater_than_0, locks_lock_timeouts | Replaced by lock_timeout_greater_than_0 and lock_timeout.
SQL Server Management Studio 2012 – Enhanced Capabilities
Posted: December 20, 2012 in Database AdministratorTags: Code Sippsets, Enhancement, Intellisense, Intellisense Enhancement, SQL Management Studio 2012, SQL Management Studio 2012 – Enhanced Capabilities, SQL Server Management Studio 2012, SQL Server Management Studio 2012 – Enhanced Capabilities, SQL Server Management Studio 2012 new features, SQL Server SSMS 2012, SSMS 2012, SSMS new features
Here are some of the new SSMS 2012 features:
1) Exploring the Intellisense enhancement – Intellisense now provides partial syntax completion, whereas previous versions would only display objects that start with the text you typed.
Let's see an example: it shows all objects which contain "TABLE".
2) At the bottom left corner of the query window, notice there is a Zoom control that displays 100%; change this to 150%. The ability to zoom in the query window lets you read queries more easily, and it works for the results and messages windows too.
3) Code Snippets are an enhancement to the Template feature that was introduced in SQL Server Management Studio. You can also create and insert your own code snippets, or import them from a pre-created snippet library.
Differences between SQL Server 2008 and 2012
Posted: December 19, 2012 in Database AdministratorTags: Diff Between SQL Server 2008 and 2012, Difference between SQL 2008 abd 2012, Differences between SQL Server 2008 and SQL Server 2012, SQL Server 2012, sql server 2012 new features, SQL Server new features
Hi, here are some differences between SQL Server 2008/R2 and 2012. More features will be added to this document soon.
Sr. No. | SQL Server 2008 | SQL Server 2012
1 | Exception handling using TRY…CATCH | New exception handling with THROW
2 | High availability features such as Log Shipping, Replication, Mirroring & Clustering | New ALWAYS ON feature introduced in addition to the 2008 features.
3 | Web development and Business Intelligence enhanced with BI features and Excel PowerPivot, adding more drill-through and KPIs. | In addition to the SQL Server 2008 capabilities, web development and Business Intelligence are further enhanced, with Excel PowerPivot adding more drill-through and KPIs.
4 | Windows Server Core not supported. | Windows Server Core supported.
5 | Columnstore indexes not supported. | New columnstore indexes feature that is completely unique to SQL Server. They are a special type of read-only index designed to be used with data-warehouse queries. Basically, data is grouped and stored in a flat, compressed column index, greatly reducing I/O and memory utilization on large queries.
6 | PowerShell supported | Enhanced PowerShell support
7 | Distributed Replay feature not available. | Distributed Replay allows you to capture a workload on a production server and replay it on another machine. This way changes in underlying schemas, service packs, or hardware can be tested under production conditions.
8 | Power View not available in the BI features | Power View is a fairly powerful self-service BI toolkit that allows users to create mash-ups of BI reports from all over the enterprise.
9 | EXECUTE … WITH RECOMPILE feature | Enhanced EXECUTE with many options, like WITH RESULT SETS….
10 | Maximum number of concurrent connections to SQL Server 2008 is 32,767 | SQL Server 2012 has unlimited concurrent connections.
11 | SQL Server 2008 uses 27-bit precision for spatial calculations. | SQL Server 2012 uses 48-bit precision for spatial calculations.
12 | TRY_CONVERT() and FORMAT() functions are not available in SQL Server 2008 | TRY_CONVERT() and FORMAT() functions are newly included in SQL Server 2012
13 | ORDER BY clause does not have OFFSET / FETCH options for paging | ORDER BY clause has OFFSET / FETCH options for paging (see the sketch after this table)
14 | SQL Server 2008 is code-named Katmai. | SQL Server 2012 is code-named Denali.
15 | In SQL Server 2008, audit is an Enterprise-only feature, available only in the Enterprise, Evaluation, and Developer editions. | In SQL Server 2012, support for server auditing is expanded to include all editions of SQL Server.
16 | Sequence is not available in SQL Server 2008 | Sequence is included in SQL Server 2012. A sequence is a user-defined object that generates a sequence of numbers.
17 | Full-Text Search in SQL Server 2008 does not allow us to search and index data stored in extended properties or metadata. | Full-Text Search in SQL Server 2012 has been enhanced to allow searching and indexing of data stored in extended properties or metadata. Consider a PDF document that has "properties" filled in, like Name, Type, Folder path, Size, Date Created, etc. In the newest release of SQL Server, this data can be indexed and searched along with the data in the document itself. The data does have to be exposed to work, but it's possible now.
18 | Analysis Services in SQL Server 2008 does not have the BI Semantic Model | Analysis Services includes a new BI Semantic Model (BISM), a 3-layer model that includes: Data Model, Business Logic and Data Access.
19 | BCP does not support the -K option. | The BCP and SQLCMD utilities are enhanced with the -K option, which allows specifying read-only access to a secondary replica in an AlwaysOn availability group.
20 | sys.dm_exec_query_stats | sys.dm_exec_query_stats adds four columns to help troubleshoot long-running queries. You can use the total_rows, min_rows, max_rows and last_rows aggregate row-count columns to separate queries that return a large number of rows from problematic queries that may be missing an index or have a bad query plan.
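As a quick illustration of row 13, the new OFFSET / FETCH paging syntax looks like this (a minimal sketch; the table and column names are illustrative):
SELECT Name, BasicSalary
FROM Employee
ORDER BY Name
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY  -- skip the first 10 rows, return the next 5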
DATA DICTIONARY
Posted: December 19, 2012 in Database AdministratorTags: Creating Data dictionary, Creating Data Dictionary in SQL Server, Data Dictionary, Documentation, Extended Property, Sp_Addextendedproperty, Sql, SQL Server
As DBAs we have to maintain data dictionaries for all our databases. Here is a script that generates a full view of table structure details.
SELECT a.Name [Table Name],
b.name [Column Name],
c.name [Data Type],
b.length [Column Length],
b.isnullable [Allow Nulls],
CASE WHEN d.name is null THEN 0 ELSE 1 END [Primary Key],
CASE WHEN e.parent_object_id is null THEN 0 ELSE 1 END [ForeignKey],
CASE WHEN e.parent_object_id is null
THEN '-' ELSE g.name END [Reference Table],
CASE WHEN h.value is null THEN '-' ELSE h.value END [Description]
from sysobjects as a
join syscolumns as b on a.id = b.id
join systypes as c on b.xtype = c.xtype
left join (SELECT so.id, sc.colid, sc.name FROM syscolumns sc
JOIN sysobjects so ON so.id = sc.id
JOIN sysindexkeys si ON so.id = si.id and sc.colid = si.colid
WHERE si.indid = 1) d on a.id = d.id and b.colid = d.colid
left join sys.foreign_key_columns as e on a.id = e.parent_object_id
and b.colid = e.parent_column_id
left join sys.objects as g on e.referenced_object_id = g.object_id
left join sys.extended_properties as h on a.id = h.major_id
and b.colid = h.minor_id
where a.type = 'U'
order by a.name
We can also get similar, though less detailed, information using:
SELECT * FROM INFORMATION_SCHEMA.COLUMNS
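Since the Description column above comes from sys.extended_properties, here is a minimal sketch of attaching a description to a column so it shows up in the dictionary (the table and column names are illustrative):
EXEC sp_addextendedproperty
@name = N'MS_Description', @value = N'Employee first name',
@level0type = N'SCHEMA', @level0name = N'dbo',
@level1type = N'TABLE', @level1name = N'Employee',
@level2type = N'COLUMN', @level2name = N'FirstName'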
Writing Efficient Queries
Posted: December 18, 2012 in Database AdministratorTags: How to write optimise query, Query - Best Practices, Query tips, SQL Server, Wriiting Query in SQL Server, Writing Efficient Queries
Here are some initial tips for writing efficient, cost-effective queries.
- When using AND, put the condition least likely to be true first. The database system evaluates conditions from left to right, subject to operator precedence. If you have two or more AND operators in a condition, the one on the left is evaluated first, and only if it is true is the next condition evaluated; only if that one is also true is the third condition evaluated, and so on. You can save the database system work, and hence increase speed, by putting the least likely condition first. For example, if you were looking for all members living in Delhi and born before January 1, 1960, you could write the following query:
SELECT FirstName, LastName FROM MemberDetails WHERE State = 'Delhi' AND DateOfBirth < '1960-01-01';
The query would work fine; however, the number of members born before that date is very small, whereas plenty of people live in Delhi. This means that State = 'Delhi' will be true many times, and the database system will go on to check the second condition, DateOfBirth < '1960-01-01'. If you swap the conditions around, the least likely condition (DateOfBirth < '1960-01-01') is evaluated first:
SELECT FirstName, LastName FROM MemberDetails WHERE DateOfBirth < '1960-01-01' AND State = 'Delhi';
Because the condition is mostly going to be false, the second condition will rarely be executed, which saves time. It’s not a big deal when there are few records, but it is when there are a lot of them.
- When using OR, put the condition most likely to be true first. Whereas AND needs both sides to be true for the overall condition to be true, OR needs only one side to be true. If the left-hand side is true, there’s no need for OR to
check the other condition, so you can save time by putting the most
likely condition first. Consider the following statement:
SELECT FirstName, LastName FROM MemberDetails WHERE State = 'Delhi' OR DateOfBirth < '1960-01-01';
If State = 'Delhi' is true, and it is true more often than DateOfBirth < '1960-01-01' is, then there's no need for the database system to evaluate the other condition, thus saving time.
- DISTINCT can be faster than GROUP BY. DISTINCT and GROUP BY often do the same thing: limit results to unique rows. However, DISTINCT is often faster with some database systems than GROUP BY. For example, examine the following GROUP BY:
SELECT MemberId FROM Orders GROUP BY MemberId;
The GROUP BY could be rewritten using the DISTINCT keyword:
SELECT DISTINCT MemberId FROM Orders;
- Use IN with your subqueries. When you write a query similar to the following, the database system has to get all the results from the subquery to make sure that it returns only one value:
SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID = (SELECT EMPID FROM Orders WHERE OrderId = 2);
If you rewrite the query using the IN operator, the database system only needs to get results until there's a match with the values returned by the subquery; it doesn't necessarily have to get all the values:
SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID IN (SELECT EMPID FROM Orders WHERE OrderId = 2);
- Avoid using SELECT * FROM. Specifying the columns you need has a few advantages, not all of them about efficiency. First, it makes clear which columns you're actually using. If you use SELECT * and actually use only two out of seven columns, it's hard to guess from the SQL alone which ones you're using; if you say SELECT FirstName, LastName, … then it's quite obvious. From an efficiency standpoint, specifying columns reduces the amount of data that has to pass between the database and the application connecting to it. This is especially important where the database is connected over a network.
- Search on integer columns. If you have a choice, and often you don’t, search on integer columns. For example, if you are looking for the member whose name is VIRENDRA YADUVANSHI and whose MemberId is 101, then it makes sense to search via the MemberId because it’s much faster.
Deprecated Database Engine Features in SQL Server 2012
Posted: December 16, 2012 in Database AdministratorTags: Commands not suggested to use for future coding, Deprecated commands SQL Server 2012, Deprecated Database Engine Features in SQL Server 2012, Discontinued Commands
Here
is the complete list of Deprecated Database Engine Features in SQL
Server 2012. This topic describes the deprecated SQL Server Database
Engine features that are still available in SQL Server 2012. These
features are scheduled to be removed in a future release of SQL Server.
Deprecated features should not be used in new applications.
For more please visit http://msdn.microsoft.com/en-us/library/ms143729.aspx
DC-DR Log Shipping Drill Down
Posted: December 14, 2012 in Database AdministratorTags: DC-DR Log Shipping Drill Down, Log shipping - Failure of Primary Server, Log Shipping DC DR Drill, Log Shipping Failure, LOG Shipping swapping PRIMARY and SECONDARY Server, SQL Server Log Shipping Failure
It sometimes happens that our DC (main primary DB instance) goes down for some reason, and the log-shipped secondary server at the DR location has to act as the primary server to keep production running smoothly.
In my example, the primary server is named 'PRIMARY', and the database used for log shipping at server 'SECONDARY' is VIRENDRATEST.
Now suppose I would like to make my PRIMARY server's production environment the 'SECONDARY' and 'SECONDARY' the PRIMARY. Below are the steps for the same.
- First, be sure all application hits to the DB server are stopped. For this, contact the Application/Production team to STOP/DISABLE the services/applications; after confirmation from them, go ahead.
- Take a FULL backup of the PRIMARY server's DB.
- Run the LS-Backup job at the PRIMARY server and disable it.
- Run the LS-Copy job at the SECONDARY server; after it completes, disable this job.
- Run the LS-Restore job at the SECONDARY server and, after it completes, disable it.
- Now take a LOG backup at the PRIMARY server using a command such as:
BACKUP LOG VIRENDRATEST TO Disk=N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK' WITH NORECOVERY
- Copy the LOG backup file (of step 6) from PRIMARY to the SECONDARY server manually.
- At SECONDARY, restore the last LOG backup file WITH RECOVERY (see the sketch after this list).
- Now our SECONDARY server's DB is up for production and the PRIMARY server's DB is in RESTORING mode.
- Now configure log shipping again, with SECONDARY as primary and PRIMARY as secondary.
- Run the LS-Backup job at the new PRIMARY server.
- Refresh both server instances; both should be in sync with proper status.
- Check whether log shipping is working properly.
- Inform the Production team to start/enable all applications/services.
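A minimal sketch of the step-8 restore at SECONDARY, reusing the path and file name from step 6 (adjust to your environment):
RESTORE LOG VIRENDRATEST FROM DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK' WITH RECOVERY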
SSMS Boost add-in
Posted: December 12, 2012 in Database AdministratorTags: boostup SSMS, SSMS add-in, SSMS addin, SSMS Boost add-in, SSMS features, Working with SQL Server in SSMS
I found a very powerful SSMS add-in called SSMSBoost.
"SSMS Boost add-in" is designed to improve productivity when working with Microsoft SQL Server in SQL Server Management Studio. The main goal of the project is to speed up the daily tasks of SQL DBAs and T-SQL developers. Please visit http://www.ssmsboost.com/ for more details, download it, and enjoy!
RAISERROR and @@ERROR
Posted: December 11, 2012 in Database AdministratorTags: @@ERROR, Difference between RAISERROR and @@ERROR, Error Handling using RAISERROR and @@ERROR, RAISERROR, RAISERROR and @@ERROR, SQL Server RAISERROR and @@ERROR
RAISERROR:
RAISERROR raises an error with a user-defined error message. The error message is either created dynamically or stored in the system table sysmessages.
Example: Let's create a stored procedure that takes two numbers; if the second number is 0, it raises an error:
CREATE PROCEDURE SPDivision @FirstNumber int, @SecondNumber int
AS
IF @SecondNumber=0
BEGIN
SELECT 'Error'
RAISERROR ('Error: Division by zero.', 16, 1)
END
ELSE
select @FirstNumber/@SecondNumber
Running the SP as EXEC SPDivision 5, 0 will show the error as:
(1 row(s) affected)
Msg 50000, Level 16, State 1, Procedure spDivision, Line 6
Error: Division by zero.
If the error message is used in many SPs, then to avoid inconsistencies due to changes in the message, the message can be stored in sysmessages. The system SP sp_addmessage can be used to add the message and sp_dropmessage to drop it. Be sure that user-defined error messages have a msg_id greater than or equal to 50001.
Example: To add an error message into sysmessages:
EXEC sp_addmessage 50001, 16, 'Error: Division by zero.'
Now the above SP code will be:
CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
IF @SecondNumber=0
BEGIN
SELECT 'Error'
RAISERROR (50001, 16, 1)
END
ELSE
Select @FirstNumber/@SecondNumber
@@ERROR:
The system function @@ERROR returns an error code if an error was encountered after the completion of the T-SQL statement immediately preceding it; otherwise it returns 0, meaning success. The value of @@ERROR changes with each T-SQL statement, so the only way to keep track of errors is to store the error code in a variable. If there is no need to keep track of the error but simply to act upon it, the value of @@ERROR can be checked right after the T-SQL statement to be tested.
CREATE PROCEDURE spDivision1 @FirstNumber int, @SecondNumber int
AS
DECLARE @errnum int
Select @FirstNumber/@SecondNumber
SET @errnum=@@Error
IF @errnum<>0
SELECT 'Error'
EXEC spDivision1 5,0
Msg 8134, Level 16, State 1, Procedure spDivision1, Line 4
Divide by zero error encountered.
(1 row(s) affected)
The return code is changed automatically to store the latest @@ERROR value if no RETURN statement is present. Even if there are more statements after the error occurred, the error code is still preserved.
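A minimal sketch of checking that implicit return code (assuming the spDivision1 procedure above has been created):
DECLARE @rc int
EXEC @rc = spDivision1 5, 0
SELECT @rc AS ReturnCode -- per the note above, expected to be non-zero after the divide-by-zero error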
Locking v/s Multi-Versioning
Posted: December 7, 2012 in Database AdministratorTags: Deadlock, Difference between Locking and Multi Versioning, Isolation Level, Locking, Locking and Multi-Versioning, Locking v/s Multi-Versioning, Multi Version, Read committed, Record Multi-Version, Row Multi version, SQL Server
Locking is a means of not allowing any other transaction to take place when one is already in progress. The data is locked, and no modification can take place until the transaction either succeeds or fails; the lock has to be taken before the data is processed. Multi-versioning, by contrast, is an alternative to locking for controlling concurrency. It provides an easy way to view and modify data, allowing two users to view and read the data while a transaction is in progress. Multiversion concurrency control is described in some detail in the 1981 paper "Concurrency Control in Distributed Database Systems" by Philip Bernstein and Nathan Goodman.
Multi-versioning, as the name implies, means each record in the system might have multiple versions visible to different transactions. When a transaction modifies a record, a new version is written to the database, and a previous version, representing only the difference between the version of the record that was read by the transaction and the new value of the record, is written as a back version of that record.
Read committed isolation using row versioning is somewhere in between Locks and Multi-Versioning. Under this isolation level, read operations do not acquire locks against the active live data. However, with update operations the process is the same for this isolation level as it is for the default read committed isolation level, The selection of rows to update is done by using a blocking scan where an update lock is taken on the data row as data values are read.
Snapshot isolation uses purely Multi-Versioning because data that is to be modified is not actually locked in advance, but the data is locked when it is selected for modification. When a data row meets the update criteria, the snapshot transaction verifies that the data has not been modified by another transaction after the snapshot transaction started. If the data has not been modified by another transaction, the snapshot transaction locks the data, updates the data, releases the lock, and moves on. If the data has been modified by another transaction, an update conflict occurs and the snapshot transaction rolls back.
Although locking can be the best concurrency-control choice for applications that require data consistency and inter-transaction protection, it can cause writers to block readers. If a transaction changes a row, another transaction cannot read the row until the writer commits. There are cases where waiting for the change to complete is the correct response; however, there are cases where the previous transactionally consistent state of the row is enough.
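A minimal sketch of turning on snapshot isolation and reading under it (the database and table names are illustrative):
ALTER DATABASE VirendraTest SET ALLOW_SNAPSHOT_ISOLATION ON
-- then, in a session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM Employee  -- reads committed row versions; not blocked by concurrent writers
COMMIT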
Reducing BLOCKING and DEADLOCKING issues
Posted: December 4, 2012 in Database AdministratorTags: Block, Blocking in SQL Server, Deadlocks in SQL Server, Improve SQL Server Performance, Locks, Performance enhancement, RCSI, Read-Committed Snapshot Isolation level, Reducing blocking and deadlocking issues, SQL Server, Transaction, Transaction level isolation
As we know blocking
occurs when one SPID holds a lock on a specific resource and a second
SPID attempts to acquire a conflicting lock type on the same resource.
Typically, the time frame for which the first SPID locks the resource is
very small. When it releases the lock, the second connection is free to
acquire its own lock on the resource and continue processing. This is
normal behavior and may happen many times throughout the course of a day
with no noticeable effect on system performance.
By using the read-committed snapshot isolation level introduced in Microsoft SQL Server 2005, we can reduce blocking and deadlocking issues caused by lock contention.Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb.
As per BOL, the read-committed snapshot isolation level provides the following benefits:
- SELECT statements do not lock data during a read operation. Read transactions do not block write transactions and vice versa.
- Since
SELECT statements do not acquire locks in most cases, the number of
locks required by a transaction is reduced, which reduces the system
overhead required to manage locks.
- The possibility of blocking is significantly reduced.
- SELECT
statements can access the last committed value of the row while other
transactions are updating the row without getting blocked.
- The number of blocks and deadlocks is reduced.
- Fewer lock escalations occur.
- Readers DO NOT block Writers
- Writers DO NOT block Readers
- Writers DO block Writers
REMEMBER:
TempDB is a critical resource for the performance of Read-Committed
Snapshot Isolation. RCSI puts a copy of the records that are being
modified in TempDB before the transaction is committed so you need to
make sure that TempDB has appropriate space and is placed on fast drives
that ideally are isolated away from your user database data and log
files.
The default syntax is:
SET TRANSACTION ISOLATION LEVEL
{ READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE }
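Read-committed snapshot itself is a database-level setting rather than a session-level one; a minimal sketch of enabling it (the database name is illustrative, and the statement needs exclusive access to the database to complete):
ALTER DATABASE VirendraTest SET READ_COMMITTED_SNAPSHOT ON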
Running Queries Details
Posted: November 28, 2012 in Database AdministratorTags: Advance SP_WHO, Advance SP_WHO2, Advance version of SP_WHO, Advance version of SP_WHO2, Blocked By, current command, current Query, current running query, Database Administrator, DBA, HOST IP, HOST NAME, Running Queries Details, seeing current running command, SP_who, SP_WHO2, SQL Server, Whats goin on server
As we know, to check what's going on at the server level we commonly use SP_WHO or SP_WHO2, but neither gives a clear picture of the exact commands currently running on the server; they show the command type, not the exact full text string that's currently running. Here is some DMV manipulation to find the same.
SELECT SPID = er.session_id, BlkBy = er.blocking_session_id
,ElapsedMS = er.total_elapsed_time
,CPU = er.cpu_time
,IOReads = er.logical_reads + er.reads
,IOWrites = er.writes
,Executions = ec.execution_count
,CommandType = er.command
,ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)
,SQLStatement = SUBSTRING(qt.text, er.statement_start_offset/2,
(CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2 ELSE
er.statement_end_offset END - er.statement_start_offset)/2)
,Status = ses.status
,[Login] = ses.login_name
,Host = ses.host_name
,DBName = DB_Name(er.database_id)
,LastWaitType = er.last_wait_type
,StartTime = er.start_time
,Protocol = con.net_transport
,transaction_isolation = CASE ses.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END
,ConnectionWrites = con.num_writes
,ConnectionReads = con.num_reads
,ClientAddress = con.client_net_address
,Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
LEFT JOIN sys.dm_exec_sessions ses ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con ON con.session_id = ses.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) as qt
OUTER APPLY ( SELECT execution_count = MAX(cp.usecounts) FROM sys.dm_exec_cached_plans cp
WHERE cp.plan_handle = er.plan_handle ) ec
ORDER BY er.blocking_session_id DESC, er.logical_reads + er.reads DESC, er.session_id
Similarity and Difference between Truncate and Delete
Posted: November 23, 2012 in Database AdministratorTags: Delete, Delete in SQL Server, Difference Between Delete and Truncate, Similarity and Difference between Truncate and Delete, SQL Server DELETE, SQL Server Truncate, Truncate, Truncate in SQL Server
Similarity
Both commands delete only the data of the specified table; they cannot remove the table's structure.
Difference
- TRUNCATE is a DDL (data definition language) command whereas DELETE is a DML (data manipulation language) command.
- We can’t execute a trigger in case of TRUNCATE whereas with DELETE command, we can execute a trigger.
- TRUNCATE is faster than DELETE, because when you use DELETE the deleted data is stored in rollback space, from where you can get the data back after deletion. TRUNCATE does not store the data in rollback space; it deletes it directly, so you can't get the deleted data back when you use TRUNCATE.
- We can use any condition in a WHERE clause with DELETE, but you can't do that with TRUNCATE (a minimal illustration follows this list).
- If the table is referenced by any foreign key constraints, TRUNCATE will not work.
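A minimal illustration of the WHERE-clause difference, using a throwaway temp table:
CREATE TABLE #Demo (id int)
INSERT INTO #Demo VALUES (1), (2), (3)
DELETE FROM #Demo WHERE id = 1   -- conditional delete is allowed
TRUNCATE TABLE #Demo             -- removes all remaining rows; no WHERE clause allowed
DROP TABLE #Demo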
SP_EXECUTESQL v/s EXECUTE
Posted: November 22, 2012 in Database AdministratorTags: Difference between sp_executesql and execute, SP_EXECUTESQL v/s EXEC, sp_executesql v/s execute, SP_EXECUTESQL v/s EXECUTE/EXEC, What is the difference between sp_executesql and execute, When to use sp_executesql and execute, which one better sp_executesql or execute
sp_executesql is used for parameterized statements, while EXEC does not support parameterized statements. Parameterized statements are more secure in terms of SQL injection and allow better reuse of the plan cache, which reduces overhead on the server and boosts performance. sp_executesql executes a string of Transact-SQL in its own self-contained batch. When it is run, SQL Server compiles the code in the string into an execution plan that is separate from the batch that contained the sp_executesql and its string.
Suppose a query takes a parameter, e.g. "EmpID". When we run the query with EmpID as 1 and as 2, it creates two different cache entries (one each for values 1 and 2 respectively).
-- First clear the proc cache
DBCC FREEPROCCACHE
GO
-- Let's check
-- Using EXEC
Declare @strQuery nvarchar(1000)
Select @strQuery = 'Select * from Employee where EmpID = ''1'''
Exec (@strQuery)
Select @strQuery = 'Select * from Employee where EmpID = ''2'''
Exec (@strQuery)
-- Using SP_EXECUTESQL
--Declare @strQuery nvarchar(1000)
Select @strQuery = 'Select * from Employee where EmpID = @EmpID'
Exec sp_executesql @strQuery, N'@EmpID int', 1
Exec sp_executesql @strQuery, N'@EmpID int', 2
-- Let's check the execution count for both
Select sqlTxt.text, QS.execution_count from sys.dm_exec_query_stats QS
Cross Apply (Select [text] from sys.dm_exec_sql_text(QS.sql_handle)) as sqlTxt
It means that for unparameterized (EXEC) queries, the cached plan is reused only if we ask for the same id again, so the cached plan is not of any major use. In the case of SP_EXECUTESQL, i.e. for parameterized queries, the cached plan is created only once and is reused 'n' number of times, similar to a stored procedure, so this gives better performance.
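sp_executesql can also return values through OUTPUT parameters, something a plain EXEC of a string cannot do; a minimal sketch (the Employee table is assumed from the example above):
Declare @sql nvarchar(200), @cnt int
Select @sql = N'Select @cnt = count(*) from Employee'
Exec sp_executesql @sql, N'@cnt int OUTPUT', @cnt = @cnt OUTPUT
Select @cnt as EmployeeCount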
sp_executesql works as "Forced Statement Caching" while EXEC works as "Dynamic String Execution".
Summarizing Data with ROLLUP, CUBE, GROUPING SETS, COMPUTE and COMPUTE BY
Posted: November 20, 2012 in Database AdministratorTags: Avoiding nested queries, Column Name for CUBE, Column Name for ROLLUP, Command for Summarizing Data, COMPUTE and COMPUTE BY, Compute and compute by in sql server, CUBE, CUBE in SQL server, Difference between ORDER BY and the COMPUTE BY, Difference between ROLLUP and CUBE, GROUPING SETS, Grouping Sets in SQL Server, Header for CUBE, Header for ROLLup, How to group data, Query Performance, Query to group Data, Reporting from T-SQL, ROLLUP in SQL Server, SQL Serer, SQL Server Summarizing Data, Summarizing Data commnads in SQL Server, Summarizing Data in SQL Server, Summarizing Data with COMPUTE, Summarizing Data with COMPUTE BY, Summarizing Data with CUBE, Summarizing Data with GROUPING SETS, Summarizing Data with ROLLUP, Summarizing groups of by compute clause, T-SQL Summarizing data Commands
It's very common: the GROUP BY statement, used to summarize data, is almost as old as SQL itself. Microsoft introduced the additional ROLLUP and CUBE operators to add power to the GROUP BY clause as early as SQL Server 6.5. The CUBE and ROLLUP operators are both part of the GROUP BY clause of the SELECT statement, while the COMPUTE and COMPUTE BY operators are also associated with GROUP BY.
A very new feature in SQL Server 2008 is the GROUPING SETS clause, which allows us to easily specify combinations of field groupings in our queries to see different levels of aggregated data.
These operators generate result sets that
contain both detail rows for each item in the result set and summary
rows for each group showing the aggregate totals for that group. The
GROUP BY clause can be used to generate results that contain aggregates
for each group, but no detail rows.
COMPUTE and COMPUTE BY are supported for
backward compatibility. The ROLLUP operator is preferred over either
COMPUTE or COMPUTE BY. The summary values generated by COMPUTE or
COMPUTE BY are returned as separate result sets interleaved with the
result sets returning the detail rows for each group, or a result set
containing the totals appended after the main result set. Handling these
multiple result sets increases the complexity of the code in an
application. Neither COMPUTE nor COMPUTE BY is supported with server cursors, whereas ROLLUP is. CUBE and ROLLUP generate a single result set
containing embedded subtotal and total rows. The query optimizer can
also sometimes generate more efficient execution plans for ROLLUP than
it can for COMPUTE and COMPUTE BY. When GROUP BY is used without these
operators, it returns a single result set with one row per group
containing the aggregate subtotals for the group. There are no detail
rows in the result set.
To demonstrate all these, let's create a sample SalesData table:
-- Create a Sample SalesData table
CREATE TABLE SalesData (EmpCode Char(5), SalesYear INT, SalesAmount MONEY)
-- Insert some rows in the SalesData table
INSERT SalesData VALUES(1, 2005, 10000),(1, 2006, 8000),(1, 2007, 17000),
(2, 2005, 13000),(2, 2007, 16000),(3, 2006, 10000),
(3, 2007, 14000),(1, 2008, 15000),(1, 2009, 15500),
(2, 2008, 19000),(2, 2009, 16000),(3, 2008, 8000),
(3, 2009, 7500),(1, 2010, 9000),(1, 2011, 15000),
(2, 2011, 5000),(2, 2010, 16000),(3, 2010, 12000),
(3, 2011, 14000),(1, 2012, 8000),(2, 2006, 6000),
(3, 2012, 2000)
ROLLUP: The ROLLUP operator generates reports that contain subtotals and totals. The ROLLUP option, placed after the GROUP BY clause, instructs SQL Server to generate an additional total row. Let's see an example with our test data from the SalesData table.
Select EmpCode, sum(SalesAmount) as SalesAMT from SalesData Group by ROLLUP(EmpCode)
will show result as,
Here we can clearly see the sum across all EmpCode values on the row with the NULL title. Once a guy asked about this NULL: how to give this NULL a name, say 'Grand Total'? Here is a tricky little manipulation that produces the desired output, and it can be adapted wherever required.
Select case when grouping(EmpCode)=1 THEN 'Grand Total' else EmpCode end as EmpCode,
sum(SalesAmount) as SalesAMT from SalesData Group by ROLLUP(EmpCode)
will show result as,
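(Expected output, computed by hand from the sample rows above, so worth verifying:)
EmpCode       SalesAMT
1             97500.00
2             91000.00
3             67500.00
Grand Total   256000.00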
In the ROLLUP parameter we can also specify multiple grouping columns, like ROLLUP(EmpCode, SalesYear); the ROLLUP output is then a hierarchy of values over the selected columns.
CUBE: The CUBE operator is specified in the GROUP BY clause of a SELECT statement. The select list contains the dimension columns and aggregate function expressions, and the GROUP BY specifies the dimension columns using the CUBE keyword. The result set contains all possible combinations of the values in the dimension columns, together with the aggregate values from the underlying rows that match each combination of dimension values; CUBE thus generates a result set that represents aggregates for all combinations of values in the selected columns. Let's see examples with our test data from the SalesData table.
Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)
will show as,
Select SalesYear, EmpCode, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)
will show as,
The
ROLLUP and CUBE aggregate functions generate subtotals and grand totals
as separate rows, and supply a null in the GROUP BY column to indicate
the grand total.
The difference between the CUBE and ROLLUP operators is that CUBE generates a result set that shows the aggregates for all combinations of values in the selected columns, whereas ROLLUP generates a result set that shows the aggregates for a hierarchy of values in the selected columns. Also, the ROLLUP operator provides only one level of summarization.
GROUPING SETS: The GROUPING SETS function allows you to choose whether to see subtotals and grand totals in your result set. It gives us versatility, as we can:
- include just one total
- include different levels of subtotal
- include a grand total
- choose the position of the grand total in the result set
Let's see an example:
Select EmpCode,SalesYear,sum(SalesAmount) as SalesAMT from SalesData Group by GROUPING SETS (EmpCode,SalesYear)
COMPUTE & COMPUTE BY: The
summary values generated by COMPUTE appear as separate result sets in
the query results. The results of a query that include a COMPUTE clause
are like a control-break report. This is a report whose summary values
are controlled by the groupings, or breaks, that you specify. You can
produce summary values for groups, and you can also calculate more than
one aggregate function for the same group.
When COMPUTE is specified with the optional BY clause, there are two result sets for each group that qualifies for the SELECT:
- The first result set for each group has the set of detail rows that contain the select list information for that group.
- The
second result set for each group has one row that contains the subtotals
of the aggregate functions specified in the COMPUTE clause for that
group.
When COMPUTE is specified without the optional BY clause, there are two result sets for the SELECT:
- The first result set for each group has all the detail rows that contain the select list information.
- The second result set has one row that contains the totals of the aggregate functions specified in the COMPUTE clause.
Comparing COMPUTE to GROUP BY
To summarize the differences between COMPUTE and GROUP BY:
- GROUP
BY produces a single result set. There is one row for each group
containing only the grouping columns and aggregate functions showing the
subaggregate for that group. The select list can contain only the
grouping columns and aggregate functions.
-
COMPUTE produces multiple result sets. One type of result set contains the detail rows for each group containing the expressions from the select list. The other type of result set contains the subaggregate for a group, or the total aggregate for the SELECT statement. The select list can contain expressions other than the grouping columns or aggregate functions. The aggregate functions are specified in the COMPUTE clause, not in the select list.
Let's see an example.
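A minimal sketch of COMPUTE BY on the SalesData table above; since COMPUTE is discontinued in SQL Server 2012, this runs only on SQL Server 2008 R2 and earlier:
SELECT EmpCode, SalesYear, SalesAmount
FROM SalesData
ORDER BY EmpCode
COMPUTE SUM(SalesAmount) BY EmpCode  -- one detail result set plus a subtotal result set per EmpCode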
The COMPUTE and COMPUTE BY clauses are provided for backward compatibility only; use ROLLUP instead, as both are discontinued from SQL Server 2012.
Magic Tables
Posted: November 20, 2012 in Database AdministratorTags: deleted table in sql server, inserted table in sql server, Magic Table Example, Magic Table example in SQL Server, magic table with example, Magic Tables, SQL Server, SQL Server Magic Table, SQL Server Magic Table Example, SQL Server Magic Tables, Trigger use Magic Table, Trigger's Magic Table, use of magic table or virtual table, virtual table with example
Magic tables are invisible tables created by MS SQL Server during INSERT/UPDATE/DELETE operations on any table. These tables temporarily hold the affected values while the DML statement completes.
Magic tables are internal tables used by SQL Server to track recently inserted, deleted and updated data. When we insert into or delete from any table, the affected rows are also placed in the INSERTED or DELETED magic table, with whose help we can see, and reuse, the data the DML statement just changed. There are basically two magic tables in SQL Server, INSERTED and DELETED; an update is represented with the help of both: the INSERTED table contains the new values and the DELETED table contains the old values. Magic tables do not contain columns of the data types text, ntext or image. They are maintained by SQL Server for internal processing whenever an INSERT, UPDATE or DELETE occurs on a table.
In SQL Server 6.5, 7.0 and 2000 these magic tables can be used only with triggers, while SQL Server 2005, 2008 and 2008 R2 can use them with triggers and, via the OUTPUT clause, without triggers.
Suppose we have an Employee table. We will create triggers to see the data within the virtual tables INSERTED and DELETED, and then look at the same without triggers.
For the INSERTED virtual table, using a trigger (magic tables with triggers):
CREATE TRIGGER trg_Emp_Ins ON Employee
FOR INSERT
AS
begin
/* Here you can write your required code; this is only a demonstration of the magic tables. */
Print 'Data in INSERTED Table'
SELECT * FROM INSERTED -- shows data in the INSERTED virtual table
Print 'Data in DELETED Table'
SELECT * FROM DELETED -- shows data in the DELETED virtual table (empty for an insert)
end
-- Now insert a new record in the Employee table to see the data within the INSERTED virtual table
INSERT INTO Employee(Name, BasicSalary) VALUES('Virendra', 2000)
For the INSERTED virtual table, without a trigger (magic tables with non-triggers):
-- Use the INSERTED magic table with the OUTPUT clause to insert values into a temp table or table variable
create table #CopyEMP(CName varchar(12), CSalary varchar(12))
-- Now insert a new record in the Employee table to see the data within the INSERTED virtual table
Insert into Employee(Name, BasicSalary) OUTPUT INSERTED.Name, INSERTED.BasicSalary into #CopyEMP values('Ram', 100)
Select * from #CopyEMP
For the DELETED virtual table, using a trigger (magic tables with triggers):
CREATE TRIGGER trg_Emp_Del ON Employee
FOR DELETE
AS
Begin
/* Here you can write your required code; this is only a demonstration of the magic tables. */
Print 'INSERTED Table'
SELECT * FROM INSERTED -- shows data in the INSERTED virtual table (empty for a delete)
Print 'DELETED Table'
SELECT * FROM DELETED -- shows data in the DELETED virtual table
End
-- Now delete a few records in the Employee table to see the data within the DELETED virtual table
DELETE Employee where BasicSalary > 10000
For the DELETED virtual table, without a trigger (magic tables with non-triggers):
-- Use the DELETED magic table with the OUTPUT clause to insert values into a temp table or table variable
create table #CopyEMP2(CName varchar(12), CSalary varchar(12))
-- Now delete records in the Employee table to see the data within the DELETED virtual table
DELETE Employee OUTPUT DELETED.Name, DELETED.BasicSalary into #CopyEMP2 where BasicSalary > 10000
Select * from #CopyEMP2
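Since an update populates both virtual tables at once, here is a hedged sketch capturing old and new values in a single statement via the OUTPUT clause (the Employee table is assumed from above):
UPDATE Employee
SET BasicSalary = BasicSalary * 1.10
OUTPUT DELETED.Name, DELETED.BasicSalary AS OldSalary, INSERTED.BasicSalary AS NewSalary
WHERE Name = 'Virendra'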
Ranking Functions – ROW_NUMBER,RANK,DENSE_RANK,NTILE
Posted: November 19, 2012 in Database AdministratorTags: DENSE_RANK, How to use SQL Server Ranking Functions, NTILE, RANK, Ranking Functions (Transact-SQL), Ranking Functions – ROW_NUMBER, Ranking Functions DENSE_RANK, Ranking Functions NTILE, Ranking Functions RANK, Ranking Functions T-SQL, ROW_NUMBER, SQL Server Ranking Functions, SQL Server's Ranking Functions
SQL
Server 2005 introduced four new ranking functions: ROW_NUMBER, RANK,
DENSE_RANK, and NTILE. These functions allow you to analyze data and
provide ranking values to result rows of a query. For example, you might
use these ranking functions for assigning sequential integer row IDs to
result rows or for presentation, paging, or scoring purposes.
All four ranking functions follow a similar syntax pattern:
function_name() OVER([PARTITION BY partition_by_list] ORDER BY order_by_list)
The basic syntax follows.
ROW_NUMBER() OVER ([<partition_by_clause>] <order_by_clause>)
RANK() OVER ([<partition_by_clause>] <order_by_clause>)
DENSE_RANK() OVER ([<partition_by_clause>] <order_by_clause>)
NTILE(integer_expression) OVER ([<partition_by_clause>] <order_by_clause>)
Ranking functions are a subset of the built in functions in SQL Server. They are used to provide a rank of one kind or another to a set of rows in a partition. The partition can be the full result set, if there is no partition. Otherwise, the partitioning of the result set is defined using the partition clause of the OVER clause. The order of the ranking within the partition is defined by the order clause of OVER. Order is required even though partition is optional.
ROW_NUMBER: The ROW_NUMBER function returns a sequential value for every row in the results. It assigns 1 to the first row and increments the number for each subsequent row.
Syntax:
SELECT ROW_NUMBER() OVER (ORDER BY column-name), columns FROM table-name
OVER – Specify the order of the rows
ORDER BY – Provide sort order for the records
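A quick illustration, reusing the SalesData table created in the summarizing-data post above:
SELECT ROW_NUMBER() OVER (ORDER BY SalesAmount DESC) AS RowNum,
EmpCode, SalesYear, SalesAmount
FROM SalesData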
RANK: The RANK function returns a rank based on the sort order. When two rows have the same ORDER BY value, both rows get the same rank, and the rank is incremented accordingly for the next distinct value.
Syntax:
SELECT RANK() OVER ([<partition_by_clause>] <order_by_clause>)
Partition_by_clause – groups the result set into partitions to which the RANK function is applied. Order_by_clause – orders the rows within each partition.
In the above example, the rank is assigned based on the Employee Name sort order. The first two rows in the list have the same Employee Name, so those rows are given the same rank, followed by the next rank for another set of rows because there are two rows that are ranked higher. Therefore, the RANK function does not always return consecutive integers.
DENSE_RANK: The DENSE_RANK function is very similar to RANK but returns ranks without any gaps; it ranks sequentially, one rank per unique ORDER BY value.
Syntax:
SELECT DENSE_RANK() OVER ([<partition_by_clause>] <order_by_clause>)
Partition_by_clause – groups the result set into partitions to which the DENSE_RANK function is applied. Order_by_clause – orders the rows within each partition.
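To see the gap behaviour side by side, a small sketch on the same SalesData table (ties on SalesAmount leave gaps in RANK but not in DENSE_RANK):
SELECT EmpCode, SalesYear, SalesAmount,
RANK() OVER (ORDER BY SalesAmount DESC) AS RankNo,
DENSE_RANK() OVER (ORDER BY SalesAmount DESC) AS DenseRankNo
FROM SalesData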
NTILE: NTILE() splits the set of rows into the specified number of groups. It splits the rows equally across groups when the number of rows is divisible by the number of groups; otherwise the top groups get one row more than the later groups. For example, if the total number of rows is 6 and the number of groups is 4, the first two groups will have 2 rows and the two remaining groups will have 1 row each.
Syntax:
SELECT NTILE (integer_expression) OVER ([< partition_by_clause >] <order_by_clause>) </order_by_clause>
SELECT NTILE (integer_expression) OVER ([< partition_by_clause >] <order_by_clause>) </order_by_clause>
SELECT NTILE (integer_expression) OVER ([< partition_by_clause >] )
(integer_expression) – The number of groups into which each partition must be divided.EXCEPT v/s NOT IN clause
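To see all four functions side by side, here is a minimal sketch; the @Scores table variable and its values are hypothetical, chosen so that a tie and the uneven NTILE split from the example above are both visible:
DECLARE @Scores TABLE (EmpName VARCHAR(20), Score INT);
INSERT INTO @Scores VALUES ('Amit', 90), ('Anil', 90), ('Rohit', 80), ('Neha', 70), ('Pooja', 70), ('Raj', 60);
SELECT EmpName, Score,
ROW_NUMBER() OVER (ORDER BY Score DESC) AS RowNum,   -- 1, 2, 3, 4, 5, 6
RANK() OVER (ORDER BY Score DESC) AS Rnk,            -- 1, 1, 3, 4, 4, 6 (gaps after ties)
DENSE_RANK() OVER (ORDER BY Score DESC) AS DenseRnk, -- 1, 1, 2, 3, 3, 4 (no gaps)
NTILE(4) OVER (ORDER BY Score DESC) AS GroupNo       -- 1, 1, 2, 2, 3, 4 (6 rows into 4 groups)
FROM @Scores;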
EXCEPT v/s NOT IN clause
Posted: November 19, 2012 in Database AdministratorTags: Comparison between EXCEPT operator and NOT IN, Difference between Except and Not In, Difference between EXCEPT and NOT IN Operator, EXCEPT, EXCEPT v/s NOT IN clause, except vs not in in sql, EXCEPT vs. NOT EXISTS, NOT IN, use of EXCEPt and NOT IN, When to use EXCEPT vs. NOT IN?, which is better EXCEPT or NOT IN
- The EXCEPT operator returns the rows that are in the first result set but not in the second. EXCEPT was introduced in SQL Server 2005.
- The NOT IN operator returns all rows from the first result set that do not exist in the second SELECT statement.
- When we combine two queries using the EXCEPT clause, it returns the distinct rows from the first SELECT statement that are not returned by the second one.
- The EXCEPT clause follows the same syntax pattern as the UNION operator and works like the MINUS clause in Oracle.
The syntax of the EXCEPT clause is as follows:
SELECT column1 [, column2 ]
FROM table1 [, table2 ]
[WHERE condition]
EXCEPT
SELECT column1 [, column2 ]
FROM table1 [, table2 ]
[WHERE condition]
- The difference between EXCEPT and the NOT IN clause is that the EXCEPT operator returns all distinct rows from the first SELECT statement that do not exist in the rows returned by the second SELECT statement, whereas NOT IN returns all rows, duplicates included, from the first SELECT statement that do not exist in the rows returned by the second SELECT statement.
-- Let's create two sample tables (I am creating table variables here)
DECLARE @VirendraTestTable1 TABLE (id INT, course VARCHAR(50));
DECLARE @VirendraTestTable2 TABLE (id INT, course VARCHAR(50));
-- Insert some sample data into table @VirendraTestTable1
INSERT INTO @VirendraTestTable1 VALUES (1,'ASP .NET'),(2,'SQL SERVER'),(2,'SQL SERVER'),(3,'FOXPRO'),(3,'FOXPRO');
-- Insert some sample data into table @VirendraTestTable2
INSERT INTO @VirendraTestTable2 VALUES (1,'ASP .NET'),(2,'SQL SERVER'),(2,'SQL SERVER');
-- Run the query with the EXCEPT operator; only distinct rows will be returned
SELECT 'EXCEPT output';
SELECT id, course FROM @VirendraTestTable1
EXCEPT
SELECT id, course FROM @VirendraTestTable2;
-- Run the query with the NOT IN operator; duplicate rows will exist in the result
SELECT 'NOT IN output';
SELECT id, course FROM @VirendraTestTable1
WHERE id NOT IN (SELECT id FROM @VirendraTestTable2);
Hence the conclusion is:
EXCEPT is defined in terms of distinctness: for example, (1 is distinct from 1) is false, (1 is distinct from NULL) is true, and (NULL is distinct from NULL) is false.
NOT IN is defined in terms of equality: for example, (1 = 1) is true, (1 = NULL) is unknown, and (NULL = NULL) is unknown.
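A tiny sketch of that distinctness-versus-equality difference (the table variables are hypothetical):
DECLARE @A TABLE (id INT);
DECLARE @B TABLE (id INT);
INSERT INTO @A VALUES (1), (NULL);
INSERT INTO @B VALUES (NULL);
-- EXCEPT: NULL is not distinct from NULL, so only the row with 1 is returned
SELECT id FROM @A EXCEPT SELECT id FROM @B;
-- NOT IN: every comparison against NULL is unknown, so no rows are returned
SELECT id FROM @A WHERE id NOT IN (SELECT id FROM @B);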
Common Table Expressions – CTE
Posted: November 19, 2012 in Database AdministratorTags: Common Table Expressions, CTE, CTE Advantages, CTE in SQL Server, CTE means, Solving nested queries with CTE - Common Table Expressions, SQL Server - What is CTE?, SQL Server 2005, SQL Server 2008, SQL Server Common Table Expressions, SQL Server CTE, Using CTE with nested queries
Hi,
It is a headache for developers and DBAs to write or read a complex SQL query built from a number of joins. Dealing with sub-queries often requires selecting part of the data from a sub-query, or joining a sub-query's data with many other tables; we can name such sub-queries with an alias or use them inline, but as our business requirements get more and more complex, the resulting query becomes unmaintainable. A Common Table Expression (CTE) is a way to define the sub-query once, name it using an alias, and later refer to the same data through that alias, just like what we do with a normal table. The CTE is part of the ANSI SQL standard.
A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement.
A CTE allows us to define the intermediate result up front and use it later when we actually bind the data into the output, whereas a sub-query is written inline, in parallel with the rest of the query.
The basic syntax structure for a CTE is:
WITH expression_name [ ( column_name [,...n] ) ] AS
( CTE_query_definition )
Let's see an example.
Using a sub-query:
SELECT * FROM
(SELECT ED.EmpName, ED.EmpDOB, AD.EmpAddress FROM AddressDetails AD
INNER JOIN EmployeeDetails ED ON ED.EmpCode = AD.EmpCode) AA
WHERE AA.EmpDOB < DATEADD(YEAR, -50, GETDATE()) -- employees older than 50
ORDER BY AA.EmpName
The same query as a CTE implementation:
WITH AA (Name, DOB, Address) -- provide column names for the CTE 'AA'
AS
(SELECT ED.EmpName, ED.EmpDOB, AD.EmpAddress FROM AddressDetails AD
INNER JOIN EmployeeDetails ED ON ED.EmpCode = AD.EmpCode)
SELECT * FROM AA WHERE DOB < DATEADD(YEAR, -50, GETDATE()) ORDER BY Name
CTEs offer the advantages of improved readability and ease of maintenance for complex queries. The query can be divided into separate, simple, logical building blocks, and these simple blocks can then be used to build more complex, interim CTEs until the final result set is generated.
A Common Table Expression offers the same functionality as a view, but is ideal for one-off usages where you don't necessarily need a view defined for the system: it is a substitute for a view when the general use of a view is not required, that is, when you do not have to store the definition in metadata. A CTE can also define a recursive query, and using a CTE we can reference the resulting table multiple times in the same statement.
As per MSDN, the following guidelines apply to nonrecursive common table expressions:
- A CTE must be followed by a single SELECT, INSERT, UPDATE, MERGE, or DELETE statement that references some or all of the CTE columns. A CTE can also be specified in a CREATE VIEW statement as part of the defining SELECT statement of the view.
- Multiple CTE query definitions can be defined in a nonrecursive CTE. The definitions must be combined by one of these set operators: UNION ALL, UNION, INTERSECT, or EXCEPT.
- A CTE can reference itself and previously defined CTEs in the same WITH clause. Forward referencing is not allowed.
- Specifying more than one WITH clause in a CTE is not allowed. For example, if a CTE_query_definition contains a subquery, that subquery cannot contain a nested WITH clause that defines another CTE.
- The following clauses cannot be used in the CTE_query_definition:
- COMPUTE or COMPUTE BY
- ORDER BY (except when a TOP clause is specified)
- INTO
- OPTION clause with query hints
- FOR XML
- FOR BROWSE
- When a CTE is used in a statement that is part of a batch, the statement before it must be followed by a semicolon.
- A query referencing a CTE can be used to define a cursor.
- Tables on remote servers can be referenced in the CTE.
- When executing a CTE, any hints that reference a CTE may conflict with other hints that are discovered when the CTE accesses its underlying tables, in the same manner as hints that reference views in queries. When this occurs, the query returns an error.
- When a CTE is the target of an UPDATE statement, all references to the CTE in the statement must match. For example, if the CTE is assigned an alias in the FROM clause, the alias must be used for all other references to the CTE. Ambiguous CTE references may produce unexpected join behavior and unintended query results.
Guidelines for Defining and Using Recursive Common Table Expressions
The following guidelines apply to defining a recursive common table expression:
- The recursive CTE definition must contain at least two CTE query definitions, an anchor member and a recursive member. Multiple anchor members and recursive members can be defined; however, all anchor member query definitions must be put before the first recursive member definition. All CTE query definitions are anchor members unless they reference the CTE itself.
- Anchor members must be combined by one of these set operators: UNION ALL, UNION, INTERSECT, or EXCEPT. UNION ALL is the only set operator allowed between the last anchor member and the first recursive member, and when combining multiple recursive members.
- The number of columns in the anchor and recursive members must be the same.
- The data type of a column in the recursive member must be the same as the data type of the corresponding column in the anchor member.
- The FROM clause of a recursive member must refer only one time to the CTE expression_name.
- The following items are not allowed in the CTE_query_definition of a recursive member:
- SELECT DISTINCT
- GROUP BY
- HAVING
- Scalar aggregation
- TOP
- LEFT, RIGHT, OUTER JOIN (INNER JOIN is allowed)
- Subqueries
- A hint applied to a recursive reference to a CTE inside a CTE_query_definition.
The following guidelines apply to using a recursive common table expression:
- All columns returned by the recursive CTE are nullable regardless of the nullability of the columns returned by the participating SELECT statements.
- An incorrectly composed recursive CTE may cause an infinite loop. For example, if the recursive member query definition returns the same values for both the parent and child columns, an infinite loop is created. To prevent an infinite loop, you can limit the number of recursion levels allowed for a particular statement by using the MAXRECURSION hint and a value between 0 and 32,767 in the OPTION clause of the INSERT, UPDATE, MERGE, DELETE, or SELECT statement. This lets you control the execution of the statement until you resolve the code problem that is creating the loop. The server-wide default is 100. When 0 is specified, no limit is applied. Only one MAXRECURSION value can be specified per statement.
- A view that contains a recursive common table expression cannot be used to update data.
- Cursors may be defined on queries using CTEs. The CTE is the select_statement argument that defines the result set of the cursor. Only fast forward-only and static (snapshot) cursors are allowed for recursive CTEs. If another cursor type is specified in a recursive CTE, the cursor type is converted to static.
- Tables on remote servers may be referenced in the CTE. If the remote server is referenced in the recursive member of the CTE, a spool is created for each remote table so the tables can be repeatedly accessed locally. If it is a CTE query, Index Spool/Lazy Spools is displayed in the query plan and will have the additional WITH STACK predicate. This is one way to confirm proper recursion.
- Analytic and aggregate functions in the recursive part of the CTE are applied to the set for the current recursion level and not to the set for the CTE. Functions like ROW_NUMBER operate only on the subset of data passed to them by the current recursion level and not the entire set of data passed to the recursive part of the CTE.
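A minimal sketch of a recursive CTE following the guidelines above: the anchor member seeds the set, the recursive member references the CTE exactly once, the two are combined with UNION ALL, and MAXRECURSION caps the recursion depth:
WITH Numbers (n) AS
(
SELECT 1        -- anchor member
UNION ALL
SELECT n + 1    -- recursive member, refers to the CTE once
FROM Numbers
WHERE n < 10
)
SELECT n FROM Numbers
OPTION (MAXRECURSION 100); -- the server-wide default is also 100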
SQL Injection
Posted: November 19, 2012 in Database AdministratorTags: Security breach in SQL Server, SQL Attacks, SQL Injection, SQL injection: attacks and defenses, SQL Server, SQL Server Injection, SQL vulnerabilities handling
SQL injection is a technique often used to attack a website. It is an attack in which the attacker takes advantage of an insecure application on the internet by running SQL commands against its database and stealing information from it, all through the GUI of the website. This attack can happen in applications where SQL queries are generated in code; the attacker tries to inject their own SQL into the statement that the application will use to query the database.
Examples-
1) For example, suppose the query string below is used for a search feature on a website, and a user has entered "VIRENDRA" as the keyword to search. If the code-behind places the keyword directly into the SQL statement, it will look like:
String sql = "Select EmpName, City from EmployeeMaster where EmpName like '%" + txtSearch.Text + "%'";
But the attacker might enter a keyword like:
' UNION SELECT name, type, id from sysobjects;--
This way the attacker can get the details of all the objects in the application database and, using that, steal further information.
2) Web applications use string queries, where the string contains both the query itself and its parameters. The string is built using server-side script languages such as ASP, JSP and CGI, and is then sent to the database server as a single SQL statement. The following example demonstrates an ASP code that generates a SQL query.
sql_query= "SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = " & Request.QueryString("ProductID")
The call Request.QueryString(“ProductID”) extracts the value of the Web form variable ProductID so that it can be appended as the SELECT condition.
When a user enters the following URL:
http://www.mydomain.com/products/products.asp?productid=123
The corresponding SQL query is executed:
SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123
An attacker may abuse the fact that the ProductID parameter is passed to the database without sufficient validation. The attacker can manipulate the parameter's value to build malicious SQL statements. For example, setting the value "123 OR 1=1" for the ProductID variable results in the following URL:
http://www.mydomain.com/products/products.asp?productid=123 or 1=1
The corresponding SQL Statement is:
SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123 OR 1=1
This condition would always be true and all ProductName and ProductDescription pairs are returned. The attacker can manipulate the application even further by inserting malicious commands. For example, an attacker can request the following URL:
http://www.mydomain.com/products/products.asp?productid=123; DROP TABLE Products
In this example the semicolon is used to pass the database server multiple statements in a single execution. The second statement is “DROP TABLE Products” which causes SQL Server to delete the entire Products table.
An attacker may use SQL injection to retrieve data from other tables as well. This can be done using the SQL UNION SELECT statement. The UNION SELECT statement allows the chaining of two separate SQL SELECT queries that have nothing in common. For example, consider the following SQL query:
SELECT ProductName, ProductDescription FROM Products WHERE ProductID = '123' UNION SELECT Username, Password FROM Users;
The result of this query is a table with two columns, containing the results of the first and second queries, respectively. An attacker may use this type of SQL injection by requesting the following URL:
http://www.mydomain.com/products/products.asp?productid=123 UNION SELECT user-name, password FROM USERS
The security model used by many Web applications assumes that an SQL query is a trusted command. This enables attackers to exploit SQL queries to circumvent access controls, authentication, and authorization checks. In some instances, SQL queries may allow access to host operating-system-level commands. This can be done using stored procedures. Stored procedures are SQL procedures usually bundled with the database server; for example, the extended stored procedure xp_cmdshell executes operating system commands in the context of Microsoft SQL Server. Using the same example, the attacker can set the value of ProductID to "123;EXEC master..xp_cmdshell dir--", which returns the list of files in the current directory of the SQL Server process; any other DOS command can be passed through xp_cmdshell in the same way.
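On the defense side, the usual remedy is to never concatenate user input into the SQL text. A minimal T-SQL sketch using sp_executesql with a bound parameter (reusing the hypothetical EmployeeMaster search from example 1 above):
DECLARE @Search NVARCHAR(50) = N'VIRENDRA'; -- user input, treated strictly as data
DECLARE @sql NVARCHAR(MAX) =
N'SELECT EmpName, City FROM EmployeeMaster WHERE EmpName LIKE ''%'' + @kw + ''%''';
-- @kw is bound as a parameter, so input like '' UNION SELECT ... cannot change the query shape
EXEC sp_executesql @sql, N'@kw NVARCHAR(50)', @kw = @Search;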
SQL Server Hardening
Posted: November 14, 2012 in Database AdministratorTags: SQL Server, SQL Server Configuration Manager, SQL Server 2008, SQL Server Hardening, Microsoft Baseline Security Analyzer, Best Practices Analyzer, Hardening SQL Server, SQL Server Security Best Practices, SQL Server Security, SQL Server hardening checklist, Security Best Practices
Hi,
As DBAs, we often face questions from auditors or clients such as: does your SQL Server follow SQL Server hardening best practices? Here are a few of those practices:
- During installation, install only the required components. When the SQL Server installation is complete, harden the SQL Server environment.
- After the installation, use the SQL Server Configuration Manager tool in order to disable unnecessary features and services.
- Install the most recent critical fixes and service packs for both Windows and SQL Server.
- When selecting an authentication mode, Windows Authentication is the more secure choice.
- If there is still a need to use SQL Authentication, enforce a strong password policy.
- Do not use the SA account for day-to-day administration, for logging on to the server remotely, or for having applications use it to connect to SQL. It is best if the SA account is disabled and renamed (see the sketch after this list).
- Create a role-based security policy with the Security Configuration Wizard tool.
- Create policies with Policy-Based Management to enable or disable unnecessary features and services.
- After the server has been hardened, periodically assess the server's security using the MBSA (Microsoft Baseline Security Analyzer) and SQL Server BPA (Best Practices Analyzer).
- For production SQL Servers running mission-critical databases, either hide the instance or disable the SQL Server Browser service.
- Change the default ports associated with the SQL Server installation to put off hackers from port-scanning the server.
- Enable a firewall to filter unnecessary and unknown traffic.
- At the very least, set security auditing to failed login attempts; ideally, both failed and successful logins should be captured and monitored.
- If upgrading from previous versions of SQL Server, remove the BUILTIN/Administrators group from the SQL Server logins.
- Use the IIS Lockdown and URLScan tools to harden IIS.
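For the SA-account point above, a minimal sketch (the replacement login name is hypothetical):
-- Disable the built-in sa login and rename it to something non-obvious
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [srv_admin_renamed];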
Delete millions of Records from very huge Table
Posted: November 9, 2012 in Database AdministratorTags: best way to delete million rows, Delete many records from Large Table, Delete millions of Records from very huge Table, Delete Operations, Delete rcords from Table, Delete Records, Deletion from Huge Table, SQL Server, SQL Server Delete Millions of Rows Without hampering the Transaction Log, waitfor delay
Dear Friends, as you know, it is a great option to delete millions or billions of records in batched transactions; done this way, the log does not grow drastically, and recovery is faster in the event of a failure.
It is my observation that sometimes issuing a CHECKPOINT in each batch gives very fruitful performance in a production environment, because the Database Engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on the database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and also records information about the transaction log.
Let's see an example.
Please note: keep one point in mind here. If we use a date comparison to delete records, storing that value in a variable is much faster than computing it directly in the query's WHERE clause. Suppose we have to delete records older than 60 days; setting the date once up front makes a difference, instead of calculating it for each row.
DECLARE @DelDate DATETIME;
SET @DelDate = DATEADD(d, -60, GETDATE());
WHILE 1 = 1
BEGIN
-- Choose the batch size as per your record sizes.
DELETE TOP (1000) FROM MyTableName WHERE column_name < @DelDate;
IF @@ROWCOUNT < 1000 BREAK;
CHECKPOINT;
END
One more approach: this operation can generate a lot of disk I/O, so it is recommended that you put a WAITFOR DELAY 'time_to_pass' statement in the loop to reduce the stress on your server. Waiting even a quarter of a second between DELETE executions can reduce stress on your production environment; this works best on a production server when the number of records is large but not enormous.
DECLARE @DelDate DATETIME;
SET @DelDate = DATEADD(d, -60, GETDATE());
WHILE 1 = 1
BEGIN
-- Choose the batch size as per your record sizes.
DELETE TOP (1000) FROM MyTableName WHERE column_name < @DelDate;
IF @@ROWCOUNT < 1000 BREAK;
CHECKPOINT;
WAITFOR DELAY '00:00:00.250'; -- wait for a quarter second
END
Hope you like this way to get better performance.
SQL Server Level Optimization’s Tips and Tricks
Posted: November 7, 2012 in Database AdministratorTags: Point to remember for SQL Server Setting, SQL Server Basic Configuration, SQL Server best Practices, SQL Server configuration setting, SQL Server Default values, SQL Server Installation best practices, SQL Server level Optimization’s Tips and Tricks, SQL Server optimal performance, SQL Server Optimization, SQL Server Setting
As a SQL Server Administrator, We have to always follow some server specific settings, below are few among them.
- Login auditing is configured to "Failed logins only" by default. This records and audits failed login attempts and is a recommended security best practice; consider enabling auditing for failed logins.
- Use 2048-bit keys and above when creating asymmetric keys.
- There may be SQL Server Agent job steps that use the incorrect brackets to call out tokens. Starting with SQL Server 2005, the token syntax changed, and square brackets [ ] are no longer used to call out SQL Server Agent job step tokens.
- Windows Authentication is the default authentication mode and is much more secure than SQL Server Authentication. Windows Authentication uses the Kerberos security protocol, provides password policy enforcement with regard to complexity validation for strong passwords, provides support for account lockout, and supports password expiration.
- The SQL Server error log may contain messages that indicate either the failure of database file autogrow operations or that these operations took a long time to complete. An application will experience timeouts, or query performance will decrease, if database file growth happens during regular activity. Enable instant file initialization, pre-grow the database files to the required size, and use a growth value in absolute size.
- Review the backup schedules of all critical databases and schedule backups based on the various tolerance levels.
- If the combination of the affinity mask and affinity I/O mask is incorrect, configure the affinity mask and affinity I/O mask such that the CPU masks do not overlap.
- If the instance of SQL Server has recorded one or more potentially dangerous data inconsistency and data corruption errors in the past, then for each of the affected databases perform database consistency checks and take the appropriate decision accordingly.
- Databases that keep their data and log files on the same volume can cause contention for that device and result in poor performance; for this reason, it is recommended that data and log files be placed on separate volumes.
- Inconsistencies in a database can threaten the availability of the entire database, or portions of it, for the application. Schedule a DBCC consistency check of the problematic database.
- Never place database files and backup files on the same volume; always perform backups to a different volume or to a remote location/SAN.
- Always avoid the shrink option for managing database file sizes. Frequent grow and shrink operations can lead to excessive fragmentation and performance issues.
- If databases do not have CHECKSUM page protection, enable it on those databases, because CHECKSUM protection provides a high level of data-integrity guarantee when reading database pages from disk.
- Keep the Auto Close database option disabled; on systems where it is enabled, repeated opening and closing of the database can result in performance problems and timeout issues.
- Be informed about the recovery model; it is always recommended that the FULL or BULK-LOGGED recovery model be used on production servers.
- Some databases contain a transaction log file with a high number of virtual log files (VLFs). A high number of VLFs can affect database recovery performance and, consequently, various database maintenance operations that depend on recovery. In this case, always reconfigure the transaction log files for these databases; for details please visit http://support.microsoft.com/kb/2028436
- If the default extended event health session is either not active or has been changed, reset the changes made to it. The default extended event health session records critical events that can be useful for root-cause investigations of certain server conditions.
- If the max degree of parallelism option is not set to the recommended value, use the sp_configure stored procedure to set it to 8 or less (a sketch follows this list). Setting this option to a large value often results in unwanted resource consumption and performance degradation.
- Incorrect partition alignment can result in adverse I/O performance impacts on SQL Server queries and operations. The hard-disk partitions are configured optimally if partitions have a starting offset of 65536 bytes, or a multiple of 65536 bytes.
- Using cross-database or distributed transactions on a mirrored database could lead to inconsistencies after a failover.
- Enabling the guest account allows anonymous connections to your databases; the guest account should not have access to any database other than master, tempdb, and msdb. The guest user cannot be dropped; however, you can run REVOKE CONNECT FROM GUEST within any database other than master or tempdb to revoke the guest user's CONNECT permission and disable the guest user.
- The presence of Event ID 12 indicates memory corruption due to a firmware or BIOS issue. This problem can result in system instability and data corruption; check with the hardware manufacturer for updated firmware and BIOS.
- SQL Server reports Event ID 833 when I/O requests take longer than 15 seconds to finish. This indicates a potential problem with the disk I/O subsystem or configuration and affects SQL Server performance. Use system monitor disk counters to measure disk throughput and response rates, and perform appropriate configuration changes to ensure disk response times are acceptable.
- Event ID 825 indicates that SQL Server was unable to read data from the disk on the first try. Even though the retry was successful, this is usually an indication of an underlying issue that needs investigation and fixing before it becomes worse.
- Incorrect startup parameters for the SQL Server service can result in unexpected SQL engine behavior and can also cause startup problems. Use the SQL Server error log to review the startup parameters that the instance of SQL Server is currently using; then use SQL Server Configuration Manager to correct these parameters.
- Disable the lightweight pooling option using the sp_configure stored procedure. Fiber-mode scheduling for routine operations can decrease performance by inhibiting the regular benefits of context switching; furthermore, some components of SQL Server cannot function correctly in fiber mode.
- If the locks option is set to a nonzero value, large transactions can encounter the "out of locks" error message when the specified value is exceeded. Use the sp_configure system stored procedure to change the value of locks back to its default setting.
- The Microsoft Distributed Transaction Coordinator (MSDTC) needs to be configured properly.
- The value for the network packet size should not be too high. Excessive memory allocations are needed to support large network packet sizes and can cause memory loss and other stability issues on the server. Be sure the network packet size option is set to its default value.
- Granting users who are not administrators more than read permission on the Binn folder represents a potential security risk; only administrators should have full-control permission on the Binn folder.
- Check the permissions on the Data folder. Only server administrators and SQL Server service groups should have full-control permission on the Data folder; other users and groups should not have any access.
- The public database role has EXECUTE permissions on some of the Data Transformation Services (DTS) system stored procedures in the msdb database. A potential security issue occurs if an application uses a login that owns DTS packages to connect to an instance of SQL Server: in this scenario, there is the risk that a SQL injection attack could modify or delete existing DTS packages. Use the sp_dts_secure stored procedure to revoke public access to the DTS stored procedures.
- Every login that is created on the server is a member of the public server role; if that role is granted server permissions, every login on the server will have them. Do not grant server permissions to the public server role.
- Frequently review the event log entries that indicate crashes, and apply the solutions provided.
- Enable "Enforce password policy" and "Enforce password expiration" for all SQL Server logins; use ALTER LOGIN (Transact-SQL) to configure the password policy for a SQL Server login. Password complexity policies are designed to deter brute-force attacks by increasing the number of possible passwords, and disabling the password policy could allow the user to create a SQL Server login with a weak password.
- The sysadmin role may not be applied correctly. Sysadmin role membership should be granted to a limited set of users; check the membership and determine whether users other than Local System, the SQL Server built-in admin (SA), the SQL Server Engine and Agent service accounts, or service SIDs are granted sysadmin permission. Grant sysadmin role membership only to users who require it, and carefully evaluate the membership of the sysadmin role; granting sysadmin membership to users who do not require it represents a potential security risk.
- If the public role in msdb is enabled for the SQL Server Agent proxy account, remove it from the proxy account. Typically, the proxy account is given elevated privileges, so someone using the public role in msdb, which has fewer privileges, could take advantage of a higher privilege level.
- The error log files for SQL Server can become very big, due to repetitive errors or information messages. If the error logs are growing because of frequent error conditions, this will cause stability and availability problems. Examine the error log to find out why it is growing; if it is expected information, cycle the error logs frequently and configure more error logs to be retained.
- Do not install SQL Server on a computer that is a domain controller; installing a SQL Server failover cluster instance on a domain controller is not supported.
- If SQL Server uses the default values for SQL Server memory configuration (the -g parameter) and is also configured to use CLR integration, it may encounter various out-of-memory conditions and associated performance problems. To configure enough memory to handle CLR integration and other needs, use the -g startup parameter.
- To reduce contention in the tempdb database, increase the number of tempdb data files and configure the required startup trace flags. Under a heavy load, the tempdb database can become a single point of contention and affect concurrency and performance.
- Use AES 128-bit and greater to create symmetric keys for data encryption; if AES is not supported by the version of your operating system, use 3DES. Do not create symmetric keys in the databases shipped with SQL Server, such as the master, msdb, model, and tempdb databases.
- Running the SQL Server Agent service/FDHOST Launcher service under a highly privileged account represents a potential security risk. Use a specific low-privilege user account or domain account to run the SQL Server Agent service/FDHOST Launcher service.
- Avoid using unsupported .NET Framework assemblies. If you must use them, always update them in the database when you update them in the GAC.
- If there are databases whose collation is different from model, and therefore different from the collation used by the tempdb database, there might be collation-conflict errors and unexpected behavior when using different collations to compare data in a user database against the tempdb database. To resolve this, rebuild the user database to match the model database collation, or specify COLLATE when creating temp tables.
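For the max degree of parallelism point above, a minimal sketch of the sp_configure route:
-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8; -- 8 or less, per the recommendation
RECONFIGURE;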
Tips for Advanced MS SQL Server Developers
Posted: November 5, 2012 in Database AdministratorTags: Best way to write Stored Procedure in SQL Server, How to optimise Stored Procedure, SQL Server 2005, SQL Server 2008, SQL Server 2012, SQL Server Stored Procedure writting tips, Stored Procedure optimization, Tips for Advanced MS SQL Server Developers, Writting optimized Stored procedure in SQL Server
Hi Guys, here are some basic tips on how to write optimized stored procedures.
- It is always better to use the table owner in a query, like SELECT col1, col2, …, coln FROM dbo.Tablename. Due to this, the SQL query optimizer does not have to decide whether to retrieve from dbo.Tablename or another owner's table, and recompilation is avoided.
- Always use the TRUNCATE TABLE statement instead of the DELETE clause when you need to delete all rows from a table. It is much faster than a DELETE statement without any conditions: TRUNCATE TABLE always releases all the space occupied by that table's data and indexes, without logging the individual row deletes.
- Don't use the "sp_" prefix in a stored procedure name, as the "sp_" prefix is reserved for system stored procedures. Any stored procedure that has the "sp_" prefix causes an extra lookup in the master database; if a stored procedure uses the same name in both the user database and a system database, the stored procedure in the user database will execute, but SQL Server will first look for the procedure in the Resource database and only then in the user database (for SQL Server 2005/2008/2012), causing an extra burden on the server.
- To get the number of rows in a table, never use a "SELECT COUNT(*)" statement; there is a much faster way to achieve this goal: SELECT rows FROM sysindexes WHERE id = OBJECT_ID('Table_Name') AND indid < 2
- Include a "SET NOCOUNT ON" statement in your stored procedures to greatly reduce network traffic (the sketch after this list pulls these tips together).
- Try to use table variables instead of temporary tables: temp tables can cause stored procedures to recompile (from SQL 2005 onwards, using a temporary table does not always cause recompilation, but adding rows to temporary tables may), whereas table variables were designed specifically to guard against stored procedure recompiles during execution. If the result set does not contain a huge number of records, stick with a table variable; otherwise a temp table has its advantages. There is a misconception that temp tables always use the tempdb database while table variables do not: table variables also use tempdb after a certain size, so be careful to use them only with data volumes that are not very large, otherwise you can easily bring the server down.
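A minimal sketch pulling the tips above together; the procedure and table names are hypothetical:
CREATE PROCEDURE dbo.usp_GetEmployeesByCity -- no 'sp_' prefix, owner-qualified name
@City VARCHAR(50)
AS
BEGIN
SET NOCOUNT ON; -- suppress row-count messages to cut network traffic
SELECT EmpName, City
FROM dbo.EmployeeMaster -- owner-qualified to avoid the extra lookup
WHERE City = @City;
END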
Stored Procedure best practices
Posted: November 1, 2012 in Database AdministratorTags: Optimize stored procedure, software, SQL Server, SQL Server 2008 Stored Procedure Optimization, Stored Procedure, The best practices used for SQL Server's Stored Procedure optimization, Tunning Stored Procedure
Here are the best practices used for stored procedures.
Stored Procedure Tuning
You can also visit here http://youtu.be/A-17sMgQqnM
Difference between Data file shrinking and Log file shrinking
Posted: October 29, 2012 in Database AdministratorTags: Difference between Data file shrinking and Log file shrinking, SQL Server LDF file shrink, SQL Server MDF file shrink
You may be confused trying to find the answer to this question: after lots of googling, you may read in some places that shrinking data files is good, and in other places that it is bad.
A shrink operation on a data file tries to move the database page nearest the end of the file down toward the start of the file. This creates “empty” space at the end of the data file that can be returned to the OS. In other words, the data file is made physically smaller.
A shrink operation on a transaction log file, on the other hand, doesn’t move anything around—it simply removes the empty portion of the transaction log at the end of the file, as long as the transaction log records are not being retained for any reason. If the operation is successful, the log file is made physically smaller.
The confusion comes from the side effects of the two operations, and when they should be performed.
People are advised (or decide) to shrink data files to reclaim space. It may be that their index maintenance job causes the data files to grow, or their drive is getting full, and the natural reaction is to reclaim some of this “wasted” space. However, it is likely this space will be needed again and often it’s better to leave the remaining free space available to the data file rather than repeatedly shrink and auto-grow a file.
Shrinking a data file should be a very rare operation because it has a nasty side effect: it causes massive index fragmentation, which may affect query performance. On this topic, Paul S. Randal's blog post "Why You Should Not Shrink Your Data Files" includes a simple script that shows the effect.
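If a log shrink is genuinely needed after a one-off log-growing event (not as routine maintenance), here is a minimal sketch; the logical file name and target size are hypothetical:
-- Find the logical file names for the current database (size is in 8-KB pages)
SELECT name, type_desc, size FROM sys.database_files;
-- Shrink the transaction log file down to, for example, 1024 MB
DBCC SHRINKFILE (MyDatabase_log, 1024);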
Deleting files older than n days using command FORFILES
Posted: October 29, 2012 in Database AdministratorTags: batch processing, batch processing with forfiles, command line utitlity forfiles, deleting old bak files in sql server, deleting old files, forfiles
It is always challenging and energizing to know a little command-line magic, as it is sometimes a much easier and more powerful way to process commands. Here, I would like to show this kind of fun operation with the Forfiles command in Windows Server 2003/Windows Vista/Windows 7/Windows 8 or higher. Forfiles can process files based on names, file extensions, and file life cycle. It is very easy to find files matching given search criteria, all files in a directory (including sub-directories) that are older than N days, or all documents in a specific folder that have been changed since a specific date.
The forfiles command can be used with processing options to delete those files, or to create a list of all files that match the search criteria.
Forfiles selects and executes a command on a file or set of files; this command enables batch processing of files. In conjunction with a SQL Server Agent job, we can easily use this command to delete old database backup files and remove dependencies on items such as SQL Server maintenance plans, the xp_cmdshell extended stored procedure, or VBScript objects. For example, the following command searches the directory c:\test and all its subdirectories for files with the .BAK extension that are older than 10 days:
FORFILES /P C:\TEST\ /M *.BAK /S /D -10
The parameter /P followed by a directory defines the starting directory, /S includes subdirectories in the search, /M filters the files based on the search mask, and /D selects files by date or a time span.
For more details visit http://technet.microsoft.com/en-us/library/cc753551(v=ws.10).aspx#BKMK_examples
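To actually delete the matched files rather than just list them, the /C option runs a command for each file; a sketch under the same assumptions as the example above:
FORFILES /P C:\TEST\ /M *.BAK /S /D -10 /C "cmd /c del @path"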
Find the number of CPU’s through T-SQL query
Posted: October 26, 2012 in Database AdministratorTags: Exec xp_msver, Find the number of CPU’s through T-SQL query, SQL Server number of CPU
One day I had to find out how many CPUs were working on my DB server. Before knowing the options below, I would always take a Remote Desktop session to the server and learn about the CPUs, and much more, from the information provided by Task Manager.
Option 1: Using the extended stored procedure xp_msver; in its output, the processor count appears at index 16:
Exec xp_msver
or we can ask for the processor count directly:
Exec xp_msver 'processorcount'
Option 2: Using 'Set NUMBER_OF_PROCESSORS', which runs at the command prompt:
Exec xp_cmdshell 'Set NUMBER_OF_PROCESSORS'
Option 3: From registry values, using the extended stored procedure xp_regread:
Declare @value varchar(1000)
EXEC master..xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SYSTEM\CurrentControlSet\Control\Session Manager\Environment\',
@value_name='NUMBER_OF_PROCESSORS',
@value=@value OUTPUT
Select @value As NUMBER_OF_PROCESSORS
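A further option worth noting (not in the original list above): from SQL Server 2005 onwards, the sys.dm_os_sys_info DMV exposes the logical processor count directly.
-- Logical CPUs visible to this SQL Server instance
SELECT cpu_count FROM sys.dm_os_sys_info;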
Identify blocking in SQL Server
Posted: October 26, 2012 in Database AdministratorTags: find Blocking Query, Identify blocking in SQL Server, Identify Blocking Query
In all versions of SQL Server we face locking and blocking problems, and the very first step is to identify the blocking and the blocked SPID. When a user, or the DBA, notices that a particular process seems to hang or is not processing properly, simply checking for blocking in the applicable database goes a long way toward winning the battle. Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock; this forces the second connection to wait, blocked, until the first connection completes. With this being said, locking is a natural occurrence in SQL Server in order to maintain data integrity.
In SQL Server, there are many ways to find out the blocking and blocked SPID/process. Some options are:
- sp_who2 system stored procedure
- sys.dm_exec_requests DMV (see the sketch after this list)
- sys.dm_os_waiting_tasks DMV
- SQL Server Management Studio Activity Monitor
- SQL Server Management Studio Reports
- SQL Server Profiler
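A minimal sketch of the sys.dm_exec_requests route from the list above:
-- Requests that are currently blocked, and the session blocking each of them
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;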
Find running Queries completion time
Posted: October 26, 2012 in Database AdministratorTags: dm_exec_sessions, Find Process Completion time, Find Running queries completion time, Get Running query session, know your execute session in sql server 2008, sys.dm_exec_sessions
For a DBA's day-to-day activities, it sometimes happens that someone executes a query on the production server and, after a long wait, comes to the DBA and asks a very funny question: I have executed this query and it has been running for a long time; now tell me how much time it will take to complete.
Sometimes this is very difficult to answer, but using the sys.dm_exec_requests DMV, we can somehow find out the answer, like:
SELECT percent_complete, start_time, status, command,
estimated_completion_time/1000/60 As [Minutes to Completion],
total_elapsed_time/1000/60 As [Minutes Elapsed], wait_type, last_wait_type
FROM sys.dm_exec_requests
ORDER BY [Minutes to Completion] desc
Compare the extra Columns between Tables
Posted: October 25, 2012 in Database AdministratorTags: column mismatch in 2 tables, compare 2 tables in sql server, Compare the extra Columns between Tables, Compare two tables columns, DMV to find table structure differencef, find extra columns in two tables, table column compare
Sometimes it is needed to check whether two tables have the same columns.
To demonstrate this, first create two tables, EMP1 and EMP2.
Suppose table EMP1 is as below:
USE [VirendraTest]
GO
CREATE TABLE [dbo].[Emp1]
( [ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nchar](10) NULL,
[Basic] [numeric](18, 2) NULL
) ON [PRIMARY]
GO
and table EMP2 is as below,
USE [VirendraTest]
GO
CREATE TABLE [dbo].[Emp2]
( [ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](10) NULL,
[Basic] [numeric](18, 2) NULL,
[Age] [int]
) ON [PRIMARY]
GO
Now, to find the extra columns in EMP2 compared to EMP1, we can use the query below:
Select c2.table_name, c2.COLUMN_NAME from [INFORMATION_SCHEMA].[COLUMNS] c2
where c2.table_name='Emp2' and c2.COLUMN_NAME not in (select column_name from [INFORMATION_SCHEMA].[COLUMNS] where table_name='emp1')
Compare Data types between two tables
Posted: October 25, 2012 in Database AdministratorTags: compare 2 tables in sql server, Compare two tables data type, Datatype diff in 2 tables, DMV to find table diff, find mismatched data types in two tables, table data type compare
It is required in our daily practice, sometimes, to compare two or more tables to find out whether the tables are the same in terms of their columns' data types.
Below is the query to compare data types between two tables; for this, we can use the [INFORMATION_SCHEMA].[COLUMNS] system view to verify and compare the information.
Let's say we have two tables, EMP1 and EMP2, as below.
Suppose table EMP1 is as below:
USE [VirendraTest]
GO
CREATE TABLE [dbo].[Emp1]
( [ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [nchar](10) NULL,
[Basic] [numeric](18, 2) NULL
) ON [PRIMARY]
GO
and table EMP2 is as below,
USE [VirendraTest]
GO
CREATE TABLE [dbo].[Emp2]
( [ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](10) NULL,
[Basic] [numeric](18, 2) NULL,
[Age] [int]
) ON [PRIMARY]
GO
Now, to find the columns with mismatched data types, use the query below:
Select c1.table_name, c1.COLUMN_NAME, c1.DATA_TYPE, c2.table_name, c2.DATA_TYPE, c2.COLUMN_NAME
from [INFORMATION_SCHEMA].[COLUMNS] c1
Left join [INFORMATION_SCHEMA].[COLUMNS] c2 on c1.COLUMN_NAME=c2.COLUMN_NAME
where c1.TABLE_NAME='emp1' and c2.TABLE_NAME='emp2' and c1.data_type<>c2.DATA_TYPE
I got this error when I was working with one of my DBAs. After lots of basic investigation, I came to know that this error occurs when someone tries to attach a database (the physical MDF/LDF files) from a folder, or copied from elsewhere, that was previously attached to a different SQL instance (or to the same instance while the startup account was different), and that instance's startup account (the SQL Server service account) is not the same as your current startup account.
To resolve this issue, you have to grant the SQL Server startup account full access to the folders/files containing the physical files. After granting that specific service account full access to the folders/files, you will be able to attach the DB files.
Maintenance plan delete error 547
Posted: October 16, 2012 in Database AdministratorTags: Maintenance Plan Delete Error, Maintenance plan delete error 547, Microsoft SQL Server Error 547, Sql Server Delete Maintenance Plan Error, SQL Server Error: 547
It sometimes happens that we delete a maintenance plan from SSMS and, for some reason, it reports success or an error while only some of its objects/steps actually get deleted. You may then still see the maintenance plan's job name in Job Activity Monitor / the jobs listing, and when you try to delete that job, it throws an error like:
Drop failed for Job ‘XXXXXXXXXXXXX’. (Microsoft.SqlServer.Smo)The DELETE statement conflicted with the REFERENCE constraint “FK_subplan_job_id(xxx)”. The conflict occurred in database “msdb”, table “dbo.sysmaintplan_subplans”, column ‘job_id’.The statement has been terminated. (Microsoft SQL Server, Error: 547)
We can resolve this error as
- Find which plan needs to be deleted:
SELECT * FROM sysmaintplan_plans
Using the above query, you can get the plan ID of the maintenance plan you want to delete. Then just delete the entries from the three tables below (a consolidated sketch follows) and be free of the problem!
DELETE FROM sysmaintplan_log WHERE plan_id = 'Plan ID of the maintenance plan to be deleted'
DELETE FROM sysmaintplan_subplans WHERE plan_id = 'Plan ID of the maintenance plan to be deleted'
DELETE FROM sysmaintplan_plans WHERE id = 'Plan ID of the maintenance plan to be deleted'
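Putting the steps together, a minimal sketch that looks up the plan ID by name and then removes the rows from all three msdb tables (the plan name 'DailyBackupPlan' is a placeholder):
USE msdb
GO
DECLARE @plan_id uniqueidentifier
-- Substitute your own plan name below
SELECT @plan_id = id FROM sysmaintplan_plans WHERE name = 'DailyBackupPlan'
DELETE FROM sysmaintplan_log WHERE plan_id = @plan_id
DELETE FROM sysmaintplan_subplans WHERE plan_id = @plan_id
DELETE FROM sysmaintplan_plans WHERE id = @plan_id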
Different ways to find out SQL Server’s Version
Posted: September 6, 2012 in Database AdministratorTags: Sql Server 2005 version, SQL Server 2008 version, SQL Server Verson
There are many options to find out SQL Server's version; some are as below:
-- Option : 1
SELECT @@VERSION
-- Option : 2
SELECT SERVERPROPERTY('ProductVersion') 'Version', SERVERPROPERTY('ProductLevel') 'Service Pack', SERVERPROPERTY('Edition') 'Edition'
-- Option : 3
sp_server_info
-- Option : 4
xp_msver
-- Option : 5
sp_MSgetversion
-- Option : 6
SELECT @@MicrosoftVersion / power(2,24)
-- Option : 7
-- Finding values from the Registry - For SQL Server 2005
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='Version'
GO
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='Edition'
GO
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='SP'
GO
-- Finding values from the Registry - For SQL Server 2008
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='Version'
GO
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='Edition'
GO
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='SP'
GO
Configure SQL Server Memory
Posted: August 31, 2012 in Database AdministratorTags: MAX Server Memory, Memory Setting in SQL Server, Min Server Memory, SQL Server Memory Configuration
-- Step 1: Check your configuration values at the server level
SP_CONFIGURE 'show advanced options',1
Reconfigure with Override
go
-- Step 2: Check the config_value of the "min server memory (MB)" & "max server memory (MB)" options
SP_CONFIGURE 'min server memory (MB)'
SP_CONFIGURE 'max server memory (MB)'
-- Step 3: Now you can set your preferred amount of memory as min & max according to your server's actual RAM. If it is a pure DB box, I prefer an 80:20 SQL-to-OS ratio (my opinion, not any standard); see the worked example after these steps.
SP_CONFIGURE 'min server memory (MB)',1024
Reconfigure with Override
go
SP_CONFIGURE 'max server memory (MB)',<80% of your total RAM in MB>
Reconfigure with Override
go
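As a worked example, on a dedicated DB box with 32 GB of RAM (an assumption for illustration), 80% works out to 32768 * 0.8 = roughly 26214 MB:
-- 32 GB total RAM: leave ~20% for the OS, cap SQL Server at ~26214 MB
SP_CONFIGURE 'max server memory (MB)', 26214
Reconfigure with Override
go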
SQL Server Network Interface/Protocol
Posted: August 31, 2012 in Database AdministratorTags: SQL Server Network Interface, SQL Server Network Protocol, TDS, SQL Server Endpoint, Shared Memory in Sql Server, Shared Memory, TCP/IP Protocol, VIA Protocol, SQL Server Configuration Manager
The SQL Server Network Interface (SNI) is a protocol layer that establishes the network connection between the client and the server. It consists of a set of APIs that are used by both the database engine and the SQL Server Native Client (SNAC).
SQL Server has support for the following protocols:
➤
Shared memory: Simple and fast, shared memory is the default protocol used to connect from a client running on the same computer as SQL Server. It can only be used locally, has no configurable properties, and is always tried first when connecting from the local machine. The limitation is that the client applications must reside on the same machine where SQL Server is installed.
➤
TCP/IP: TCP/IP is the most popular and widely used protocol throughout the industry today. It communicates across interconnected networks, is a standard for routing network traffic, and offers advanced security features. It enables you to connect to SQL Server by specifying an IP address and a port number. Typically, this happens automatically when you specify an instance to connect to: your internal name resolution system resolves the hostname part of the instance name to an IP address, and either you connect to the default TCP port number 1433 for default instances or the SQL Browser service finds the right port for a named instance using UDP port 1434 (see the sqlcmd example after this list).
➤
Named Pipes: This protocol can be used when your application and SQL Server reside on a local area network. A part of memory is used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer). TCP/IP and Named Pipes are comparable protocols in the architectures in which they can be used. Named Pipes was developed for local area networks (LANs), but it can be inefficient across slower networks such as wide area networks (WANs). To use Named Pipes you first need to enable it in SQL Server Configuration Manager (if you'll be connecting remotely) and then create a SQL Server alias, which connects to the server using Named Pipes as the protocol. Named Pipes uses TCP port 445, so ensure that the port is open on any firewalls between the two computers, including the Windows Firewall.
➤
VIA: Virtual Interface Adapter is a protocol that enables high-performance communications between two systems. It requires specialized hardware at both ends and a dedicated connection. Like Named Pipes, to use the VIA protocol you first need to enable it in SQL Server Configuration Manager and then create a SQL Server alias that connects to the server using VIA as the protocol. This protocol has been deprecated and will no longer be available in future versions of SQL Server.
Regardless of the network protocol used, once the connection is established, SNI creates a secure connection to a TDS endpoint.
TDS Endpoint: the Tabular Data Stream (TDS) endpoint, also known as the TSQL endpoint.
For details about SQL Server endpoints, a very well-explained write-up is http://www.tobuku.com/docs/SQL%20Server%20Endpoint.pdf; thanks to the writer.
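To illustrate the TCP/IP point above: you can bypass name resolution and the SQL Browser entirely by giving sqlcmd an explicit address and port (the IP address and the named-instance port below are placeholders):
REM Default instance on the standard port 1433, using Windows authentication
sqlcmd -S tcp:192.168.1.10,1433 -E
REM Named instance by its specific port, skipping the SQL Browser lookup
sqlcmd -S tcp:192.168.1.10,50123 -E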
Paging Result Sets with OFFSET and FETCH
Posted: August 29, 2012 in Database AdministratorTags: Alternative of ROW_NUMBAER, OFFSET and FETCH SQL Server 2012, Paging Result Sets with OFFSET and FETCH, Substitute of ROW_NUMBER with SQL Server 2012
After a very long wait for built-in paging of result sets, Microsoft finally provides, with SQL Server 2012, a very useful feature: OFFSET and FETCH.
I found a very explanatory blog post by Mr. Andy Hayes about it; thanks, Andy, for the same.
http://www.dbadiaries.com/new-t-sql-features-in-sql-server-2012-offset-and-fetch/
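A minimal sketch of the syntax, paging 10 rows per page (the table dbo.Emp1 from the earlier example and the page number are assumptions for illustration):
-- Return page 3 (rows 21-30), ordered by ID; ORDER BY is mandatory for OFFSET/FETCH
SELECT ID, Name, Basic
FROM dbo.Emp1
ORDER BY ID
OFFSET 20 ROWS
FETCH NEXT 10 ROWS ONLY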
Various ways to find particular text within Stored Procedures
Posted: August 29, 2012 in Database AdministratorTags: Search Text in Stored Procedure ( SP ), SQL Server Find Text in Stored Procedure, Stored Procedure Text, Text Within SQL Server Stored Procedure, Text within Stored Procedure
So many times we forget a SQL statement or some text we had written in a stored procedure and want to find that stored procedure's name / statement / word / text.
Here are some options with which we can easily find the wanted text/string within all of the current database's stored procedures.
Option 1:
SELECT * FROM sys.sql_modules WHERE definition LIKE '%Your Text Here%'
Option 2:
SELECT * FROM sys.syscomments WHERE text LIKE '%Your Text Here%'
Option 3:
SELECT * FROM syscomments WHERE text LIKE '%Your Text Here%'
Option 4:
SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE '%Your Text Here%'
Note that syscomments stores long definitions split across multiple 4000-character rows, so a LIKE search there can miss text that spans a row boundary, and ROUTINE_DEFINITION in INFORMATION_SCHEMA.ROUTINES is truncated to the first 4000 characters; sys.sql_modules.definition does not have these limitations.
Avoid SELECT * FROM TABLENAME
Posted: August 27, 2012 in Database AdministratorTags: Avoid Select *, Problem with Select *, Select * in Stored Procedure, Select * in View
Using SELECT * FROM TableName in your PRODUCTION code or stored procedures is problematic because if a column is added or deleted, you can expect a sure bug.
It is generally advisable not to use SELECT * because the definition of the table may change over time. For example, a table may have a column added, removed, renamed, or repositioned relative to other columns. Such changes can result in SQL queries returning data that is never used, attempting to implicitly access data that does not exist, implicitly relying upon a column with a given name, or relying upon the relative ordering of the table's columns. While explicitly listing only the columns one needs does not prevent problems when a column is removed or renamed, it does prevent problems when a column is added or repositioned, and it makes it easier to find a problematic query when one can search on the text of the column name.
Let's see an example here.
---- Lets create a Test Table.
CREATE TABLE [dbo].[VirendraTest](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NOT NULL,
[Age] [int] NULL
)
GO
---- Lets Insert some data in this Table.
INSERT INTO VirendraTest VALUES ('Naresh',40), ('AShok',42), ('Ashish',27), ('Paul',35), ('Smitha',29)
---- Now Create a View on This Table
CREATE VIEW VWTest
AS
SELECT * FROM VirendraTest
go
---- See Data as per View
SELECT * FROM VWTest
---- Now, Add a new Column in the Table / Reposition the Column order / Drop Any Column etc.
ALTER TABLE VirendraTest ADD Gender varchar(1) DEFAULT 'M'
---- See Data as per View, ha ha ha ... it is not as per the Table.
SELECT * FROM VWTest
---- But in the Table it is as
SELECT * FROM VirendraTest
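Instead of dropping and re-creating the view, one way to bring it back in line with the table is the built-in sp_refreshview procedure, which rebinds the view's metadata (a quick sketch against the VWTest view above):
---- Refresh the view metadata so SELECT * picks up the new Gender column
EXEC sp_refreshview 'VWTest'
---- Now the view output matches the table again
SELECT * FROM VWTest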
Difference between SP_WHO & SP_WHO2
Posted: March 28, 2012 in Database AdministratorTags: Diff between SP_WHO and SP_WHO2, Difference between SP_WHO and SP_WHO2, SP_who, SP_WHO2, SQL Server - Difference between SP_WHO and SP_WHO2
First of all, SP_WHO is a documented MS SQL Server command while SP_WHO2 is an undocumented one. SP_WHO2 provides more detailed information about the server's running processes, with columns such as
SPID, Status, Login, HostName, BlkBy, DBName, Command, CPUTime, DiskIO, LastBatch, ProgramName, SPID, REQUESTID
while SP_WHO provides details as
spid, ecid, status, loginame, hostname, blk, dbname, cmd, request_id
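Both procedures also accept an optional parameter to narrow the output; a quick usage sketch (SPID 57 is a placeholder):
-- All sessions
EXEC sp_who2
-- Only sessions currently executing something
EXEC sp_who2 'active'
-- A single session by its SPID
EXEC sp_who2 57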
Know Your Backup Status
Posted: March 11, 2012 in Database AdministratorTags: Backup Information, Backup Progress Details, Backup Status, Database Backup, How to know SQL Server Database Backup Status, Know your Backup Status, SQL Server 2005 Database Status, SQL Server 2008 Database Backup Status
Hi guys, as a DBA it has happened so many times: you start your database backup from a remote PC/server, or it may be your maintenance plan backing up the database(s), and you want to know the backup's progress status.
It is possible using a system SP / DMV; the query for the same is as below:
SELECT A.NAME 'DATABASE NAME', B.TOTAL_ELAPSED_TIME/60000 AS 'RUNNING SINCE (Minutes)',
B.ESTIMATED_COMPLETION_TIME/60000 AS 'REMAINING TIME (Minutes)', B.PERCENT_COMPLETE AS 'BACKUP % COMPLETED', B.COMMAND 'COMMAND TYPE', (SELECT TEXT FROM sys.dm_exec_sql_text(B.SQL_HANDLE)) AS
'COMMAND TEXT', UPPER(B.STATUS) STATUS, B.SESSION_ID 'SPID', B.START_TIME, B.BLOCKING_SESSION_ID, B.LAST_WAIT_TYPE, B.READS, B.LOGICAL_READS, B.WRITES FROM
MASTER..SYSDATABASES A, sys.dm_exec_requests B
WHERE A.DBID = B.DATABASE_ID AND B.COMMAND LIKE '%BACKUP%'
ORDER BY percent_complete DESC, B.TOTAL_ELAPSED_TIME/60000 DESC
The output lists, for each running backup, the database name, elapsed and estimated remaining minutes, the percent completed, and the command text.
SQL Error : Exclusive access could not be obtained because the database is in use
Posted: March 7, 2012 in Database AdministratorTags: Database is in use when restoring, SQL Error : Exclusive access could not be obtained because the database is in use, SQL Restore error
SQL Error : Exclusive access could not be obtained because the database is in use
One day I got this error. My log shipping had stopped due to a system fault; after a lot of exercise and googling, I finally decided to take a full backup and restore it to the secondary box. But by accident, in between the R&D, I stopped the log shipping restoration job; because of this the database was not accessible / was in suspect mode, and the process was in a KILLED/ROLLBACK state in the SP_WHO2 output.
The reason was very simple: the DB was still being used by the killed/rolling-back process, which is why the DB was in use.
There are many options to kill this rollback operation, like killing the associated KPID at the OS level or restarting the server. (Here I was not afraid of any data loss, because I was going to restore over it on this box; my backup was ready.)
Another way is to put the DB in single-user mode using syntax like:
USE MASTER
ALTER DATABASE yourdatabasename SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Then set it back to multi-user, and start the restore process:
USE MASTER
ALTER DATABASE yourdatabasename SET MULTI_USER WITH ROLLBACK IMMEDIATE
Other options are:
- Find all the active connections, kill them all, and restore the database (see the sketch after this list).
- Take the database offline, which will close all the open connections to this database, bring it back online, and restore the database.
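For the first option, below is a minimal sketch that builds and runs KILL commands for every session using the database ('yourdatabasename' is a placeholder; run it from a connection that is not itself using that database):
DECLARE @kill varchar(8000)
SET @kill = ''
-- Concatenate a KILL command for each spid attached to the target database
SELECT @kill = @kill + 'KILL ' + CONVERT(varchar(5), spid) + ';'
FROM master..sysprocesses
WHERE dbid = DB_ID('yourdatabasename')
EXEC (@kill)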
All Database User objects information
Posted: March 1, 2012 in Database AdministratorTags: All Database Objects information, Stored Procedure Information, Table Information, User Objects
Hi guys, one day I had to find out all databases' objects like user tables, procedures & views.
Below is the code to find out object information like ObjectName, Type, Create_date, Modify_date, Rows, Column_Count, Data_Size, index_size etc.
/* Create Table to Collect Data from All DATABASES */
create table #VirendraTest(
TableName varchar(50),
ObjectType varchar(50),
CreateDate datetime,
ModifyDate datetime,
RowsCount bigint,
ColumnCount int,
DataSize varchar(50),
IndexSize varchar(50))
/* Process Start to get data from Individual Databases */
Declare @sqltxt varchar(50)
Declare @dbname varchar(50)
DECLARE VirendraCur CURSOR FOR SELECT Name from Sys.databases where database_id > 4
open VirendraCur
fetch next from VirendraCur into @dbname
WHILE @@FETCH_STATUS = 0
begin
set @sqltxt = 'USE ' + '[' + @dbname + ']'
print @sqltxt
exec (@sqltxt)
CREATE TABLE #Virendratemp(
table_name sysname,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))
INSERT #Virendratemp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
insert #VirendraTest
Select aa.*, bb.col_count, bb.Data_Size, bb.index_size from (
Select ob.Name, ob.type_desc Type, ob.create_date, ob.modify_date, pt.rows from sys.objects ob
left join sys.partitions pt on ob.object_id = pt.object_id
where ob.type in ('U','V','P')) aa
left join (SELECT a.Table_Name,
a.Row_count,
COUNT(*) AS Col_count,
a.Data_size, a.index_size
FROM #Virendratemp a
INNER JOIN information_schema.columns b
ON a.table_name collate database_default
= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size, a.index_size) bb
on aa.name = bb.table_name
drop table #Virendratemp
fetch next from VirendraCur into @dbname
end
close VirendraCur
deallocate VirendraCur
/* Display Collected Data */
Select * from #VirendraTest
/* Drop Temp Table */
Drop Table #VirendraTest
SQL Server tries to use statistics to react intelligently in its query optimization. Knowing the number of records, density of pages, histogram, or available indexes helps the SQL Server optimizer guess more accurately how it can best retrieve data. A common misconception is that if you have indexes, SQL Server will always use those indexes to retrieve the records in your query; in reality the optimizer decides, based on statistics, whether using an index is cheaper than scanning.
Below is a query using DMVs to find statistics details about the current database's tables.
SELECT OBJECT_NAME(A.OBJECT_ID) AS Object_Name,
A.name AS index_name, A.type_desc AS Index_Type,
STATS_DATE(A.OBJECT_ID, index_id) AS StatsUpdated,
DATEDIFF(d, STATS_DATE(A.OBJECT_ID, index_id), getdate()) DaysOld
FROM sys.indexes A
INNER JOIN sys.tables B ON A.object_id = B.object_id
WHERE A.name IS NOT NULL
ORDER BY DATEDIFF(d, STATS_DATE(A.OBJECT_ID, index_id), getdate()) DESC
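If the query shows statistics that are many days old on a frequently changing table, they can be refreshed manually; a minimal sketch (the table name dbo.VirendraTest is reused from the earlier example):
-- Rebuild statistics for one table, scanning all rows for maximum accuracy
UPDATE STATISTICS dbo.VirendraTest WITH FULLSCAN
-- Or refresh all out-of-date statistics in the current database
EXEC sp_updatestats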
Move TEMPDB : SQL Server 2005 / SQL Server 2008
Posted: November 6, 2011 in Database Administrator, SQL GeneralTags: Change TempDB location, Move TempDB, TempDB
As we know, by default tempdb is placed on the same drive where the SQL Server instance is installed. Sometimes the disk it lives on runs very tight on space, or its disk I/O is very slow or not performing up to the mark. The solution is to move tempdb to a spacious and faster drive to improve performance. Follow these steps:
1. Open SQL Server Management Studio.
Click on Start -> Programs -> Microsoft SQL Server 2005 / Microsoft SQL Server 2008 -> SQL Server Management Studio
2. Connect to the desired server.
3. Click the New Query button.
4. Copy and paste the following into the query pane to check the current TEMPDB location:
SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO
5. Now, suppose you want to move TEMPDB to D:\SQLData; simply copy and paste the following in the query pane.
USE master
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\templog.ldf');
GO
6. Click Execute; the messages should confirm that the file changes will take effect at the next startup.
7. Now we have to restart the SQL Server service, via Start -> Run -> Services.msc
Or
Start -> Programs -> Microsoft SQL Server 2005 / Microsoft SQL Server 2008 -> Configuration Tools -> SQL Server Configuration Manager
8. Stop and start SQL Server (MSSQLSERVER).
9. Go back to SQL Server Management Studio and open a new query pane.
10. Copy and paste the following to verify that tempdb has moved to the new location:
SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO
11. Click Execute.
12. In the physical_name column, you should see the path to the new location.
-- 1) From the TempDB creation date using sys.databases
SELECT create_date from sys.Databases where database_id = 2
-- 2) From the TempDB creation date using sys.sysdatabases
SELECT crdate from sys.sysDatabases where DBID = 2
-- 3) From sysprocesses
SELECT login_time FROM master..sysprocesses WHERE cmd = 'LAZY WRITER'
-- 4) From sysprocesses
SELECT min(login_time) from master..sysprocesses
-- 5) From sysprocesses
SELECT min(login_time) from sys.sysprocesses
-- 6) From the sp_readerrorlog SP
sp_readerrorlog 0, 1, 'Copyright (c)'
-- 7) From dm_os_sys_info
SELECT sqlserver_start_time FROM sys.dm_os_sys_info
-- 8) From dm_exec_sessions
SELECT login_time FROM sys.dm_exec_sessions WHERE session_id = 1
-- 9) From traces
SELECT start_time from sys.traces where is_default = 1
-- 10) From sp_helpdb
sp_helpdb 'tempdb'
Space used in all tables of a Database
Posted: January 7, 2011 in SQL GeneralTags: Column Count, Row Count, Space Used by Table, SQL Server, Table Space
USE VirendraTest
GO
CREATE TABLE #Virendratemp (
table_name sysname ,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))
SET NOCOUNT ON
INSERT #Virendratemp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
SELECT a.Table_Name,
a.Row_count,
COUNT(*) AS Col_count,
a.Data_size
FROM #Virendratemp a
INNER JOIN information_schema.columns b
ON a.table_name collate database_default
= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC
DROP TABLE #Virendratemp
GO
SQL Server’s CodeName History
Posted: December 27, 2010 in SQL GeneralTags: Denali, Hydra, Katmai, Kilimanjaro, Liberty, Plato, Shiloh, Sphinx, SQL CodeName, SQL History, SQL95, Yukon
• 1993 – SQL Server 4.21 for Windows NT
• 1995 – SQL Server 6.0, codenamed SQL95
• 1996 – SQL Server 6.5, codenamed Hydra
• 1999 – SQL Server 7.0, codenamed Sphinx
• 1999 – SQL Server 7.0 OLAP, codenamed Plato
• 2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
• 2003 – SQL Server 2000 64-bit, codenamed Liberty
• 2005 – SQL Server 2005, codenamed Yukon (version 9.0)
• 2008 – SQL Server 2008, codenamed Katmai (version 10.0)
• 2010 – SQL Server 2008 R2, Codenamed Kilimanjaro (aka KJ)
• Next – SQL Server 2011, Codenamed Denali