Saturday 29 June 2013

Differences between SQL Server 2008 and 2012

Here are some differences between SQL Server 2008/R2 and 2012.


1. SQL Server 2008: Exceptions are handled using TRY…CATCH. SQL Server 2012: New exception handling with THROW.

2. SQL Server 2008: High-availability features such as log shipping, replication, mirroring and clustering. SQL Server 2012: New AlwaysOn feature introduced in addition to the 2008 features.

3. SQL Server 2008: Web development and business intelligence enhanced with BI features and Excel PowerPivot, adding more drill-through and KPIs. SQL Server 2012: In addition to the SQL Server 2008 features, web development and business intelligence are further enhanced with BI features and Excel PowerPivot, adding more drill-through and KPIs.

4. SQL Server 2008: Not supported on Windows Server Core. SQL Server 2012: Supported on Windows Server Core.

5. SQL Server 2008: Columnstore indexes not supported. SQL Server 2012: New columnstore indexes feature that is completely unique to SQL Server. They are a special type of read-only index designed to be used with data-warehouse queries. Basically, data is grouped and stored in a flat, compressed column index, greatly reducing I/O and memory utilization on large queries.

6. SQL Server 2008: PowerShell supported. SQL Server 2012: Enhanced PowerShell support.

7. SQL Server 2008: Distributed Replay feature not available. SQL Server 2012: Distributed Replay allows you to capture a workload on a production server and replay it on another machine. This way, changes in underlying schemas, service packs, or hardware can be tested under production conditions.

8. SQL Server 2008: Power View not available in the BI features. SQL Server 2012: Power View is a fairly powerful self-service BI toolkit that allows users to create mash-ups of BI reports from all over the enterprise.

9. SQL Server 2008: EXECUTE … WITH RECOMPILE. SQL Server 2012: Enhanced EXECUTE with many options, such as WITH RESULT SETS.

10. SQL Server 2008: The maximum number of concurrent connections is 32,767. SQL Server 2012: Unlimited concurrent connections.

11. SQL Server 2008: Uses 27-bit precision for spatial calculations. SQL Server 2012: Uses 48-bit precision for spatial calculations.

12. SQL Server 2008: TRY_CONVERT() and FORMAT() functions are not available. SQL Server 2012: TRY_CONVERT() and FORMAT() functions are newly included.

13. SQL Server 2008: The ORDER BY clause does not have OFFSET / FETCH options for paging. SQL Server 2012: The ORDER BY clause has OFFSET / FETCH options for paging.

14. SQL Server 2008 is code-named Katmai. SQL Server 2012 is code-named Denali.

15. SQL Server 2008: Audit is an Enterprise-only feature, available only in the Enterprise, Evaluation and Developer editions. SQL Server 2012: Support for server auditing is expanded to include all editions of SQL Server.

16. SQL Server 2008: SEQUENCE is not available. SQL Server 2012: SEQUENCE is included; a sequence is a user-defined object that generates a sequence of numbers.

17. SQL Server 2008: Full-Text Search does not allow us to search and index data stored in extended properties or metadata. SQL Server 2012: Full-Text Search has been enhanced to allow searching and indexing of data stored in extended properties or metadata. Consider a PDF document that has "properties" filled in, like Name, Type, Folder path, Size, Date Created, etc. In the newest release of SQL Server, this data can be indexed and searched along with the data in the document itself. The data does have to be exposed to work, but it's possible now.

18. SQL Server 2008: Analysis Services does not have the BI Semantic Model (BISM) concept. SQL Server 2012: Analysis Services includes the new BI Semantic Model (BISM), a 3-layer model that includes Data Model, Business Logic and Data Access.

19. SQL Server 2008: BCP does not support the -K option. SQL Server 2012: The BCP and SQLCMD utilities are enhanced with the -K option, which allows specifying read-only access to a secondary replica in an AlwaysOn availability group.

20. SQL Server 2008: sys.dm_exec_query_stats. SQL Server 2012: sys.dm_exec_query_stats adds four columns to help troubleshoot long-running queries. You can use the total_rows, min_rows, max_rows and last_rows aggregate row-count columns to separate queries that return a large number of rows from problematic queries that may be missing an index or have a bad query plan.
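As a quick illustration of items 1 and 13 above, here is a small sketch (table and column names are illustrative, not from the original post):

BEGIN TRY
    SELECT 1 / 0;
END TRY
BEGIN CATCH
    THROW;  -- SQL Server 2012: re-raises the original error with its original state
END CATCH;

-- SQL Server 2012: paging with OFFSET / FETCH (skip 10 rows, return the next 10)
SELECT EmpName, BasicSalary
FROM dbo.Employee
ORDER BY EmpName
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;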

Friday 28 June 2013

SQL IMP Misc..

Difference between HIGH AVAILABILITY and DISASTER RECOVERY


It is often seen that people confuse High Availability and Disaster Recovery (DR), or think they are the same thing. A high-availability solution does not mean that you are prepared for a disaster. High availability covers hardware or system-related failures, whereas disaster recovery is used in the event of a catastrophic failure due to environmental factors. Although some of the high-availability options may help us when designing our DR strategy, they are not the be-all and end-all solution.
The goal of high availability is to provide an uninterrupted user experience with zero data loss. According to Microsoft’s SQL Server Books Online, “A high-availability solution masks the effects of a hardware or software failure and maintains the availability of applications so that the perceived downtime for users is minimized”.
Disaster recovery is generally part of a larger process known as Business Continuity Planning (BCP), which plans for any IT or non-IT eventuality. A server disaster recovery plan helps in undertaking proper preventive, detective and corrective measures to mitigate any server-related disaster. For more details, see the very good book "The Shortcut Guide to Untangling the Differences Between High Availability and Disaster Recovery" by Richard Siddaway. As per my understanding, the explanation below is the best and quickest way to remember this:
High Availability is @ LAN and Disaster Recovery is @ WAN
Temporary stored procedures are like normal stored procedures but, as their name suggests, have a short-term existence. There are two types of temporary stored procedures: local (private) and global. Analogous to temporary tables, they are created with the # and ## prefixes added to the procedure name: # denotes a local temporary stored procedure, while ## denotes a global temporary stored procedure. These procedures do not exist after SQL Server is shut down.
Temporary stored procedures are useful when connecting to earlier versions of SQL Server that do not support the reuse of execution plans for Transact-SQL statements or batches. Any connection can execute a global temporary stored procedure. A global temporary stored procedure exists until the connection used by the user who created it is closed and any currently executing versions of the procedure on other connections have completed. Once the creating connection is closed, no further execution of the global temporary stored procedure is allowed; only those connections that have already started executing it are allowed to complete. If a stored procedure not prefixed with # or ## is created directly in the tempdb database, the stored procedure is automatically deleted when SQL Server is shut down, because tempdb is re-created every time SQL Server is started. Procedures created directly in tempdb do, however, exist even after the creating connection is terminated.
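A small sketch of both kinds (procedure names and bodies are illustrative):

-- Local temporary procedure: visible only to the creating connection
CREATE PROCEDURE #usp_LocalDemo
AS
    SELECT 'Hello from a local temporary procedure';
GO
EXEC #usp_LocalDemo;
GO

-- Global temporary procedure: visible to all connections
CREATE PROCEDURE ##usp_GlobalDemo
AS
    SELECT 'Hello from a global temporary procedure';
GO
EXEC ##usp_GlobalDemo;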
Data type conversion is a very frequent and common practice when manipulating or reproducing data, and we regularly use the conversion functions CONVERT and CAST in our daily routine.
Sometimes it is necessary to convert a datetime value to a string, or a date string to datetime.
Let's see an example of datetime-to-character-string conversion below.
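A small sketch of typical conversions (the style numbers are standard CONVERT styles; values are illustrative):

SELECT CONVERT(VARCHAR(30), GETDATE(), 113) AS [dd mon yyyy hh:mi:ss:mmm];
SELECT CONVERT(VARCHAR(10), GETDATE(), 103) AS [dd/mm/yyyy];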

Suppose it is required to convert a date string to DATE/DATETIME; usually we use something like the following.
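Typically something like this (the literal is illustrative):

SELECT CONVERT(DATETIME, '2013-06-29 10:30:15.123') AS ConvertedValue;
-- or
SELECT CAST('2013-06-29 10:30:15.123' AS DATETIME) AS CastValue;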

But if the milliseconds part has more than 3 digits, it will give the error message:
Msg 241, Level 16, State 1, Line 1

Conversion failed when converting date and/or time from character string.


To resolve this, the new data type DATETIME2 was introduced from SQL Server 2008 onwards, and the beauty of DATETIME2 is that it supports fractional-second precision of up to 7 digits. Let's see examples.
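A small sketch (the 7-digit fractional-seconds literal is illustrative):

-- Fails with Msg 241: DATETIME accepts at most 3 fractional digits
-- SELECT CONVERT(DATETIME, '2013-06-29 10:30:15.1234567');

-- Works: DATETIME2 supports up to 7 fractional digits
SELECT CONVERT(DATETIME2, '2013-06-29 10:30:15.1234567') AS Datetime2Value;
SELECT CAST('2013-06-29 10:30:15.1234567' AS DATETIME2(7)) AS Datetime2Cast;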


SQL Server 2014 CTP1 was released on 26-Jun-2013, highly focused on data warehousing and business intelligence (BI) enhancements made possible through new in-memory capabilities built into the core relational database management system (RDBMS). As memory prices have fallen dramatically, 64-bit architectures have become more common and usage of multicore servers has increased, Microsoft has sought to tailor SQL Server to take advantage of these trends.
Hekaton is a Greek term for “factor of 100.” The aspirational goal of the team was to see 100 times performance acceleration levels. Hekaton also is a giant mythical creature, as well as a Dominican thrash-metal band, for what it’s worth.
In-Memory OLTP (formerly known by the code name "Hekaton") is a new database engine component, fully integrated into SQL Server. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP allows OLTP workloads to achieve remarkable improvements in performance and reductions in processing time. Tables can be declared as 'memory optimized' to take advantage of In-Memory OLTP's capabilities. In-Memory OLTP tables are fully transactional and can be accessed using Transact-SQL. Transact-SQL stored procedures can be compiled into machine code for further performance improvements if all the tables they reference are In-Memory OLTP tables. The engine is designed for high concurrency, and blocking is minimal. "Memory-optimized tables are stored completely differently than disk-based tables and these new data structures allow the data to be accessed and processed much more efficiently."
New buffer pool extension support to non-volatile memory such as solid state drives (SSDs) will increase performance by extending SQL Server in-memory buffer pool to SSDs for faster paging.
Here is a very good explanation in Mr. Brent Ozar's article on this: http://www.brentozar.com/archive/2013/06/almost-everything-you-need-to-know-about-the-next-version-of-sql-server/
The new xVelocity ColumnStore feature provides in-memory capabilities for data-warehousing workloads that result in dramatic improvements in query performance, load speed, and scan rate, while significantly reducing resource utilization (i.e., I/O, disk and memory footprint). The new ColumnStore complements the existing xVelocity ColumnStore index, providing higher compression, richer query support and updateability of the ColumnStore, giving us even faster load speed, query performance and concurrency, and an even lower price per terabyte.
Extending Memory to SSDs: Seamlessly and transparently integrates solid-state storage into SQL Server by using SSDs as an extension to the database buffer pool, allowing more in-memory processing and reducing disk IO.
Enhanced High Availability: new AlwaysOn features. Availability groups now support up to 8 secondary replicas that remain available for reads at all times, even in the presence of network failures. Failover Cluster Instances now support Windows Cluster Shared Volumes, improving the utilization of shared storage and increasing failover resiliency. Finally, various supportability enhancements make AlwaysOn easier to use.
Improved Online Database Operations: includes single partition online index rebuild and managing lock priority for table partition switch, greatly increasing enterprise application availability by reducing maintenance downtime impact.
For more details please refer to the SQL_Server_Hekaton_CTP1_White_Paper, and for the SQL Server 2014 CTP1 software click http://technet.microsoft.com/en-US/evalcenter/dn205290.aspx (the Microsoft SQL Server 2014 CTP1 release is only available in the x64 architecture).
I appreciate your time, keep posting your comment. I will be very happy to review and reply to your comments/Questions/Doubts as soon as I can.
CHECKPOINT: As per MSDN/BOL, a checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk, and also records information about the checkpoint in the transaction log. For performance reasons, the Database Engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Checkpoint is the SQL engine system process that writes all dirty pages to disk for the current database. The benefit of the checkpoint process is that it minimizes the time needed during a later recovery by creating a point at which all dirty pages are guaranteed to have been written to disk.

When does a CHECKPOINT happen?


  • A CHECKPOINT statement is explicitly executed. A checkpoint occurs in the current database for the connection.
  • A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the bulk-logged recovery model.
  • Database files have been added or removed by using ALTER DATABASE.
  • An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
  • A database backup is taken. Before a backup, the Database Engine performs a checkpoint so that all the changes to database pages (dirty pages) are contained in the backup.
  • The server is stopped using any of the following methods, each of which causes a checkpoint:
    • Using the SHUTDOWN statement,
    • Stopping the SQL Server service through SQL Server Configuration Manager, SSMS, net stop mssqlserver, or Control Panel -> Services -> SQL Server service.
    • Note that SHUTDOWN WITH NOWAIT does not execute a checkpoint on the database.
  • The recovery interval server configuration is reached. This happens when the active portion of the log exceeds the size that the server could recover in the amount of time defined in the server configuration option (recovery interval).
  • The transaction log is 70% full and the database is in log-truncation mode. (The database is in log-truncation mode when it is in the simple recovery model and after a BACKUP statement has been executed.)
  • An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
Note: long-running uncommitted transactions increase recovery time for all types of checkpoints.
The time required to perform a checkpoint depends directly on the number of dirty pages that the checkpoint must write.
We can monitor checkpoint I/O activity using Performance Monitor by looking at the “Checkpoint pages/sec” counter in the SQL Server:Buffer Manager object.
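We can also look at this from T-SQL; a small sketch (the same counter is visible in Performance Monitor):

CHECKPOINT  -- force a checkpoint in the current database

-- Inspect the Buffer Manager checkpoint counter
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Checkpoint pages/sec';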



Here is a script to find out a database's file growth details (either the data file, MDF, or the log file, LDF). For this we can use two tables in MSDB: backupfile, which contains one row for each data or log file of a database, and backupset, which contains a row for each backup set.

Script is as below.

SELECT BS.database_name,
       BF.logical_name,
       BF.file_type,
       BF.file_size / (1024 * 1024) AS FileSize_MB,
       BS.backup_finish_date,
       BF.physical_name
FROM msdb.dbo.backupfile BF
INNER JOIN msdb.dbo.backupset BS
        ON BS.backup_set_id = BF.backup_set_id
WHERE BS.database_name = 'VirendraTest'   -- database name
  AND BF.logical_name = 'VirendraTest'    -- DB's logical file name
ORDER BY BS.backup_finish_date DESC
Hi, as we frequently use Database Mail for SQL Server's email sending, commonly we use it to send emails for job status, maintenance task info, operator alerts, etc. Sometimes we send reports like database disk usage, server dashboards, memory consumption, etc. as PDF or Excel attachments. By default, the attachment size limit is 1000000 bytes. If an attachment is larger than this, the error 'The Attachment Size exceeds the allowable limit' will occur. Below is the step-by-step solution for this using SSMS.
  1. Open SSMS and connect to the SQL Server instance, then go to Management -> right-click Database Mail -> choose Configure Database Mail.
  2. Select the "View or change system parameters" radio button, then click Next.
  3. Change the Maximum File Size (Bytes) value from 1000000 to whatever your requirement is, then click Next and Finish.
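The same change can also be scripted; a small sketch using msdb's Database Mail configuration procedure (the new size value is just an example):

EXEC msdb.dbo.sysmail_configure_sp
     @parameter_name  = 'MaxFileSize',
     @parameter_value = '4000000';   -- new attachment limit, in bytes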


As we all know how to send mail from SQL Server using Database Mail. Here is a very simple trick with which we can send a DO NOT REPLY mail message to recipients; below are the steps.
First, create a mail profile in Database Mail named VirendraTest:
Click on your SQL Server instance -> Management -> Database Mail -> right-click and select Configure Database Mail

It will open the Database Mail Configuration Wizard; click Next.

Choose "Set up Database Mail by performing the following tasks", then Next.

If you are configuring Database Mail for the first time, it will ask whether you want to enable the Database Mail feature; click Yes.

Give the profile name (here the profile name is VirendraTest) and click the SMTP account's Add button; it will show the New Database Mail Account dialog.

Give the account name and account description here, then set the SMTP part:
Email Address – donotreply@abcxyz.com (a non-existent email account on your mail server)
Server Name – your mail server's SMTP IP or name; Port – 25 (or your port)
SMTP Authentication – Anonymous
Then click OK.

Now click OK.
Select your default profile.
Click Next, Next & Finish.
You can check your mail profile as:
SELECT * FROM msdb..sysmail_profile
SELECT * FROM msdb..sysmail_profileaccount
Now your profile is created as VirendraTest.
Now you can send mail using sp_send_dbmail; sample code is below.

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'VirendraTest',                         -- your profile name
     @recipients   = 'yaduvanshi.v@gmail.com',               -- whom you want to send the mail to
     @subject      = 'Test Mail !',                          -- mail subject
     @body         = 'Dear Virendra, This is a Test Mail.',  -- mail body
     @body_format  = 'HTML'                                  -- mail body format
All done. The recipient will get mail from your Database Mail profile's configured account email ID, here donotreply@abcxyz.com. If the recipient replies to this mail, he/she will get a bounce-back/failure mail notification in his/her inbox.
Note: As far as I know, this is somewhat of a loophole; we can send mail from anyone's mail ID, or from a non-existent mail ID.

Let's enjoy SQL Server mailing!
Here is an SSMS tip with which we can customize our new query windows.
For SQL Server 2008 (64-bit):
Find the file SQLFile.SQL, which is usually located at C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql\SQLFILE.SQL, and modify the file as per your wish.

For SQL Server 2008 (32-bit):
Find the file SQLFile.SQL, which is usually located at C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql\SQLFILE.SQL, and modify the file as per your wish.

For SQL Server 2012 (64-bit and 32-bit):
Find the file SQLFile.SQL, which is usually located at C:\Program Files\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql, and modify the file as per your wish.
Here is a sketch of what a modified SQLFILE.SQL might contain:
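(The content below is illustrative; the template is entirely up to you.)

/*
    Author      : <your name>
    Created     : <date>
    Description : <purpose of this script>
*/
SET NOCOUNT ON;

-- Write your query below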

Source : Mr. Amit Bansal’s Blog



Hi,
Here is the SSRS tutorial: SSRS Tutorial – Part 1 (click here to download, or right-click -> Save target as…).
Thanks to the various resources available on the internet & BOL.
Please share your views on this.

As a DBA, it may sometimes happen that you forget to set a notification on a job. Here is a very simple script to find out which jobs have notifications and which do not.
SELECT SJ.name AS Job_Name,
       CASE SJ.enabled WHEN 1 THEN 'Enabled' ELSE 'Disabled' END AS Enable_Status,
       SJ.description AS Job_Description,
       SUSER_SNAME(SJ.owner_sid) AS Job_Owner,
       SJ.date_created AS Job_Created_Date,
       SJ.date_modified AS Job_Modified_Date,
       SP.name AS Operator_Name,
       SP.email_address AS Emails,
       SP.last_email_date AS Last_Email_Date,
       SP.last_email_time AS Last_Email_Time
FROM msdb..sysjobs SJ
LEFT JOIN msdb..sysoperators SP ON SP.id = SJ.notify_email_operator_id
Sometimes it happens that a developer tries to access a database and does not succeed, and after a little R&D he/she comes to know that he/she has no access rights for that particular DB. For this, SQL Server's very useful function HAS_DBACCESS can be used to get the list of all databases with the access details of the currently logged-in user. Here is a simple script to get the details:
SELECT name, CASE HAS_DBACCESS(name) WHEN 0 THEN 'No Access' ELSE 'Access' END AS DB_Access_Status FROM sys.databases


Note: if any database is in RECOVERY mode, it will show as No Access for that DB.
Hi Guys,
Hi Guys,
After quite a long time I am back on my blog. Here is my observation about many developers and DBAs: when working with multiple servers in SSMS (SQL Server Management Studio), they sometimes get confused about which server they are running a particular query on. For this, SSMS provides a very attractive feature where we can easily identify servers by setting a server color. Commonly, developers and DBAs register the servers they use in their day-to-day work in SSMS, or directly provide the server name/IP and then the user name/password to connect to a particular server.
A) When registering a server, we can set the color as follows:
1) Click on your server group -> New Server Registration…
2) In New Server Registration… -> click Connection Properties -> check Use Custom Color and select your desired color.

B) When connecting to a server directly:
1) Click Connect Server
2) Click Options -> click Connection Properties -> check Use Custom Color and select your desired color.

Now, when you connect to your server, the query window's status bar color will be your selected color, and you can easily identify your server when you are working on multiple servers and frequently changing query windows.
SSMS has a lot of customization features. By default, when we copy a query result, it does not include the column headers. To enable this feature, just change the SSMS settings as follows:

  1. From the Tools menu, select "Options…"
  2. On the left, select "Query Results" and expand it
  3. Select "SQL Server" and click on "Results to Grid"
  4. Now, on the right, check "Include column headers when copying or saving results"
Yesterday one of my team members came to me with an error:

SPID:         XX
Process ID:   XXXX
Msg 3624, Level 20, State 1, Line 1
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.

When we ran DBCC CHECKDB, the results were as follows:
DBCC CHECKDB WITH NO_INFOMSGS reported no problems.
DBCC CHECKDB reported 0 errors, 0 consistency errors.

After analyzing the queries he was using, we came to know that 4-5 joins were used, and in two tables the compared columns had different data types, INT and BIGINT. After changing INT to BIGINT our problem got resolved.

2012 in review

Posted: December 31, 2012 in Database Administrator
The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:
600 people reached the top of Mt. Everest in 2012. This blog got about 2,100 views in 2012. If every person who reached the top of Mt. Everest viewed this blog, it would have taken 4 years to get that many views.
Click here to see the complete report.
As per my personal observation/suggestion, it is much better to reinstall the server with the new name and then detach the DBs from the OLD instance and attach them to the NEW instance, because an instance name is associated with so many things: performance counters, local groups for service start and file ACLs, service names for SQL and related (Agent, Full-Text) services, SQL Browser visibility, service master key encryption, various full-text settings, registry keys, the 'local' linked server, etc. Nevertheless, we can change the name by following the steps below.

-- For a default instance

sp_dropserver 'old_name'
go

sp_addserver 'new_name', 'local'
go

-- For a named instance

sp_dropserver 'ServerName\old_Instance_name'
go

sp_addserver 'ServerName\New_Instance_Name', 'local'
go

Verify the SQL Server instance configuration by running the queries below:
sp_helpserver
Select @@SERVERNAME
Then restart SQL Server with the following commands at a command prompt:

net stop MSSQLServerServiceName
net start MSSQLServerServiceName
Here is a script with which we can easily find the space used in the MDF and LDF files.
SELECT DB_NAME() AS [DATABASE NAME],
       fileid AS FILEID,
       CASE WHEN groupid = 0 THEN 'LOG FILE' ELSE 'DATA FILE' END AS FILE_TYPE,
       name AS LOGICAL_NAME,
       filename AS PHYSICAL_PATH,
       CONVERT(INT, ROUND((size * 1.000) / 128.000, 0)) AS FILE_SIZE_MB,
       CONVERT(INT, ROUND(FILEPROPERTY(name, 'SpaceUsed') / 128.000, 0)) AS SPACE_USED_MB,
       CONVERT(INT, ROUND((size - FILEPROPERTY(name, 'SpaceUsed')) / 128.000, 0)) AS SPACE_LEFT_MB
FROM sysfiles;
In case of any of the following:

  • Adding a new column to the middle of the table
  • Dropping a column
  • Changing column nullability
  • Changing the order of the columns
  • Changing the data type of a column
SSMS's table designer may throw the following error:

Saving changes is not permitted. The changes that you have made require the following tables to be dropped and re-created. You have either made changes to a table that can’t be re-created or enabled the option Prevent saving changes that require the table to be re-created.

This problem occurs when the "Prevent saving changes that require the table re-creation" option is enabled. To resolve it, follow these steps:

Click the Tools menu, click Options, expand Designers, and then click Table and Database Designers. Select or clear the "Prevent saving changes that require the table to be re-created" check box.
It is generally understood that @@SERVERNAME and SERVERPROPERTY('SERVERNAME') will return the same value. But once I got different values; that is, both
Select @@SERVERNAME
Select SERVERPROPERTY('SERVERNAME')
were returning different names.
I got the answer in BOL:
Although the @@SERVERNAME function and the SERVERNAME property of the SERVERPROPERTY function may return strings with similar formats, the information can be different. The SERVERNAME property automatically reports changes in the network name of the computer. In contrast, @@SERVERNAME does not report such changes; @@SERVERNAME reports changes made to the local server name using the sp_addserver or sp_dropserver stored procedure.

To resolve the issue, just follow these steps:

-- To see the server name

sp_helpserver

-- Remove the server from the list of known servers on the local instance of SQL Server

sp_dropserver 'WRONG_SERVER_NAME'

-- Add the server to the local instance of SQL Server

sp_addserver 'REAL_SERVER_NAME', 'local'

A successful upgrade to SQL Server 2008 R2/2012 should be smooth and trouble-free. To achieve that smooth transition, we must devote sufficient planning to the upgrade and match it to the complexity of the database application; otherwise we risk costly and stressful errors and upgrade problems. Like all IT projects, planning for every contingency/eventuality and then testing our plan gives us confidence that we will succeed. Any negligence increases the chances of running into difficulties that can derail and delay the upgrade.
Upgrade scenarios will be as complex as our underlying applications and instances of SQL Server. Some scenarios within an environment might be simple, other scenarios complex. Start planning by analyzing the upgrade requirements, including reviewing upgrade strategies, understanding the SQL Server hardware and software requirements for the specific version, and discovering any blocking problems caused by backward-compatibility issues.
There are two upgrade scenarios: the in-place upgrade and the side-by-side upgrade.

In-Place Upgrade : By using an in-place upgrade strategy, the SQL Server 2008 R2 Setup program directly replaces an instance of SQL Server 2000 or SQL Server 2005 with a new instance of SQL Server 2008 R2 on the same x86 or x64 platform. This kind of upgrade is called “in-place” because the upgraded instance of SQL Server 2000 or SQL Server 2005 is actually replaced by the new instance of SQL Server 2008 R2. You do not have to copy database-related data from the older instance to SQL Server 2008 R2 because the old data files are automatically converted to the new format. When the process is complete, the old instance of SQL Server 2000 or SQL Server 2005 is removed from the server, with only the backups that you retained being able to restore it to its previous state.
Note: If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method.

Side by side upgrade : 
In a side-by-side upgrade, instead of directly replacing the older instance of SQL Server, required database and component data is transferred from an instance of SQL Server 2000 or SQL Server 2005 to a separate instance of SQL Server 2008 R2. It is called a “side-by-side” method because the new instance of SQL Server 2008 R2 runs alongside the legacy instance of SQL Server 2000 or SQL Server 2005, on the same server or on a different server.
There are two important options when you use the side-by-side upgrade method:
  • You can transfer data and components to an instance of SQL Server 2008 R2 that is located on a different physical server or on a different virtual machine, or
  • You can transfer data and components to an instance of SQL Server 2008 R2 on the same physical server
    Both options let you run the new instance of SQL Server 2008 R2 alongside the legacy instance of SQL Server 2000 or SQL Server 2005. Typically, after the upgraded instance is accepted and moved into production, you can remove the older instance.
A side-by-side upgrade to a new server offers the best of both worlds: You can take advantage of a new and potentially more powerful server and platform, but the legacy server remains as a fallback if you encounter a problem. This method could also potentially reduce an upgrade downtime by letting you have the new server and instances tested, up, and running without affecting a current server and its workloads. You can test and address hardware or software problems encountered in bringing the new server online, without downtime of the legacy system. Although you would have to find a way to export data out of the new system to go back to the old system, rolling back to the legacy system would still be less time-consuming than a full SQL Server reinstall and restoring the databases, which a failed in-place upgrade would require. The downside of a side-by-side upgrade is that increased manual interventions are required, so it might take more preparation time by an upgrade/operations team. However, the benefits of this degree of control can frequently be worth the additional effort.
Source: SQL Server 2008 Upgrade Technical Reference Guide
SQL Server 2012 – SQL Server Data Tools is available as a free component of the SQL Server platform for all SQL Server users. It provides an integrated environment for database developers to carry out all their database design work for any SQL Server platform (both on and off premise) within Visual Studio. Database developers can use the SQL Server Object Explorer in Visual Studio to easily create or edit database objects and data, or execute queries.
Developers will also appreciate the familiar VS tools we bring to database development, specifically: code navigation, IntelliSense, language support that parallels what is available for C# and VB, platform-specific validation, debugging and declarative editing in the T-SQL editor, as well as a visual Table Designer for both database projects and online database instances.
SQL Server Data Tools (SSDT) is the final name for the product formerly known as SQL Server Developer Tools, Code-Named “Juneau”. SSDT provides a modern database development experience for the SQL Server and SQL Azure Database Developer. As the supported SQL Azure development platform, SSDT will be regularly updated online to ensure that it keeps pace with the latest SQL Azure features.


After lots of googling, I got a very nice link to a large collection of free Microsoft eBooks, including SharePoint, Visual Studio, Windows Phone, Windows 8, Office 365, Office 2010, SQL Server 2012, Azure, and more. Here is the link.

Happy Reading!
The link http://technet.microsoft.com/en-us/library/cc917589.aspx is a complete listing of SQL Server system error details. We can also get the same using a query:
SELECT * FROM sys.messages

It's a connectivity problem with a previously opened session in SQL Server. Sometimes a user can get a SQL Server error such as:
Msg 10054, Level 20, State 0, Line 0
A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)

Some possible causes are as below

  1. The server has been restarted; this closes the existing connections.
  2. Someone has killed the SPID that is being used.
  3. Network Failure
  4. SQL Services restarted
RESOLUTION: Just hit F5 (or ALT + X) to re-run your query. SSMS will determine that it is no longer connected, prompt you to reconnect, and then put you back into the same database.
As we know, SEQUENCE is a new feature introduced in SQL Server 2012. A sequence works similarly to an IDENTITY value, but where an IDENTITY value is scoped to a specific column in a specific table, the sequence object is scoped to the entire database and controlled by application code. This allows us to synchronize seed values across multiple tables that reference one another in a parent-child relationship. Or, with a little bit of code, we can also take control of whether the next value is used or saved for the next INSERT should the current transaction be rolled back, whereas an IDENTITY value is lost and creates a gap when an INSERT is rolled back. Here are some key differences between Sequence and Identity:
1) Sequence: A sequence object generates a sequence of numbers just like an identity column, but with the advantage that it is not limited to a single table. The status of the sequence object can be viewed by querying the DMV sys.sequences:
SELECT name, start_value, minimum_value, maximum_value, current_value
FROM sys.sequences
Identity: IDENTITY is table-specific; the system function @@IDENTITY can be used to get the last-inserted identity value.

2) Sequence is an object:
CREATE SEQUENCE MySequesnceName AS INT
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 1000
NO CYCLE
NO CACHE
Identity is a property of a table column:
CREATE TABLE TblIdentityChk
(
    ID INT IDENTITY(1,1),
    CUSTNAME VARCHAR(50)
)

3) Sequence: You can obtain the new value before using it in an INSERT statement; you do not need to insert a row to view the new ID:
SELECT NEXT VALUE FOR MySequesnceName
Identity: You cannot obtain the new value in your application before using it; you need to insert and then get the new ID:
INSERT INTO TblIdentityChk VALUES ('TEST CUSTOMER')
GO
SELECT @@IDENTITY AS 'Identity'
-- OR
SELECT SCOPE_IDENTITY() AS 'Identity'

4) Sequence: You can add or remove a default constraint defined for a column with an expression that generates a new sequence value (an extension to the standard). Identity: You cannot add or remove the property from an existing column.

5) Sequence: You can generate new values in an UPDATE statement:
UPDATE TableName
SET IDD = NEXT VALUE FOR MySequesnceName
Identity: You cannot generate new values in an UPDATE statement when needed, only in INSERT statements.

6) Sequence: You can simply add one property to make it cycle:
ALTER SEQUENCE MySequesnceName
CYCLE;
Identity: You cannot perform a cycle on an identity column; meaning, you cannot restart the counter after a particular interval.

7) Sequence: The semantics of defining ordering in a multi-row insert are very clear using an OVER clause (an extension to the standard), and are even allowed in SELECT INTO statements. Identity: The semantics of defining ordering in a multi-row insert are confusing, and in SELECT INTO statements ordering is actually not guaranteed.

8) Sequence: A sequence can easily be cached by just setting the cache property of the sequence, which also improves performance:
ALTER SEQUENCE [dbo].[Sequence_ID]
CACHE 3;
Identity: You cannot cache the identity column property.

9) Sequence: The sequence is not table-dependent, so you can easily stop using it:
-- Insert with the sequence object
INSERT INTO TblBooks([ID],[BookName])
VALUES (NEXT VALUE FOR MySequesnceName, 'MICROSOFT SQL SERVER 2012')
GO
-- Now insert a second value without the sequence object
INSERT INTO TblBooks([ID],[BookName])
VALUES (2, 'MICROSOFT SQL SERVER 2012')
GO
Identity: You cannot remove the identity column from the table directly.

10) Sequence: You can define minimum and maximum values, whether to allow cycling, and a cache-size option for performance:
ALTER SEQUENCE MySequesnceName
MAXVALUE 2000;
Identity: You cannot define minimum and maximum values, cycling, or caching options.

11) Sequence: You can obtain a whole range of new sequence values in one shot using the stored procedure sp_sequence_get_range (an extension to the standard), letting the application assign the individual values for increased performance. Identity: You cannot obtain a whole range of new identity values in one shot, letting the application assign the individual values.

12) Sequence: You can reseed as well as change the step size:
ALTER SEQUENCE MySequesnceName
RESTART WITH 7
INCREMENT BY 2;
Identity: You can reseed it but cannot change the step size:
DBCC CHECKIDENT (TblIdentityChk, RESEED, 4)
Sometimes we have to do row numbering; for this we can use the IDENTITY function with a SELECT statement, but we can't use it with a plain SELECT command. It is always used in the … INTO … form:
SELECT Col1, Col2, ..., Coln INTO TableName FROM SourceTablename
If it is tried with a plain SELECT command, as in
SELECT IDENTITY(INT,1,1) AS IDD, EMPNAME, DEPT, BASICSALARY FROM Employee
it will give an error:

Msg 177, Level 15, State 1, Line 1
The IDENTITY function can only be used when the SELECT statement has an INTO clause.

So if we want to do it, we have to write the SELECT statement as:
SELECT IDENTITY(INT,1,1) AS IDD, NAME, BASICSALARY INTO #TempEmp FROM Employee
SELECT * FROM #TempEmp
The IDENTITY function takes three mandatory parameters: the data type of the identity column, the starting value, and the increment. With these, it can customize the new column according to our requirement. For example, an integer starting from 1 and incremented by 1 for each row can be specified as:
IDENTITY(INT,1,1)

Note: Suppose an identity column already exists in the source table; this column should not be in the SELECT statement, otherwise it will give an error. Say that in the Employee table EMPID is an IDENTITY column, and we write the SELECT statement as:
SELECT IDENTITY(INT,1,1) AS IDD, EMPID, NAME, BASICSALARY INTO #TempEmp FROM Employee
SELECT * FROM #TempEmp
This will throw the error:
Msg 8108, Level 16, State 1, Line 1
Cannot add identity column, using the SELECT INTO statement, to table ‘#TempEmp’, which already has column ‘EMPID’ that inherits the identity property.


Sometimes it is required to INSERT/UPDATE some bulk data, and a particular constraint works as a barrier, preventing the operation as per the constraint's behavior. We can set the constraints on a particular table/column to not be checked temporarily, and then re-enable the constraints:
ALTER TABLE TableName NOCHECK CONSTRAINT ConstraintName
Then re-enable the constraint using:
ALTER TABLE TableName CHECK CONSTRAINT ConstraintName
If it is required to disable all constraints on all tables of the current database, we can use:
-- Disable all constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
-- Enable all constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
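One point worth noting: a plain CHECK CONSTRAINT re-enables the constraint without re-validating the rows inserted while it was disabled, leaving it untrusted. A small sketch of re-enabling with validation (table and constraint names are placeholders):

-- WITH CHECK re-validates existing rows and marks the constraint trusted again
ALTER TABLE TableName WITH CHECK CHECK CONSTRAINT ConstraintName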
  • Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging
INSERT INTO <target_table> SELECT <columns> FROM <source_table> is the best way to efficiently transfer a large number of rows from one table, such as a staging table, to another table with minimal logging. Minimal logging can improve the performance and will fill the minimum amount of available transaction log space during the transaction. Minimal logging for this statement has the following requirements:
• The recovery model of the database is set to simple or bulk-logged.
• The target table is an empty or nonempty heap.
• The target table is not used in replication.
• The TABLOCK hint is specified for the target table.
Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be minimally logged. Unlike the BULK INSERT statement, which holds a less restrictive Bulk Update lock, INSERT INTO…SELECT with the TABLOCK hint holds an exclusive (X) lock on the table. This means that we cannot insert rows using parallel insert operations.
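A minimal sketch of the pattern described above (table names are illustrative; assumes the database is in the simple or bulk-logged recovery model and the target is a heap not used in replication):

-- Bulk transfer from a staging table with minimal logging
INSERT INTO dbo.TargetHeap WITH (TABLOCK)
SELECT Col1, Col2
FROM dbo.StagingTable;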
  • Using OPENROWSET and BULK to Bulk Import Data
The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with the INSERT statement:
• The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model of the database must be set to simple or bulk-logged, and the target table cannot be used in replication.
• The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint checking.
• The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
• The KEEPDEFAULTS hint allows the insertion of a table column's default value, if any, instead of NULL when the data record lacks a value for the column.
• The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity column in the target table.
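A small sketch of this pattern (the file path, format file, and table name are placeholders):

INSERT INTO dbo.TargetTable WITH (TABLOCK, KEEPIDENTITY)
SELECT *
FROM OPENROWSET(BULK 'C:\Data\export.dat',
                FORMATFILE = 'C:\Data\export.fmt') AS src;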
Here are the Database Engine features that are no longer available in SQL Server 2012.

  • Backup and Restore: BACKUP { DATABASE | LOG } WITH PASSWORD and BACKUP { DATABASE | LOG } WITH MEDIAPASSWORD are discontinued. RESTORE { DATABASE | LOG } WITH [MEDIA]PASSWORD continues to be deprecated. Replacement: none.
  • Backup and Restore: RESTORE { DATABASE | LOG } … WITH DBO_ONLY. Replacement: RESTORE { DATABASE | LOG } … WITH RESTRICTED_USER.
  • Compatibility level: 80 compatibility levels. Replacement: databases must be set to at least compatibility level 90.
  • Configuration options: sp_configure 'user instance timeout' and 'user instances enabled'. Replacement: use the Local Database feature; for more information, see SqlLocalDB Utility.
  • Connection protocols: support for the VIA protocol is discontinued. Replacement: use TCP instead.
  • Database objects: WITH APPEND clause on triggers. Replacement: re-create the whole trigger.
  • Database options: sp_dboption. Replacement: ALTER DATABASE.
  • Mail: SQL Mail. Replacement: use Database Mail; for more information, see Database Mail and Use Database Mail Instead of SQL Mail.
  • Memory management: 32-bit Address Windowing Extensions (AWE) and 32-bit Hot Add memory support. Replacement: use a 64-bit operating system.
  • Metadata: DATABASEPROPERTY. Replacement: DATABASEPROPERTYEX.
  • Programmability: SQL Server Distributed Management Objects (SQL-DMO). Replacement: SQL Server Management Objects (SMO).
  • Query hints: FASTFIRSTROW hint. Replacement: OPTION (FAST n).
  • Remote servers: the ability for users to create new remote servers by using sp_addserver is discontinued; sp_addserver with the 'local' option remains available, and remote servers preserved during upgrade or created by replication can be used. Replacement: replace remote servers by using linked servers.
  • Security: sp_dropalias. Replacement: replace aliases with a combination of user accounts and database roles; use sp_dropalias to remove aliases in upgraded databases.
  • Security: the version parameter of PWDCOMPARE representing a value from a login earlier than SQL Server 2000 is discontinued. Replacement: none.
  • Service Broker programmability in SMO: the Microsoft.SqlServer.Management.Smo.Broker.BrokerPriority class no longer implements the Microsoft.SqlServer.Management.Smo.IObjectPermission interface.
  • SET options: SET DISABLE_DEF_CNST_CHK. Replacement: none.
  • System tables: sys.database_principal_aliases. Replacement: use roles instead of aliases.
  • Transact-SQL: RAISERROR in the format RAISERROR integer 'string' is discontinued. Replacement: rewrite the statement using the current RAISERROR(…) syntax.
  • Transact-SQL syntax: COMPUTE / COMPUTE BY. Replacement: use ROLLUP.
  • Transact-SQL syntax: use of *= and =*. Replacement: use ANSI join syntax; for more information, see FROM (Transact-SQL).
  • XEvents: databases_data_file_size_changed and databases_log_file_size_changed are replaced by the database_file_size_change event; databases_log_file_used_size_changed is replaced by the database_file_size_change event; locks_lock_timeouts_greater_than_0 is replaced by lock_timeout_greater_than_0; locks_lock_timeouts is replaced by lock_timeout.

A data-tier application is an entity that contains all of the database and instance objects used by an application. It provides a single unit for authoring, deploying and managing the data-tier objects, instead of having to manage them separately through scripts. A data-tier application (DAC) allows tighter integration of data-tier development and gives administrators an application-level view of resource usage in their systems.

Data Tier Applications can be developed using Visual Studio, and then be deployed to a database using the Data Tier Deployment Wizard.
You can also Extract a Data-Tier Application for usage in Visual Studio, or easily deploy to a different instance of SQL Server, including a SQL Azure instance.

Let's see a step-by-step example.
A) Extracting a Data-Tier Application

  1. Suppose the first server is named VIRENDRA1_SQL2012 and the DB name is VIRENDRATEST
  2. Now, from the SQL Server Management Studio Object Explorer, right-click VIRENDRATEST on VIRENDRA1_SQL2012
  3. Click Tasks | Extract Data-tier Application
  4. As the Data-Tier Application Wizard displays, click Next
  5. In the Set Properties window, specify C:\VIRENDRATEST.dacpac as the filename under Save to DAC package file
  6. If prompted that the file already exists, replace the existing file.
  7. Click Next
  8. On the Validation and Summary Page, click Next
  9. As the Data Extraction completes, click Finish.

You have now successfully extracted a database as a data-tier application.

B) Deploying a Data Tier Application

  1. Suppose the second server is named VIRENDRA2_SQL2012, and the DB name is VIRENDRATEST
  2. In SQL Server Management Studio Object Explorer, Right-Click VIRENDRA2_SQL2012
  3. Select Deploy Data Tier Application
  4. As the Introduction Page displays, click Next
  5. In the Select Package Window Browse to C:\VIRENDRATEST.dacpac
  6. Select the Data Tier Application Package you previously exported
  7. Click Next
  8. As Package Validation completes, click Next on the Update Configuration Page
  9. Click Next on the Summary Page
  10. As Deployment completes, click Finish
You have now successfully deployed a Data Tier Application to a SQL Server Instance running on a different server.
Here are some new SSMS 2012 features:
  1. Exploring the IntelliSense enhancement – enhanced IntelliSense now provides partial syntax completion, while in previous versions IntelliSense would only display the objects that start with the syntax you started typing.
    Let's see an example.

It shows all objects which contain "TABLE".
  2. At the bottom left corner of the query window, notice there is a zoom control that displays 100%; change this to 150%. The enhanced ability to zoom in the query window enables you to read queries more accurately. It works for the results and messages windows too.
  3. Code snippets are an enhancement to the template feature that was introduced in SQL Server Management Studio. You can also create and insert your own code snippets, or import them from a pre-created snippet library.
As DBAs we have to maintain data dictionaries for all our databases. Here is a script with which we can generate a full view of table structure details.
SELECT a.name AS [Table Name],
       b.name AS [Column Name],
       c.name AS [Data Type],
       b.length AS [Column Length],
       b.isnullable AS [Allow Nulls],
       CASE WHEN d.name IS NULL THEN 0 ELSE 1 END AS [Primary Key],
       CASE WHEN e.parent_object_id IS NULL THEN 0 ELSE 1 END AS [ForeignKey],
       CASE WHEN e.parent_object_id IS NULL THEN '-' ELSE g.name END AS [Reference Table],
       CASE WHEN h.value IS NULL THEN '-' ELSE h.value END AS [Description]
FROM sysobjects AS a
JOIN syscolumns AS b ON a.id = b.id
JOIN systypes AS c ON b.xtype = c.xtype
LEFT JOIN (SELECT so.id, sc.colid, sc.name
           FROM syscolumns sc
           JOIN sysobjects so ON so.id = sc.id
           JOIN sysindexkeys si ON so.id = si.id AND sc.colid = si.colid
           WHERE si.indid = 1) AS d ON a.id = d.id AND b.colid = d.colid
LEFT JOIN sys.foreign_key_columns AS e ON a.id = e.parent_object_id
                                      AND b.colid = e.parent_column_id
LEFT JOIN sys.objects AS g ON e.referenced_object_id = g.object_id
LEFT JOIN sys.extended_properties AS h ON a.id = h.major_id
                                      AND b.colid = h.minor_id
WHERE a.type = 'U'
ORDER BY a.name

We can also get similar (though less complete) details using:
SELECT * FROM INFORMATION_SCHEMA.COLUMNS

Here are some initial tips for writing efficient, cost-effective queries:

  • When using AND, put the condition least likely to be true first. The database system evaluates conditions from left to right, subject to operator precedence. If you have two or more AND operators in a condition, the one to the left is evaluated first, and if and only if it's true is the next condition evaluated. Finally, if that condition is true, then the third condition is evaluated. You can save the database system work, and hence increase speed, by putting the least likely condition first. For example, if you were looking for all employees living in Delhi and born before January 1, 1960, you could write the following query:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE State = 'Delhi' AND DateOfBirth < '1960-01-01'
    The query would work fine; however, the number of people born before that date is very small, whereas plenty of people live in Delhi. This means that State = 'Delhi' will be true a number of times, and the database system will go on to check the second condition, DateOfBirth < '1960-01-01'. If you swap the conditions around, the least likely condition (DateOfBirth < '1960-01-01') is evaluated first:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE DateOfBirth < '1960-01-01' AND State = 'Delhi';
    Because the first condition is mostly going to be false, the second condition will rarely be evaluated, which saves time. It's not a big deal when there are few records, but it is when there are a lot of them.
  • When using OR, put the condition most likely to be true first. Whereas AND needs both sides to be true for the overall condition to be true, OR needs only one side to be true. If the left-hand side is true, there's no need for OR to check the other condition, so you can save time by putting the most likely condition first. Consider the following statement:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE State = 'Delhi' OR DateOfBirth < '1960-01-01';
    If State = 'Delhi' is true, and it is true more often than DateOfBirth < '1960-01-01' is true, then there's no need for the database system to evaluate the other condition, thus saving time.
  • DISTINCT can be faster than GROUP BY. DISTINCT and GROUP BY often do the same thing: limit results to unique rows. However, DISTINCT is often faster with some database systems than GROUP BY. For example, examine the following GROUP BY:
    SELECT MemberId FROM Orders GROUP BY MemberId;
    The GROUP BY could be rewritten using the DISTINCT keyword:
    SELECT DISTINCT MemberId FROM Orders;
  • Use IN with your subqueries. When you write a query similar to the following, the database system has to get all the results from the subquery to make sure that it returns only one value:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID = (SELECT EMPID FROM Orders WHERE OrderId = 2);
    If you rewrite the query using the IN operator, the database system only needs to get results until there's a match with the values returned by the subquery; it doesn't necessarily have to get all the values:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID IN (SELECT EMPID FROM Orders WHERE OrderId = 2);
  • Avoid using SELECT * FROM. Specifying which columns you need has a few advantages, not all of them about efficiency. First, it makes clear which columns you're actually using. If you use SELECT * and actually use only two out of seven of the columns, it's hard to guess from the SQL alone which ones you're using. If you say SELECT FirstName, LastName, … then it's quite obvious which columns you're using. From an efficiency standpoint, specifying columns reduces the amount of data that has to pass between the database and the application connecting to the database. This is especially important where the database is connected over a network.
  • Search on integer columns. If you have a choice, and often you don't, search on integer columns. For example, if you are looking for the member whose name is VIRENDRA YADUVANSHI and whose MemberId is 101, it makes sense to search via MemberId, because an integer search is much faster, as the sketch below shows.
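    A sketch of the two equivalent look-ups against the MemberDetails table used above:
    -- Integer comparison: cheap, and typically backed by the primary key index
    SELECT FirstName, LastName FROM MemberDetails WHERE MemberId = 101;
    -- String comparison: more expensive to evaluate per row
    SELECT FirstName, LastName FROM MemberDetails WHERE FirstName = 'VIRENDRA' AND LastName = 'YADUVANSHI';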
Here is the complete list of Deprecated Database Engine Features in SQL Server 2012. This topic describes the deprecated SQL Server Database Engine features that are still available in SQL Server 2012. These features are scheduled to be removed in a future release of SQL Server. Deprecated features should not be used in new applications.
For more please visit
http://msdn.microsoft.com/en-us/library/ms143729.aspx
It sometimes happens that our DC (main primary DB instance) goes down for some reason, and our log-shipped secondary server at the DR location has to act as the primary server to keep production running smoothly.
In my example, the primary server is named 'PRIMARY', and the database being log shipped to server 'SECONDARY' is VIRENDRATEST.
Now suppose I would like to make my PRIMARY production server the 'SECONDARY' and the 'SECONDARY' the PRIMARY. Below are the steps.
  1. First, be sure all application traffic to the DB server is stopped. Contact the application/production team to STOP/DISABLE the services/applications; after their confirmation, go ahead.
  2. Take a full backup of the PRIMARY server's DB.
  3. Run the LS-Backup job at the PRIMARY server, then disable it.
  4. Run the LS-Copy job at the SECONDARY server; after it completes, disable it.
  5. Run the LS-Restore job at the SECONDARY server; after it completes, disable it.
  6. Now take a log backup at the PRIMARY server using a command such as:
    BACKUP LOG VIRENDRATEST TO DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK' WITH NORECOVERY
  7. Copy the log backup file (from step 6) of PRIMARY to the SECONDARY server manually.
  8. At SECONDARY, restore that last log backup WITH RECOVERY (see the sketch after this list).
  9. Now the SECONDARY server's DB is up for production and the PRIMARY server's DB is in RESTORING mode.
  10. Now configure log shipping again, with SECONDARY as primary and PRIMARY as secondary.
  11. Run the LS-Backup job at the new primary server.
  12. Refresh both server instances; both should be in sync with proper status.
  13. Check whether log shipping is working properly.
  14. Inform the production team to start/enable all applications/services.
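For step 8, a minimal sketch (the file path follows the example in step 6):
RESTORE LOG VIRENDRATEST
FROM DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK'
WITH RECOVERY; -- brings the secondary database online for production use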
I found a very powerful SSMS add-in called SSMSBoost.
The SSMSBoost add-in is designed to improve productivity when working with Microsoft SQL Server in SQL Server Management Studio. The main goal of the project is to speed up the daily tasks of SQL DBAs and T-SQL developers. Please visit http://www.ssmsboost.com/ for more details, download it, and enjoy!
RAISERROR:

RAISERROR raises an error with a user-defined error message. The error message is either created dynamically or stored in the system table sysmessages.
Example: let's create an SP that takes two numbers and raises an error if the second number is 0.
CREATE PROCEDURE SPDivision @FirstNumber int, @SecondNumber int
AS
    IF @SecondNumber = 0
    BEGIN
        SELECT 'Error'
        RAISERROR ('Error: Division by zero.', 16, 1)
    END
    ELSE
        SELECT @FirstNumber / @SecondNumber
Running the SP as EXEC SPDivision 5, 0 will show the error as:

(1 row(s) affected)
Msg 50000, Level 16, State 1, Procedure spDivision, Line 6
Error: Division by zero.

If the error message is used in many SPs, the message can be stored in sysmessages to avoid inconsistencies when the message changes. The system SP sp_addmessage adds a message and sp_dropmessage drops one. Be sure that user-defined error messages have a msg_id greater than or equal to 50001.
Example: to add an error message into sysmessages:

EXEC sp_addmessage 50001, 16, 'Error: Division by zero.'

Now the above SP code becomes:

CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
    IF @SecondNumber = 0
    BEGIN
        SELECT 'Error'
        RAISERROR (50001, 16, 1)
    END
    ELSE
        SELECT @FirstNumber / @SecondNumber
@@ERROR:

The system function @@ERROR returns an error code if an error was encountered in the T-SQL statement immediately preceding it; otherwise it returns 0, meaning success. The value of @@ERROR changes after each T-SQL statement, so the only way to keep track of errors is to store the error code in a variable. If there is no need to keep track of the error but simply to act on it, the value of @@ERROR can be checked right after the T-SQL statement to be tested.
CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
    DECLARE @errnum int
    SELECT @FirstNumber / @SecondNumber
    SET @errnum = @@ERROR
    IF @errnum <> 0
        SELECT 'Error'

EXEC spDivision 5, 0
Msg 8134, Level 16, State 1, Procedure spDivision, Line 4
Divide by zero error encountered.
(1 row(s) affected)

If no RETURN statement is present, the procedure's return code is set automatically to reflect the latest error. Because @@ERROR is reset by every statement, storing it in a variable immediately after the suspect statement is what preserves the error code even when more statements follow.
Locking is a means of not allowing any other transaction to take place when one is already in progress: the data is locked, and no modification can take place until the transaction either succeeds or fails. The lock has to be taken before the data is processed. Multi-versioning is an alternative to locking for controlling concurrency: it provides an easy way to view and modify data, allowing one user to read data while another transaction is still in progress. Multiversion concurrency control is described in some detail in the 1981 paper "Concurrency Control in Distributed Database Systems" by Philip Bernstein and Nathan Goodman.
With multi-versioning, as the name implies, each record in the system might have multiple versions visible to different transactions. When a transaction modifies a record, a new version is written to the database, and a previous version, representing only the difference between the version of the record that was read by the transaction and the new value of the record, is written as a back version of that record.
Read committed isolation using row versioning sits somewhere in between locking and multi-versioning. Under this isolation level, read operations do not acquire locks against the active live data. However, update operations behave the same as under the default read committed isolation level: the selection of rows to update is done by using a blocking scan, where an update lock is taken on the data row as the data values are read.
Snapshot isolation uses purely Multi-Versioning because data that is to be modified is not actually locked in advance, but the data is locked when it is selected for modification. When a data row meets the update criteria, the snapshot transaction verifies that the data has not been modified by another transaction after the snapshot transaction started. If the data has not been modified by another transaction, the snapshot transaction locks the data, updates the data, releases the lock, and moves on. If the data has been modified by another transaction, an update conflict occurs and the snapshot transaction rolls back.
Although locking can be the best concurrency-control choice for applications that require data consistency and inter-transaction protection, it can cause writers to block readers. If a transaction changes a row, another transaction cannot read the row until the writer commits. There are cases where waiting for the change to complete is the correct response; however, there are cases where the previous transactionally consistent state of the row is enough.
As we know blocking occurs when one SPID holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same resource. Typically, the time frame for which the first SPID locks the resource is very small. When it releases the lock, the second connection is free to acquire its own lock on the resource and continue processing. This is normal behavior and may happen many times throughout the course of a day with no noticeable effect on system performance.
By using the read-committed snapshot isolation level introduced in Microsoft SQL Server 2005, we can reduce the blocking and deadlocking issues caused by lock contention. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb.
As per BOL, the read-committed snapshot isolation level provides the following benefits:

  • SELECT statements do not lock data during a read operation. Read transactions do not block write transactions and vice versa.
  • Since SELECT statements do not acquire locks in most cases, the number of locks required by a transaction is reduced, which reduces the system overhead required to manage locks.
  • The possibility of blocking is significantly reduced.
  • SELECT statements can access the last committed value of the row while other transactions are updating the row without getting blocked.
  • The number of blocks and deadlocks is reduced.
  • Fewer lock escalations occur.
A quick and easy way to remember how Read-Committed Snapshot Isolation works:

  • Readers DO NOT block Writers
  • Writers DO NOT block Readers
  • Writers DO block Writers
REMEMBER: TempDB is a critical resource for the performance of Read-Committed Snapshot Isolation. RCSI puts a copy of the records that are being modified in TempDB before the transaction is committed so you need to make sure that TempDB has appropriate space and is placed on fast drives that ideally are isolated away from your user database data and log files.
The syntax for setting the session isolation level is:

SET TRANSACTION ISOLATION LEVEL
{ READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE }
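Note that RCSI itself is enabled at the database level rather than per session. A minimal sketch, assuming a database named MyDatabase (the ALTER needs exclusive access, i.e. no other active connections in the database):

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON; -- reads now use row versions from tempdb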


As we know, to check what's going on at the server level we commonly use sp_who or sp_who2, but neither gives a clear picture of the exact commands currently running: they show the command type, not the full text of the statement that is executing. Here is a DMV-based query to find exactly that.
SELECT SPID = er.session_id
    ,BlkBy = er.blocking_session_id
    ,ElapsedMS = er.total_elapsed_time
    ,CPU = er.cpu_time
    ,IOReads = er.logical_reads + er.reads
    ,IOWrites = er.writes
    ,Executions = ec.execution_count
    ,CommandType = er.command
    ,ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)
    ,SQLStatement = SUBSTRING(qt.text, er.statement_start_offset/2,
        (CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2
              ELSE er.statement_end_offset
         END - er.statement_start_offset)/2)
    ,Status = ses.status
    ,[Login] = ses.login_name
    ,Host = ses.host_name
    ,DBName = DB_NAME(er.database_id)
    ,LastWaitType = er.last_wait_type
    ,StartTime = er.start_time
    ,Protocol = con.net_transport
    ,transaction_isolation = CASE ses.transaction_isolation_level
                                 WHEN 0 THEN 'Unspecified'
                                 WHEN 1 THEN 'Read Uncommitted'
                                 WHEN 2 THEN 'Read Committed'
                                 WHEN 3 THEN 'Repeatable'
                                 WHEN 4 THEN 'Serializable'
                                 WHEN 5 THEN 'Snapshot'
                             END
    ,ConnectionWrites = con.num_writes
    ,ConnectionReads = con.num_reads
    ,ClientAddress = con.client_net_address
    ,Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
LEFT JOIN sys.dm_exec_sessions ses ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con ON con.session_id = ses.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
OUTER APPLY (SELECT execution_count = MAX(cp.usecounts) FROM sys.dm_exec_cached_plans cp
             WHERE cp.plan_handle = er.plan_handle) ec
ORDER BY er.blocking_session_id DESC, er.logical_reads + er.reads DESC, er.session_id
Similarity

Both commands delete only the data of the specified table; neither removes the table's structure.

Difference

  • TRUNCATE is a DDL (data definition language) command whereas DELETE is a DML (data manipulation language) command.
  • Triggers do not fire in case of TRUNCATE, whereas with the DELETE command triggers can fire.
  • TRUNCATE is faster than DELETE, because DELETE logs every deleted row individually in the transaction log, from which the deletion can be rolled back, whereas TRUNCATE logs only the page deallocations and releases the space immediately; once a TRUNCATE is committed, the removed data cannot be retrieved from the log.
  • We can use any condition in a WHERE clause with DELETE, but TRUNCATE does not accept a WHERE clause.
  • If the table is referenced by any foreign key constraint, TRUNCATE will not work.
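A minimal sketch contrasting the two (DemoTable is a throwaway example table):

CREATE TABLE DemoTable (Id INT IDENTITY(1,1), Val VARCHAR(10));
INSERT INTO DemoTable (Val) VALUES ('A'), ('B'), ('C');

DELETE FROM DemoTable WHERE Val = 'A';  -- row-by-row, WHERE allowed, fires DELETE triggers
TRUNCATE TABLE DemoTable;               -- deallocates all pages, no WHERE, resets the IDENTITY seed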
sp_executesql is used for parameterized statements, while EXEC does not support them. Parameterized statements are more secure in terms of SQL injection and allow better reuse of the plan cache, which reduces overhead on the server and boosts performance. sp_executesql executes a string of Transact-SQL in its own self-contained batch: when it runs, SQL Server compiles the code in the string into an execution plan that is separate from the batch that contained sp_executesql and its string.
Suppose a query takes a parameter, e.g. "EmpID". When we run the query with EmpID as 1 and then as 2 via plain string execution, two different cache entries are created (one each for the values 1 and 2).
-- First clear the proc cache
DBCC FREEPROCCACHE
GO

-- Let's check

-- Using EXEC
DECLARE @strQuery nvarchar(1000)
SELECT @strQuery = 'Select * from Employee where EmpID = ''1'''
EXEC (@strQuery)
SELECT @strQuery = 'Select * from Employee where EmpID = ''2'''
EXEC (@strQuery)

-- Using SP_EXECUTESQL
-- DECLARE @strQuery nvarchar(1000)
SELECT @strQuery = 'Select * from Employee where EmpID = @EmpID'
EXEC sp_executesql @strQuery, N'@EmpID int', 1
EXEC sp_executesql @strQuery, N'@EmpID int', 2

-- Let's check the execution count for both
SELECT sqlTxt.text, QS.execution_count FROM sys.dm_exec_query_stats QS
CROSS APPLY (SELECT [text] FROM sys.dm_exec_sql_text(QS.sql_handle)) AS sqlTxt
This shows that for unparameterized (EXEC) queries the cached plan is reused only if we ask for the same ID again, so the cached plan is not of much use. With sp_executesql, i.e. parameterized queries, the cached plan is created once and reused any number of times, similar to a stored procedure, which gives better performance.
sp_executesql works as “Forced Statement Caching” while EXEC works as “Dynamic String Execution”
The fill factor option is provided for fine-tuning index data storage and performance. It is available on the CREATE INDEX and ALTER INDEX statements. The fill-factor value affects an index when it is created or rebuilt: it determines the percentage of space on each page filled with data, with the unused space reserved for future growth. For example, specifying a fill-factor value of 80 means that 20 percent of each page will be left empty (excluding the root page); as data fills the table, the index keys use this space. The fill-factor value is an integer from 0 to 100 representing a percentage. The server-wide default is 0, which is treated as 100.
The best fill factor depends on the purpose of the database and the type of clustered index. If the database is primarily for data retrieval, or the primary key is sequential, a high fill factor packs as much as possible into each index page. If the clustered index is non-sequential (such as a natural primary key), the table is susceptible to page splits, so use a lower fill factor and defragment the pages often. A sketch of setting the fill factor follows.
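A minimal sketch of setting an 80 percent fill factor at index (re)build time (MyTable, Col1, and IX_MyTable_Col1 are placeholder names):

-- Leave 20% free space in each leaf page when the index is built
CREATE INDEX IX_MyTable_Col1 ON MyTable (Col1) WITH (FILLFACTOR = 80);

-- Or on an existing index
ALTER INDEX IX_MyTable_Col1 ON MyTable REBUILD WITH (FILLFACTOR = 80);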
Here is the way to Retrieve Count of Fill-Factor Used in a Database as

Select fill_factor, Count(1) as NoOfIndexes From Sys.Indexes Group by fill_factor

More details and an in-depth article: http://www.mssqltips.com/sqlservertip/1940/understanding-sql-server-index-fill-factor-setting/
OR
http://www.sqlskills.com/blogs/kimberly/category/FILLFACTOR.aspx


The GROUP BY statement, used to summarize data, is almost as old as SQL itself. Microsoft added the ROLLUP and CUBE operators to strengthen the GROUP BY clause as far back as SQL Server 6.5; both are part of the GROUP BY clause of the SELECT statement, as are the COMPUTE and COMPUTE BY operators associated with it.

A newer feature, introduced in SQL Server 2008, is the GROUPING SETS clause, which allows us to easily specify combinations of field groupings in our queries to see different levels of aggregated data.
These operators generate result sets that contain both detail rows for each item in the result set and summary rows for each group showing the aggregate totals for that group. The GROUP BY clause can be used to generate results that contain aggregates for each group, but no detail rows.
COMPUTE and COMPUTE BY are supported for backward compatibility; the ROLLUP operator is preferred over either. The summary values generated by COMPUTE or COMPUTE BY are returned as separate result sets interleaved with the result sets returning the detail rows for each group, or as a result set containing the totals appended after the main result set. Handling these multiple result sets increases the complexity of the code in an application. Neither COMPUTE nor COMPUTE BY is supported with server cursors, whereas ROLLUP is. CUBE and ROLLUP generate a single result set containing embedded subtotal and total rows, and the query optimizer can sometimes generate more efficient execution plans for ROLLUP than it can for COMPUTE and COMPUTE BY. When GROUP BY is used without these operators, it returns a single result set with one row per group containing the aggregate subtotals for the group; there are no detail rows in the result set.
To demonstrate all these, let's create a sample table named SalesData:

-- Create a sample SalesData table
CREATE TABLE SalesData (EmpCode CHAR(5), SalesYear INT, SalesAmount MONEY)
-- Insert some rows into the SalesData table
INSERT INTO SalesData VALUES
             (1, 2005, 10000),(1, 2006, 8000),(1, 2007, 17000),
             (2, 2005, 13000),(2, 2007, 16000),(3, 2006, 10000),
             (3, 2007, 14000),(1, 2008, 15000),(1, 2009, 15500),
             (2, 2008, 19000),(2, 2009, 16000),(3, 2008, 8000),
             (3, 2009, 7500),(1, 2010, 9000),(1, 2011, 15000),
             (2, 2011, 5000),(2, 2010, 16000),(3, 2010, 12000),
             (3, 2011, 14000),(1, 2012, 8000),(2, 2006, 6000),
             (3, 2012, 2000)

ROLLUP: The ROLLUP operator generates reports that contain subtotals and totals. The ROLLUP option, placed after the GROUP BY clause, instructs SQL Server to generate an additional total row. Let's see an example using our test data from the SalesData table.
Select EmpCode,sum(SalesAmount) as SalesAMT from SalesData Group by ROLLUP(EmpCode)
will show the result with an extra total row whose EmpCode is NULL (result screenshot omitted).
In the result we can clearly see the sum across all EmpCode values in a row titled NULL. Someone once asked how to give this NULL a name, say 'Grand Total'; with a small trick using the GROUPING function we can get the desired output, and change the label wherever required:
Select case when grouping(EmpCode) = 1 THEN 'Grand Total' else EmpCode end as EmpCode,
       sum(SalesAmount) as SalesAMT
from SalesData Group by ROLLUP(EmpCode)
will show the result with the total row now labelled 'Grand Total' (result screenshot omitted).
In ROLLUP we can also specify multiple grouping columns, such as ROLLUP(EmpCode, SalesYear); the ROLLUP output is then a hierarchy of values over the selected columns, as in the sketch below.
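A sketch against the same SalesData table:

-- One subtotal row per EmpCode, plus one grand-total row
SELECT EmpCode, SalesYear, SUM(SalesAmount) AS SalesAMT
FROM SalesData
GROUP BY ROLLUP(EmpCode, SalesYear)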
CUBE : The CUBE operator is specified in the GROUP BY clause of a SELECT statement. The select list contains the dimension columns and aggregate function expressions. The GROUP BY specifies the dimension columns by using the WITH CUBE keywords. The result set contains all possible combinations of the values in the dimension columns, together with the aggregate values from the underlying rows that match that combination of dimension values. CUBE generates a result set that represents aggregates for all combinations of values as per the selected columns. Let see the examples as per our TestData from SalesData Table.
Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)

will show the aggregates for all combinations of SalesYear and EmpCode (result screenshot omitted), and

Select SalesYear, EmpCode, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)

will show the same aggregates with the columns reordered (result screenshot omitted).
The ROLLUP and CUBE aggregate functions generate subtotals and grand totals as separate rows, and supply a null in the GROUP BY column to indicate the grand total.
The difference between the CUBE and ROLLUP operators is that CUBE generates a result set that shows the aggregates for all combinations of values in the selected columns, whereas ROLLUP generates a result set that shows the aggregates for a hierarchy of values in the selected columns, providing only one level of summarization per column.
GROUPING SETS: The GROUPING SETS clause allows you to choose whether to see subtotals and grand totals in your result set. It gives us the versatility to:

  • include just one total
  • include different levels of subtotal
  • include a grand total
  • choose the position of the grand total in the result set
    Let's see examples:
Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by GROUPING SETS (SalesYear, EmpCode)



Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by GROUPING SETS (EmpCode, SalesYear)


COMPUTE & COMPUTE BY: The summary values generated by COMPUTE appear as separate result sets in the query results. The results of a query that include a COMPUTE clause are like a control-break report. This is a report whose summary values are controlled by the groupings, or breaks, that you specify. You can produce summary values for groups, and you can also calculate more than one aggregate function for the same group.
When COMPUTE is specified with the optional BY clause, there are two result sets for each group that qualifies for the SELECT:
  • The first result set for each group has the set of detail rows that contain the select list information for that group.
  • The second result set for each group has one row that contains the subtotals of the aggregate functions specified in the COMPUTE clause for that group.
When COMPUTE is specified without the optional BY clause, there are two result sets for the SELECT:
  • The first result set for each group has all the detail rows that contain the select list information.
  • The second result set has one row that contains the totals of the aggregate functions specified in the COMPUTE clause.
Comparing COMPUTE to GROUP BY
To summarize the differences between COMPUTE and GROUP BY:
  • GROUP BY produces a single result set. There is one row for each group containing only the grouping columns and aggregate functions showing the subaggregate for that group. The select list can contain only the grouping columns and aggregate functions.
  • COMPUTE produces multiple result sets. One type of result set contains the detail rows for each group containing the expressions from the select list. The other type of result set contains the subaggregate for a group, or the total aggregate for the SELECT statement. The select list can contain expressions other than the grouping columns or aggregate functions. The aggregate functions are specified in the COMPUTE clause, not in the select list.
    Let's see an example:
Select EmpCode,SalesYear,SalesAmount from SalesData order by EmpCode,SalesYear COMPUTE SUM(SalesAmount)

The COMPUTE and COMPUTE BY clauses are provided for backward compatibility only and are discontinued from SQL Server 2012; use ROLLUP, CUBE, or GROUPING SETS instead.
Magic Tables are invisible tables created by MS SQL Server during INSERT/UPDATE/DELETE operations on any table. These tables temporarily hold the affected rows while the DML statement completes.
Magic Tables are internal tables used by SQL Server to expose the recently inserted, deleted, and updated data of a table. When we insert into or delete from any table in SQL Server, the affected rows are also placed into the INSERTED or DELETED magic table, through which we can see the data that the statement is modifying. There are basically two magic tables in SQL Server, INSERTED and DELETED; an UPDATE is represented with both: the INSERTED table contains the new values, and the DELETED table contains the old values. Magic tables do not contain columns of the data types text, ntext, or image. They are maintained by SQL Server for internal processing whenever an INSERT, UPDATE, or DELETE occurs on a table.
In SQL Server 6.5, 7.0, and 2000, magic tables could be used only with triggers, while in SQL Server 2005, 2008, and 2008 R2 they can also be used without triggers: non-trigger usage relies on the OUTPUT clause introduced in SQL Server 2005.

Suppose we have an Employee table. We will create triggers to see the data within the virtual tables INSERTED and DELETED, and then do the same without triggers.

For INSERTED virtual table, using Trigger ( Magic Tables with Triggers)
CREATE TRIGGER trg_Emp_Ins ON Employee
FOR INSERT
AS
BEGIN
    /* Here you can write your required code; this is only a demonstration of the magic tables. */
    PRINT 'Data in INSERTED Table'
    SELECT * FROM INSERTED -- shows the data in the INSERTED virtual table
    PRINT 'Data in DELETED Table'
    SELECT * FROM DELETED  -- shows the data in the DELETED virtual table
END
-- Now insert a new record into the Employee table to see the data within the INSERTED virtual table
INSERT INTO Employee(Name, BasicSalary) VALUES('Virendra', 2000)
For INSERTED virtual table, without Trigger ( Magic Tables with Non-Triggers)

-- Use the INSERTED magic table with the OUTPUT clause to insert values into a temp table or table variable

create table #CopyEMP(CName varchar(12),CDept varchar(12))
-- Now insert a new record into the Employee table to see the data within the INSERTED virtual table
INSERT INTO Employee(Name, BasicSalary) OUTPUT INSERTED.Name, INSERTED.BasicSalary INTO #CopyEMP VALUES('Ram', 100)
SELECT * FROM #CopyEMP

For DELETED virtual table, using Trigger ( Magic Tables with Triggers)

CREATE TRIGGER trg_Emp_Del ON Employee
FOR DELETE
AS
BEGIN
    /* Here you can write your required code; this is only a demonstration of the magic tables. */
    PRINT 'INSERTED Table'
    SELECT * FROM INSERTED -- shows the data in the INSERTED virtual table
    PRINT 'DELETED Table'
    SELECT * FROM DELETED  -- shows the data in the DELETED virtual table
END
-- Now delete a few records from the Employee table to see the data within the DELETED virtual table
DELETE Employee WHERE BasicSalary > 10000
For DELETED virtual table, without Trigger ( Magic Tables with Non-Triggers)

-- Use the DELETED magic table with the OUTPUT clause to insert values into a temp table or table variable

CREATE TABLE #CopyEMP(CName varchar(12), CDept varchar(12))
-- Now delete records from the Employee table to see the data within the DELETED virtual table
DELETE Employee OUTPUT DELETED.Name, DELETED.BasicSalary INTO #CopyEMP WHERE BasicSalary > 10000

SELECT * FROM #CopyEMP

SQL Server 2005 introduced four new ranking functions: ROW_NUMBER, RANK, DENSE_RANK, and NTILE. These functions allow you to analyze data and provide ranking values to result rows of a query. For example, you might use these ranking functions for assigning sequential integer row IDs to result rows or for presentation, paging, or scoring purposes.
All four ranking functions follow a similar syntax pattern:

function_name() OVER([PARTITION BY partition_by_list] ORDER BY order_by_list)

The basic syntax follows.

ROW_NUMBER() OVER ([<partition_by_clause>] <order_by_clause>)
RANK() OVER ([<partition_by_clause>] <order_by_clause>)
DENSE_RANK() OVER([<partition_by_clause>]<order_by_clause>)
NTILE(integer_expression) OVER ([<partition_by_clause>] <order_by_clause>)



Ranking functions are a subset of the built in functions in SQL Server. They are used to provide a rank of one kind or another to a set of rows in a partition. The partition can be the full result set, if there is no partition. Otherwise, the partitioning of the result set is defined using the partition clause of the OVER clause. The order of the ranking within the partition is defined by the order clause of OVER. Order is required even though partition is optional.

ROW_NUMBER: The ROW_NUMBER function returns a sequential value for every row in the result set. It assigns the value 1 to the first row and increments the number for each subsequent row.

Syntax:

SELECT ROW_NUMBER() OVER (ORDER BY column-name), columns FROM table-name

OVER – Specify the order of the rows

ORDER BY – Provide sort order for the records


RANK: The RANK function returns a rank based on the sort order. When two rows have the same ORDER BY value, both rows get the same rank, and the ranking then skips ahead by the number of tied rows.

Syntax:
SELECT RANK() OVER ([< partition_by_clause >] < order_by_clause >)
Partition_by_clause – the result set is grouped into partitions within which the RANK function is applied.
Order_by_clause – orders the result set within each partition.



In the example output (screenshot omitted), ranks are assigned based on the sort order of Employee Name. The first two rows in the list have the same Employee Name and are given the same rank; the next distinct name then receives its rank after a gap, because two rows are ranked higher. Therefore, the RANK function does not always return consecutive integers.


DENSE_RANK: The DENSE_RANK function is very similar to RANK but returns ranks without any gaps: it ranks each unique ORDER BY value sequentially.

Syntax:
SELECT DENSE_RANK() OVER ([< partition_by_clause >] < order_by_clause >)
Partition_by_clause – the result set is grouped into partitions within which the DENSE_RANK function is applied.
Order_by_clause – orders the result set within each partition.


NTILE: NTILE() splits the set of rows into the specified number of groups. It splits the rows equally across groups when the number of rows is divisible by the number of groups. If the number of rows in the table is not divisible by the group count (integer_expression), the leading groups have one row more than the trailing groups. For example, if the total number of rows is 6 and the number of groups is 4, the first two groups will have 2 rows and the two remaining groups will have 1 row each.

Syntax:
SELECT NTILE(integer_expression) OVER ([< partition_by_clause >] < order_by_clause >)
(integer_expression) – The number of groups into which each partition must be divided.
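Putting the four functions together, a minimal sketch (assuming a simple Employee table with an EmpName column):

SELECT EmpName,
       ROW_NUMBER() OVER (ORDER BY EmpName) AS RowNum,   -- 1,2,3,4,... always sequential
       RANK()       OVER (ORDER BY EmpName) AS Rnk,      -- ties share a rank, gaps follow ties
       DENSE_RANK() OVER (ORDER BY EmpName) AS DenseRnk, -- ties share a rank, no gaps
       NTILE(4)     OVER (ORDER BY EmpName) AS Quartile  -- distributes rows into 4 groups
FROM Employee;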



  • The EXCEPT operator returns the rows that are only in the first result set but not in the second. EXCEPT introduced from SQL Server 2005.
  • NOT IN returns all rows from the first result set that do not exist in the second SELECT statement.
  • When we combine two queries using EXCEPT clause, it will returns distinct rows from the first SELECT statement that are not returned by the second one.
  • The EXCEPT clause is a set operator like UNION, and it works the same way as the MINUS clause in Oracle. The syntax of the EXCEPT clause is as follows:

SELECT column1 [, column2 ]
FROM table1 [, table2 ]
[WHERE condition]

EXCEPT

SELECT column1 [, column2 ]
FROM table1 [, table2 ]
[WHERE condition]

  • To restate the difference: EXCEPT returns all distinct rows from the first SELECT statement that do not exist in the rows returned by the second, while NOT IN returns all rows, duplicates included, from the first SELECT statement whose lookup value does not exist in the second.
Below is example for same..

-- Let's create two sample tables (table variables here)

DECLARE @VirendraTestTable1 TABLE (id INT, course VARCHAR(50));
DECLARE @VirendraTestTable2 TABLE (id INT, course VARCHAR(50));

-- Insert some sample data into @VirendraTestTable1
INSERT INTO @VirendraTestTable1
VALUES (1, 'ASP .NET'), (2, 'SQL SERVER'), (2, 'SQL SERVER'), (3, 'FOXPRO'), (3, 'FOXPRO');

-- Insert some sample data into @VirendraTestTable2
INSERT INTO @VirendraTestTable2
VALUES (1, 'ASP .NET'), (2, 'SQL SERVER'), (2, 'SQL SERVER');

-- Run the query with the EXCEPT operator; only distinct rows will return
PRINT 'EXCEPT output';
SELECT id, course FROM @VirendraTestTable1
EXCEPT
SELECT id, course FROM @VirendraTestTable2;

-- Run the query with the NOT IN operator; duplicate rows will exist in the result
PRINT 'NOT IN output';
SELECT id, course FROM @VirendraTestTable1
WHERE id NOT IN (SELECT id FROM @VirendraTestTable2);





Hence the conclusion is:

EXCEPT is defined in terms of distinctness: for example, (1 is distinct from 1) is false, (1 is distinct from NULL) is true, and (NULL is distinct from NULL) is false.
NOT IN is defined in terms of equality: for example, (1 = 1) is true, while (1 = NULL) and (NULL = NULL) are unknown.

Hi, it's a headache for developers/DBAs to write or read a complex SQL query that uses a number of joins. Dealing with sub-queries often requires selecting part of the data from a sub-query, or joining a sub-query's data with many other tables, so we either name our sub-queries with aliases or use them inline; as business requirements grow more and more complex, the resulting query becomes unmaintainable. A Common Table Expression (CTE) is a way to define a sub-query once, name it with an alias, and later refer to the same data by that alias, just as with a normal table. CTEs are part of the ANSI SQL standard.
Common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement.
A CTE lets us define the intermediate result set up front and refer to it later, when we actually bind the data into the output, whereas a sub-query is written inline at the point of use.
The basic syntax structure for a CTE is:
WITH expression_name [ ( column_name [,...n] ) ]
AS
( CTE_query_definition )


Let see an example –

Using Sub-Query
SELECT * FROM
    (SELECT ED.EmpName AS Name, ED.EmpDOB AS Age, AD.EmpAddress AS Address
     FROM AddressDetails AD
     INNER JOIN EmployeeDetails ED ON ED.EmpCode = AD.EmpCode) AA
WHERE AA.Age > 50 ORDER BY AA.Name

The same query implemented as a CTE:
WITH AA (Name, Age, Address) -- provide column names for the CTE 'AA'
AS
(SELECT ED.EmpName, ED.EmpDOB, AD.EmpAddress
 FROM AddressDetails AD
 INNER JOIN EmployeeDetails ED ON ED.EmpCode = AD.EmpCode)
SELECT * FROM AA WHERE AA.Age > 50 ORDER BY AA.Name
CTE offers the advantages of improved readability and ease in maintenance of complex queries. The query can be divided into separate, simple, logical building blocks. These simple blocks can then be used to build more complex, interim CTEs until the final result set is generated.
Common Table Expressions offer the same functionality as a view, but are ideal for one-off usages where you don't necessarily need a view defined for the system. A CTE can create a recursive query; it can substitute for a view when the general use of a view is not required, that is, when you do not have to store the definition in metadata; and the resulting table can be referenced multiple times in the same statement.
As per MSDN, the following guidelines apply to nonrecursive common table expressions:

  • A CTE must be followed by a single SELECT, INSERT, UPDATE, MERGE, or DELETE statement that references some or all the CTE columns. A CTE can also be specified in a CREATE VIEW statement as part of the defining SELECT statement of the view.
  • Multiple CTE query definitions can be defined in a nonrecursive CTE. The definitions must be combined by one of these set operators: UNION ALL, UNION, INTERSECT, or EXCEPT.
  • A CTE can reference itself and previously defined CTEs in the same WITH clause. Forward referencing is not allowed.
  • Specifying more than one WITH clause in a CTE is not allowed. For example, if a CTE_query_definition contains a subquery, that subquery cannot contain a nested WITH clause that defines another CTE.
  • The following clauses cannot be used in the CTE_query_definition:
    • COMPUTE or COMPUTE BY
    • ORDER BY (except when a TOP clause is specified)
    • INTO
    • OPTION clause with query hints
    • FOR XML
    • FOR BROWSE
  • When a CTE is used in a statement that is part of a batch, the statement before it must be followed by a semicolon.
  • A query referencing a CTE can be used to define a cursor.
  • Tables on remote servers can be referenced in the CTE.
  • When executing a CTE, any hints that reference a CTE may conflict with other hints that are discovered when the CTE accesses its underlying tables, in the same manner as hints that reference views in queries. When this occurs, the query returns an error.
  • When a CTE is the target of an UPDATE statement, all references to the CTE in the statement must match. For example, if the CTE is assigned an alias in the FROM clause, the alias must be used for all other references to the CTE. Ambiguous CTE references may produce unexpected join behavior and unintended query results
Guidelines for Defining and Using Recursive Common Table Expressions
The following guidelines apply to defining a recursive common table expression (a minimal worked example follows the list):
  • The recursive CTE definition must contain at least two CTE query definitions, an anchor member and a recursive member. Multiple anchor members and recursive members can be defined; however, all anchor member query definitions must be put before the first recursive member definition. All CTE query definitions are anchor members unless they reference the CTE itself.
  • Anchor members must be combined by one of these set operators: UNION ALL, UNION, INTERSECT, or EXCEPT. UNION ALL is the only set operator allowed between the last anchor member and first recursive member, and when combining multiple recursive members.
  • The number of columns in the anchor and recursive members must be the same.
  • The data type of a column in the recursive member must be the same as the data type of the corresponding column in the anchor member.
  • The FROM clause of a recursive member must refer only one time to the CTE expression_name.
  • The following items are not allowed in the CTE_query_definition of a recursive member:
    • SELECT DISTINCT
    • GROUP BY
    • HAVING
    • Scalar aggregation
    • TOP
    • LEFT, RIGHT, OUTER JOIN (INNER JOIN is allowed)
    • Subqueries
    • A hint applied to a recursive reference to a CTE inside a CTE_query_definition.
    The following guidelines apply to using a recursive common table expression:
  • All columns returned by the recursive CTE are nullable regardless of the nullability of the columns returned by the participating SELECT statements.
  • An incorrectly composed recursive CTE may cause an infinite loop. For example, if the recursive member query definition returns the same values for both the parent and child columns, an infinite loop is created. To prevent an infinite loop, you can limit the number of recursion levels allowed for a particular statement by using the MAXRECURSION hint and a value between 0 and 32,767 in the OPTION clause of the INSERT, UPDATE, MERGE, DELETE, or SELECT statement. This lets you control the execution of the statement until you resolve the code problem that is creating the loop. The server-wide default is 100. When 0 is specified, no limit is applied. Only one MAXRECURSION value can be specified per statement.
  • A view that contains a recursive common table expression cannot be used to update data.
  • Cursors may be defined on queries using CTEs. The CTE is the select_statement argument that defines the result set of the cursor. Only fast forward-only and static (snapshot) cursors are allowed for recursive CTEs. If another cursor type is specified in a recursive CTE, the cursor type is converted to static.
  • Tables on remote servers may be referenced in the CTE. If the remote server is referenced in the recursive member of the CTE, a spool is created for each remote table so the tables can be repeatedly accessed locally. If it is a CTE query, Index Spool/Lazy Spools is displayed in the query plan and will have the additional WITH STACK predicate. This is one way to confirm proper recursion.
  • Analytic and aggregate functions in the recursive part of the CTE are applied to the set for the current recursion level and not to the set for the CTE. Functions like ROW_NUMBER operate only on the subset of data passed to them by the current recursion level and not the entire set of data passed to the recursive part of the CTE.
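To make the anchor-member/recursive-member structure concrete, here is a minimal sketch walking an organisation hierarchy (the Emp table with EmpID and ManagerID columns is hypothetical):

WITH OrgChart (EmpID, ManagerID, Lvl)
AS
(
    -- Anchor member: the root(s) of the hierarchy
    SELECT EmpID, ManagerID, 0 AS Lvl
    FROM Emp
    WHERE ManagerID IS NULL
    UNION ALL
    -- Recursive member: INNER JOINs back to the CTE itself
    SELECT e.EmpID, e.ManagerID, oc.Lvl + 1
    FROM Emp e
    INNER JOIN OrgChart oc ON e.ManagerID = oc.EmpID
)
SELECT EmpID, ManagerID, Lvl
FROM OrgChart
OPTION (MAXRECURSION 100); -- guard against an incorrectly composed infinite loop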
SQL injection is a technique often used to attack a website. The attacker takes advantage of an insecure application by running SQL commands against the database through the website's own GUI and steals information from it. The attack is possible in applications that build SQL queries in code: the attacker tries to inject their own SQL into the statement that the application will use to query the database.
Examples-
1) For example, suppose the query string below backs a search feature on a website, and a user enters "VIRENDRA" as the keyword to search. If the code-behind concatenates the keyword directly into the SQL statement, it looks like this:
String sql = "Select EmpName, City from EmployeeMaster where EmpName like '%" + txtSearch.Text + "%'";
But the attacker might enter the keyword like
' UNION SELECT name, type, id from sysobjects;--
This way attacker can get details of all the objects of the application database and using that attacker can steal further information.

2) Web applications use string queries, where the string contains both the query itself and its parameters. The string is built using server-side script languages such as ASP, JSP and CGI, and is then sent to the database server as a single SQL statement. The following example demonstrates an ASP code that generates a SQL query.

sql_query = "SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = " & Request.QueryString("ProductID")
The call Request.QueryString(“ProductID”) extracts the value of the Web form variable ProductID so that it can be appended as the SELECT condition.
When a user enters the following URL:
http://www.mydomain.com/products/products.asp?productid=123
The corresponding SQL query is executed:

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123
An attacker may abuse the fact that the ProductID parameter is passed to the database without sufficient validation and manipulate the parameter's value to build malicious SQL statements. For example, setting the value "123 OR 1=1" for the ProductID variable results in the following URL:

http://www.mydomain.com/products/products.asp?productid=123 or 1=1

The corresponding SQL Statement is:

SELECT ProductName, ProductDescription FROM Products WHERE ProductNumber = 123 OR 1=1
This condition would always be true and all ProductName and ProductDescription pairs are returned. The attacker can manipulate the application even further by inserting malicious commands. For example, an attacker can request the following URL:

http://www.mydomain.com/products/products.asp?productid=123; DROP TABLE Products
In this example the semicolon is used to pass the database server multiple statements in a single execution. The second statement is “DROP TABLE Products” which causes SQL Server to delete the entire Products table.

An attacker may use SQL injection to retrieve data from other tables as well. This can be done using the SQL UNION SELECT statement. The UNION SELECT statement allows the chaining of two separate SQL SELECT queries that have nothing in common. For example, consider the following SQL query:

SELECT ProductName, ProductDescription FROM Products WHERE ProductID = '123' UNION SELECT Username, Password FROM Users;
The result of this query is a table with two columns, containing the results of the first and second queries, respectively. An attacker may use this type of SQL injection by requesting the following URL:

http://www.mydomain.com/products/products.asp?productid=123 UNION SELECT user-name, password FROM USERS
The security model used by many Web applications assumes that an SQL query is a trusted command. This enables attackers to exploit SQL queries to circumvent access controls, authentication, and authorization checks. In some instances, SQL queries may allow access to host operating-system-level commands through stored procedures, which are SQL procedures usually bundled with the database server. For example, the extended stored procedure xp_cmdshell executes operating system commands in the context of a Microsoft SQL Server. Using the same example, the attacker can set the value of ProductID to "123;EXEC master..xp_cmdshell 'dir' --", which returns the list of files in the current directory of the SQL Server process; any other DOS command can be passed through xp_cmdshell in the same way.
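The standard defence, as noted in the sp_executesql discussion earlier, is to pass user input as typed parameters instead of concatenating it into the SQL text. A minimal T-SQL sketch (the table and parameter names are illustrative only):

-- User input travels as a typed parameter, never as part of the SQL text
DECLARE @search nvarchar(100) = N'VIRENDRA';
EXEC sp_executesql
     N'SELECT EmpName, City FROM EmployeeMaster WHERE EmpName LIKE ''%'' + @kw + ''%''',
     N'@kw nvarchar(100)',
     @kw = @search;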



Hi, as DBAs we often face the question from auditors or clients: does your SQL Server follow hardening best practices? Here are a few:
  • During installation, Install only required components, When the SQL Server installation is complete, harden the SQL Server environment.
  • After the installation, use the SQL Server Configuration Manager tool in order to disable unnecessary features and services.
  • Install the most recent critical fixes and service packs for both Windows and SQL Server.
  • When you’re selecting authentication modes, Windows Authentication is a more secure choice
  • If there is still a need to use SQL Authentication – enforce strong password policy.
  • Do not use the SA account for day-to-day administration, logging on to the server remotely, or having applications use it to connect to SQL. It is best if the SA account is disabled and renamed.
  • Create a role-based security policy with the Security Configuration Wizard tool.
  • Create policies with Policy Based Management enable/ disable unnecessary features and services.
  • After the server has been hardened, periodically assess the server's security using the MBSA (Microsoft Baseline Security Analyzer) and SQL Server BPA (Best Practices Analyzer).
  • For production SQL Servers running mission-critical databases, either hide the instance or disable the SQL Server Browser service.
  • Change the default ports associated with the SQL Server installation to put off hackers from port-scanning the server.
  • Enable a firewall to filter unnecessary and unknown traffic.
  • At the very least, set security auditing to failed login attempts; otherwise, both failed and successful logins should be captured and monitored.
  • If upgrading previous versions of SQL Server, remove the BUILTIN/Administrators group from the SQL Server Logins.
  • Use the IIS Lockdown and URL Scan tools to harden IIS.
Dear Friends, as you know, it's a great option to delete millions or billions of records in batched transactions: the log does not grow drastically, and recovery is faster in the event of failure.
It's my observation that using a CHECKPOINT in each iteration can be very fruitful in a production environment, because the Database Engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on the database: a checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk, and also records information about the transaction log.
Let's see an example.
Please note: one point to keep in mind is that if we use any date comparison to delete records, storing that value in a variable is much faster than computing it directly in the query's WHERE clause. Suppose we have to delete records older than 60 days; setting the date once up front makes a difference instead of calculating it for each row.
DECLARE @DelDate DATETIME;
SET @DelDate = DATEADD(d, -60, GETDATE());
WHILE 1 = 1
BEGIN
    -- Choose the batch size to suit your row sizes.
    DELETE TOP (1000) FROM MyTableName WHERE column_name < @DelDate;
    IF @@ROWCOUNT < 1000 BREAK;
    CHECKPOINT;
END
One more approach: this operation can generate a lot of disk I/O, so it is recommended to put a WAITFOR DELAY 'time_to_pass' statement in the loop to reduce the stress on your server. Waiting even a quarter of a second between DELETE executions can reduce the load on your production environment.
DECLARE @DelDate DATETIME;
SET @DelDate = DATEADD(d, -60, GETDATE());
WHILE 1 = 1
BEGIN
    -- Choose the batch size to suit your row sizes.
    DELETE TOP (1000) FROM MyTableName WHERE column_name < @DelDate;
    IF @@ROWCOUNT < 1000 BREAK;
    CHECKPOINT;
    WAITFOR DELAY '00:00:00.250'; -- wait a quarter of a second
END
Hope you like this way to get better performance .
As SQL Server administrators, we always have to watch some server-specific settings; below are a few of them.

  • The login auditing is configured to “Failed logins only” by default. This is to record and audit the login failure attempts and is a recommended security best practice. Consider enabling Auditing for failed logins.
  • Use 2048 bit and above to create asymmetric keys.
  • There are SQL Server Agent job steps that use the incorrect brackets to call out tokens. Starting with SQL Server 2005, the token syntax changed, and square brackets – [ ], are no longer used to call out SQL Server Agent job step tokens.
  • Windows Authentication is the default authentication mode, and is much more secure than SQL Server Authentication. Windows Authentication uses Kerberos security protocol, provides password policy enforcement with regard to complexity validation for strong passwords, provides support for account lockout, and supports password expiration.
  • The SQL Server error log contains messages that indicate either the failure of database file autogrow operations or that these operations took a long period of time to complete. An application will have timeouts, or query performance will be decreased, if database file growth happens during regular activity. Enable Instant File Initialization, pre-autogrow the database files to the required size, and use a growth value in absolute size.
  • Review the backup schedules of all critical databases and schedule backups based on the various tolerance levels.
  • If the combination of the affinity mask and affinity I/O mask is incorrect, reconfigure them so that the CPU masks do not overlap.
  • If the instance of SQL Server has recorded one or more potentially dangerous data inconsistency or data corruption errors in the past, perform database consistency checks on each affected database and take the appropriate action.
  • Databases that keep their data and log files on the same volume can cause contention for that device and result in poor performance; it is recommended that data and log files be placed on separate volumes.
  • Inconsistencies in a database can threaten the availability of the entire database, or portions of it, to the application. Schedule DBCC CHECKDB to perform a consistency check of the problematic database.
  • Never place database files and backup files on the same volume, Always perform backups to a different volume or to a remote location/SAN.
  • Always avoid the Shrink option to manage the database file sizes. Frequent grow and shrink operations can lead to excessive fragmentation and performance issues.
  • If databases do not have CHECKSUM page protection, enable it on those databases: CHECKSUM protection provides a high level of data-integrity assurance when reading database pages from disk.
  • Keep the Auto Close database option disabled: repeated opening and closing of the database can result in performance problems and timeout issues.
  • Be deliberate about the recovery model: the FULL or BULK_LOGGED recovery model is recommended on production servers.
  • Databases whose transaction log file contains a high number of virtual log files (VLFs) can suffer poor database recovery performance, and consequently the various maintenance operations that depend on recovery also suffer; reconfigure the transaction log files of those databases. For details please visit http://support.microsoft.com/kb/2028436
  • If the default extended event health session is inactive or has been changed, reset it. The default extended event health session records critical events that can be useful for root-cause investigation of certain server conditions.
  • If the max degree of parallelism option is not set to the recommended value, use the sp_configure stored procedure to set it to 8 or less (see the sketch after this list); setting this option too high often results in unwanted resource consumption and performance degradation.
  • Incorrect partition alignment can result in adverse I/O performance impacts on SQL Server queries and operations.The hard-disk partitions are configured optimally if partitions have a starting offset of 65536 bytes, or a multiple of 65536 bytes.
  • Using cross-database or distributed transactions on a mirrored database could lead to inconsistencies after a failover.
  • Enabling the guest account allows anonymous connections to your databases. The guest account has access to all databases except master, tempdb, and msdb. The guest user cannot be dropped; however, you can run REVOKE CONNECT FROM GUEST within any database other than master or tempdb to revoke the guest user's CONNECT permission and disable it.
  • Event ID 12 indicates memory corruption due to a firmware or BIOS issue. This problem can result in system instability and data corruption; check with the hardware manufacturer for updated firmware and BIOS.
  • SQL Server reports Event ID 833 when I/O requests take longer than 15 seconds to finish. This indicates a potential problem with the disk I/O subsystem or its configuration, and it affects SQL Server performance. Use System Monitor disk counters to measure disk throughput and response rates, and make the appropriate configuration changes to ensure disk response times are acceptable.
  • Event ID 825 indicates that SQL Server was unable to read data from the disk on the first try. Even though the retry succeeded, this is usually an indication of an underlying issue that needs investigation and fixing before it becomes worse.
  • Incorrect startup parameters for the SQL Server service can result in unexpected engine behavior and startup problems. Use the SQL Server error log to review the startup parameters the instance is currently using, then use SQL Server Configuration Manager to correct them.
  • Disable the lightweight pooling option using the sp_configure stored procedure: fiber-mode scheduling can decrease the performance of routine operations by inhibiting the regular benefits of context switching, and some components of SQL Server cannot function correctly in fiber mode.
  • If the locks option is set to a nonzero value, large transactions can encounter 'out of locks' errors when the specified value is exceeded. Use the sp_configure system stored procedure to return locks to its default setting.
  • The Microsoft Distributed Transaction Coordinator (MSDTC) must be configured properly.
  • The network packet size should not be set too high: excessive memory allocations are needed to support large network packets and can cause memory pressure and other stability issues on the server. Keep the network packet size option at its default value.
  • Granting users who are not administrators more than read permission on the Binn folder represents a potential security risk; only administrators should have full control permission on the Binn folder.
  • Check the permissions on the Data folder. Only server administrators and SQL Server service groups should have full control permission on the Data folder; other users and groups should not have any access.
  • The public database role has EXECUTE permissions on some of the Data Transformation Services (DTS) system stored procedures in the msdb database. A potential security issue occurs if an application uses a login that owns DTS packages to connect to an instance of SQL Server; in this scenario, there is a risk that a SQL injection attack could modify or delete existing DTS packages. Use the sp_dts_secure stored procedure to revoke public access to the DTS stored procedures.
  • Every login created on the server is a member of the public server role, so if server permissions are granted to that role, every login on the server will have them. Do not grant server permissions to the public server role.
  • Frequently review Event Log entries that indicate crashes and apply the solutions provided.
  • Enable "Enforce password policy" and "Enforce password expiration" for all SQL Server logins; use ALTER LOGIN (Transact-SQL) to configure the password policy for a SQL Server login. Password complexity policies are designed to deter brute-force attacks by increasing the number of possible passwords; disabling the password policy could allow a user to create a SQL Server login with a weak password.
  • The sysadmin role may not be applied correctly. Sysadmin role membership should be granted to a limited set of users; check the membership and determine whether users other than Local System, the SQL Server built-in administrator (sa), and the SQL Server engine and Agent service accounts or service SIDs have been granted sysadmin permission. Grant sysadmin role membership only to users who require it; granting sysadmin membership to users who do not need it represents a potential security risk.
  • If the public role in msdb is enabled for the SQL Server Agent proxy account, remove it. Typically, the proxy account is given elevated privileges, so someone using the public role in msdb, which has fewer privileges, could take advantage of the higher privilege level.
  • Very big SQL Server error log files could be due to repetitive errors or informational messages; if the error logs are growing because of frequent error conditions, this will cause stability and availability problems. Examine the error log to find out why it is growing; if the growth is from expected information, cycle the error logs frequently and configure more error logs to be retained.
  • Do not install SQL Server on a computer that is a domain controller. Installing a SQL Server failover cluster instance on a domain controller is not supported.
  • If SQL Server uses the default value for its memory configuration (the -g parameter) and is also configured to use CLR integration, it may encounter various out-of-memory conditions and associated performance problems. To configure enough memory to handle CLR integration and other needs, use the -g startup parameter.
  • To reduce contention in the tempdb database, increase the number of tempdb data files and configure the required startup trace flags. Under a heavy load, the tempdb database can become a single point of contention and affect concurrency and performance.
  • Use AES 128-bit or greater to create symmetric keys for data encryption. If AES is not supported by your operating system version, use 3DES. Do not create symmetric keys in the databases shipped with SQL Server, such as the master, msdb, model, and tempdb databases.
  • Running the SQL Server Agent service or FDHOST Launcher service under a highly privileged account represents a potential security risk; use a specific low-privilege user account or domain account to run these services.
  • Avoid using unsupported .NET Framework assemblies. If you must use them, always update them in the database when you update them in the GAC.
  • If a database's collation differs from that of the model database, it also differs from the collation used by the tempdb database, and there may be collation conflict errors and unexpected behavior when different collations are used to compare data in the user database against tempdb. To resolve this, rebuild the user database to match the model database's collation, or specify COLLATE when creating temp tables.
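Several of these checks come down to a few lines of T-SQL. Below is a minimal sketch applying the sp_configure recommendations above and disabling the guest user; the values and the [VirendraTest] database are only examples, so adjust them for your own server before running anything in production.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;  -- 8 or less, per the recommendation above
EXEC sp_configure 'lightweight pooling', 0;        -- keep fiber mode disabled
EXEC sp_configure 'locks', 0;                      -- 0 = dynamic lock management (the default)
RECONFIGURE;
GO
-- Disable the guest user in a user database (not in master or tempdb)
USE [VirendraTest];
REVOKE CONNECT FROM GUEST;
GO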
Backup is a key component of any disaster recovery strategy, and a valid, restorable backup is the one thing that matters when the moment comes to execute a real disaster recovery scenario during an emergency downtime.

Here are some points to follow:

Make sure you are not storing your backups in the same physical location as the database files. When a physical drive goes bad, you should be able to use the other drive or the remote location where the backups are stored in order to perform a restore. Keep in mind that you could create several logical volumes or partitions from the same physical disk drive, so carefully study the disk partition and logical volume layout before choosing a storage location for the backups.
It is necessary to restore the backups on a test server and verify that you can restore with all the options and conditions you will need during a planned or unplanned downtime. Use the verification options provided by the BACKUP T-SQL command, SQL Server maintenance plans, or whatever other backup software or solution you are using.
It is also always recommended to use advanced features like BACKUP CHECKSUM to detect problems with the backup media itself, as below:

BACKUP DATABASE [VirendraTest] TO DISK = N'D:\Database\VirendraTest.Bak' WITH CHECKSUM;
GO
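
To later confirm that the backup file is still readable without actually restoring it, RESTORE VERIFYONLY can re-validate those checksums; a minimal sketch against the same file:

RESTORE VERIFYONLY FROM DISK = N'D:\Database\VirendraTest.Bak' WITH CHECKSUM;
GO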

As we know, there are 5 system databases: MASTER, MSDB, MODEL, and TEMPDB (all of these are primary system databases) plus the hidden, read-only RESOURCE database.
Primary System Databases


Master – Contains information about logins and about all other databases.
MSDB – Contains jobs, operators, alerts, backup and restore history, Database Mail information, etc.
MODEL – Contains the model for all new databases. If you want certain objects to exist in all new databases, this is where you configure them.
TEMPDB – Recreated each time SQL Server starts, so there is no need to back it up.
Resource Database
The Resource database is a read-only, hidden database that contains all the system objects included with SQL Server. The DBA needs to perform a file-based copy of the mssqlsystemresource.mdf and mssqlsystemresource.ldf files of the Resource database, as SQL Server doesn't support backing it up. In SQL Server 2005 the Resource database is available in the "<drive>:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\" location, and in SQL Server 2008 it is available in the "<drive>:\Program Files\Microsoft SQL Server\MSSQL10.<instance_name>\MSSQL\Binn\" location.


/* Copy Resource Database Files Using XP_CMDSHELL */

EXEC xp_cmdshell 'COPY /Y "D:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn\mssqlsystemresource.mdf" "E:\SystemDatabaseBackups"'
GO
EXEC xp_cmdshell 'COPY /Y "D:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\Binn\mssqlsystemresource.ldf" "E:\SystemDatabaseBackups"'
GO
Hi Guys, here are some basic tips on how to write an optimized stored procedure.

  • It is always better to include the table owner (schema) in a query, like SELECT col1, col2, ..., coln FROM dbo.TableName. This way, the SQL query optimizer does not have to decide whether to retrieve from dbo.TableName or another owner's table, and recompilation is avoided.
  • Always use the TRUNCATE TABLE statement instead of DELETE if you need to delete all rows from a table. It is much faster than a DELETE statement without any conditions, because TRUNCATE TABLE releases all the space occupied by the table's data and indexes without logging the individual row deletes.
  • Don't use the "sp_" prefix in a stored procedure name, as the "sp_" prefix is reserved for system stored procedures. Any stored procedure that has the "sp_" prefix causes an extra lookup in the master database. If a stored procedure has the same name in both a user database and a system database, the stored procedure in the user database will execute, but SQL Server will first look for the procedure in the Resource database and only then in the user database (for SQL Server 2005/2008/2012), causing an extra burden on the server.
  • To get the number of rows in a table, never use a "SELECT COUNT(*)" statement; there is a much faster way to achieve this goal: SELECT rows FROM sysindexes WHERE id = OBJECT_ID('Table_Name') AND indid < 2
  • Include the "SET NOCOUNT ON" statement in your stored procedures to greatly reduce network traffic.
  • Try to use table variables instead of temporary tables, as temp tables can cause stored procedures to recompile (from SQL Server 2005 onwards, using a temporary table does not always cause recompilation, but adding rows to a temporary table may). Table variables were designed specifically to guard against stored procedure recompiles during execution. If the result set does not contain a huge number of records, stick to a table variable; otherwise a temp table has its advantages. There is a misconception that temp tables always use the tempdb database while table variables do not: table variables also use tempdb after a certain size, so be careful to use them only with modest amounts of data, otherwise you can easily bring the server down. A procedure template applying these tips follows this list.
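Here is a minimal sketch of a procedure applying the tips above; the procedure name is illustrative, and the dbo.Emp1 sample table is reused from other posts on this blog.

-- No "sp_" prefix, schema-qualified table, NOCOUNT on, explicit column list
CREATE PROCEDURE dbo.usp_GetEmployeeByID
    @ID int
AS
BEGIN
    SET NOCOUNT ON;       -- suppress "rows affected" messages on the wire
    SELECT [ID], [Name], [Basic]
    FROM dbo.Emp1         -- owner-qualified to avoid name resolution overhead
    WHERE [ID] = @ID;
END
GO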
Here are the best practices used for stored procedures.
Stored Procedure Tuning
You can also visit http://youtu.be/A-17sMgQqnM
You may be confused trying to find this question's answer: after lots of googling, you may read in one place that shrinking data files is good, and in other places that it's bad.

A shrink operation on a data file tries to move the database page nearest the end of the file down toward the start of the file. This creates “empty” space at the end of the data file that can be returned to the OS. In other words, the data file is made physically smaller.

A shrink operation on a transaction log file, on the other hand, doesn’t move anything around—it simply removes the empty portion of the transaction log at the end of the file, as long as the transaction log records are not being retained for any reason. If the operation is successful, the log file is made physically smaller.

The confusion comes from the side effects of the two operations, and when they should be performed.

People are advised (or decide) to shrink data files to reclaim space. It may be that their index maintenance job causes the data files to grow, or their drive is getting full, and the natural reaction is to reclaim some of this “wasted” space. However, it is likely this space will be needed again and often it’s better to leave the remaining free space available to the data file rather than repeatedly shrink and auto-grow a file.

Shrinking a data file should be a very rare operation because it has a nasty side effect: it causes massive index fragmentation, which may affect query performance. On this, Paul S. Randal's blog post "Why You Should Not Shrink Your Data Files" includes a simple script that demonstrates the problem.
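
If you do have to give space back once (say, after an unusually large one-off load), a gentler variant is sketched below: the TRUNCATEONLY option releases only the free space at the end of the file without moving any pages, so it does not fragment indexes. The database and logical file names here are illustrative.

USE [VirendraTest];
GO
-- Releases trailing free space to the OS; no page movement, no fragmentation
DBCC SHRINKFILE (N'VirendraTest', TRUNCATEONLY);
GO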

It is always fun and energizing to know some command-line magic, as it is sometimes an easier and more powerful way to process commands. Here I would like to show this type of operation with the Forfiles command in Windows Server 2003 / Windows Vista / Windows 7 / Windows 8 or higher. Forfiles can process files based on names, file extensions, and file life cycle. It makes it very easy to find files matching a search criterion, all files in a directory (optionally including subdirectories) that are older than N days, or all documents in a specific folder that have been changed since a specific date.
The forfiles command can be used with processing options to delete those files, or create a list of all files that match the search criteria.
Forfiles selects and executes a command on a file or set of files, enabling batch processing of files. In conjunction with a SQL Server Agent job, we can easily use this command to delete old database backup files and remove dependencies on items such as SQL Server maintenance plans, the xp_cmdshell extended stored procedure, or VBScript objects. For example, the following command searches the directory c:\test and all its subdirectories for files with the .BAK extension that are older than 10 days:

FORFILES /P C:\TEST\ /M *.BAK /S /D -10

The /P parameter followed by a directory defines the starting directory, /S includes subdirectories in the search, /M filters the files based on the search mask, and /D defines the date or time span. A delete variant is sketched below.
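
To actually delete the matching files rather than just list them, the /C option runs a command for each match; a sketch under the same assumptions (the path and age are illustrative):

REM Delete .BAK files older than 10 days under C:\TEST and its subfolders
FORFILES /P C:\TEST\ /M *.BAK /S /D -10 /C "cmd /c del @path"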
For more details visit http://technet.microsoft.com/en-us/library/cc753551(v=ws.10).aspx#BKMK_examples
One day I had to find out how many CPUs were working on my DB server. Before I knew the options below, I would always take a Remote Desktop session to the server and learn about the CPUs (and much more) from Task Manager.

Option 1:
    Using the extended stored procedure XP_MSVER; at index 16, the processor count is displayed.

EXEC xp_msver
-- or request the single property directly:
EXEC xp_msver 'ProcessorCount'


Option 2:
    Using 'SET NUMBER_OF_PROCESSORS', which runs at the command prompt:

EXEC xp_cmdshell 'SET NUMBER_OF_PROCESSORS'


Option 3: From the registry, using the extended stored procedure XP_REGREAD (note the value is passed via @value_name):

EXEC master..xp_regread
    @rootkey = 'HKEY_LOCAL_MACHINE',
    @key = 'SYSTEM\CurrentControlSet\Control\Session Manager\Environment\',
    @value_name = 'NUMBER_OF_PROCESSORS'
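
As an addition to the three options above: on SQL Server 2005 and later, the same information is exposed by a DMV, which avoids both xp_cmdshell and registry reads. A minimal sketch:

SELECT cpu_count AS [Logical CPUs],
       hyperthread_ratio,
       cpu_count / hyperthread_ratio AS [Sockets]
FROM sys.dm_os_sys_info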

In all versions of SQL Server we face locking and blocking problems, and the very first thing is to know the blocking and the blocked SPID. When a user (or the DBA) notices that a particular process seems to hang or is not processing properly, just checking the applicable database for blocking goes a great deal toward winning the battle. Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock, forcing the second connection to wait until the first connection completes. With this being said, locking is a natural occurrence in SQL Server in order to maintain data integrity.
In SQL Server, there are many ways to find out the blocking and blocked SPID/process; some options are listed below (a sketch using the second option follows the list):
  • sp_who2    System Stored Procedure
  • sys.dm_exec_requests  DMV
  • Sys.dm_os_waiting_tasks   DMV
  • SQL Server Management Studio Activity Monitor
  • SQL Server Management Studio Reports
  • SQL Server Profiler
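As a minimal sketch of the second option, sys.dm_exec_requests lists the currently blocked sessions alongside their blockers:

SELECT r.session_id          AS blocked_spid,
       r.blocking_session_id AS blocking_spid,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0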
For a DBA's day-to-day activities, it sometimes happens that someone executes queries on a production server and, after a long wait, comes to the DBA with a very funny question: "I executed this query and it has been running for a long time; now tell me how much time it will take to complete."
Sometimes that is very difficult to answer, but using the sys.dm_exec_requests DMV we can somehow find out, like:
SELECT percent_complete, start_time, status, command,
estimated_completion_time/1000/60 AS [Minutes to Completion],
total_elapsed_time/1000/60 AS [Minutes Elapsed],
wait_type, last_wait_type
FROM sys.dm_exec_requests
ORDER BY [Minutes to Completion] DESC
Sometimes we need to check whether two tables have the same set of columns.
To demonstrate this, first create two tables, EMP1 and EMP2, as below.
Suppose table EMP1 is:
USE [VirendraTest]

GO

CREATE TABLE [dbo].[Emp1]
(  [ID] [int] IDENTITY(1,1) NOT NULL,
   [Name] [nchar](10) NULL,
   [Basic] [numeric](18, 2) NULL
   ) ON [PRIMARY]
GO

and table EMP2 is:
USE [VirendraTest]

GO

CREATE TABLE [dbo].[Emp2]
(  [ID] [int] IDENTITY(1,1) NOT NULL,
   [Name] [varchar](10) NULL,
   [Basic] [numeric](18, 2) NULL,
   [Age] [int]
   ) ON [PRIMARY]
GO
Now, to find the columns that exist in EMP2 but not in EMP1, we can use the query below:
SELECT c2.table_name, c2.COLUMN_NAME
FROM [INFORMATION_SCHEMA].[COLUMNS] c2
WHERE c2.table_name = 'Emp2'
  AND c2.COLUMN_NAME NOT IN (SELECT column_name FROM [INFORMATION_SCHEMA].[COLUMNS] WHERE table_name = 'Emp1')
In our daily practice it is sometimes required to compare two or more tables to find out whether the tables are the same in terms of their columns' data types.
Below is the query to compare data types between two tables; for this, we can use the [INFORMATION_SCHEMA].[COLUMNS] system view to verify and compare the information.
Let's say we have two tables, EMP1 and EMP2, as below.
Suppose table EMP1 is:
USE [VirendraTest]

GO

CREATE TABLE [dbo].[Emp1]
(  [ID] [int] IDENTITY(1,1) NOT NULL,
   [Name] [nchar](10) NULL,
   [Basic] [numeric](18, 2) NULL
   ) ON [PRIMARY]
GO
and table EMP2 is:
USE [VirendraTest]

GO

CREATE TABLE [dbo].[Emp2]
(  [ID] [int] IDENTITY(1,1) NOT NULL,
   [Name] [varchar](10) NULL,
   [Basic] [numeric](18, 2) NULL,
   [Age] [int]
   ) ON [PRIMARY]
GO
Now, to find the columns with mismatched data types, use the query below:
SELECT c1.table_name, c1.COLUMN_NAME, c1.DATA_TYPE, c2.table_name, c2.DATA_TYPE, c2.COLUMN_NAME
FROM [INFORMATION_SCHEMA].[COLUMNS] c1
LEFT JOIN [INFORMATION_SCHEMA].[COLUMNS] c2 ON c1.COLUMN_NAME = c2.COLUMN_NAME
WHERE c1.TABLE_NAME = 'emp1' AND c2.TABLE_NAME = 'emp2' AND c1.data_type <> c2.DATA_TYPE
I got this error when I was working with one of my DBAs. After lots of basic investigation, I found that this error occurs when anyone tries to attach a database (physical MDF/LDF files) from a folder, or copied from elsewhere, that was previously attached to a different SQL instance (or the same instance when the startup account was different), and that instance's startup account (the SQL Server service account) is not your current startup account.
To resolve this issue you have to give the SQL Server startup account full access to the folder and the physical files it contains. After granting that specific service account full access to those folders/files, you will be able to attach the DB files; a sketch follows.
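For example, assuming the instance runs under the default virtual service account NT SERVICE\MSSQLSERVER and the files sit in D:\Data (both assumptions; substitute your own service account and paths), a sketch from an elevated command prompt:

REM Grant the SQL Server service account full control of the files
icacls "D:\Data\MyDatabase.mdf" /grant "NT SERVICE\MSSQLSERVER":F
icacls "D:\Data\MyDatabase_log.ldf" /grant "NT SERVICE\MSSQLSERVER":F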
It happens: sometimes we delete a maintenance plan from SSMS and, for some reason, it reports success or an error while some objects/steps get deleted and some do not. You may still see the maintenance plan's job name in Job Activity Monitor or the jobs listing, and when you try to delete that job, it throws an error like:
Drop failed for Job 'XXXXXXXXXXXXX'. (Microsoft.SqlServer.Smo) The DELETE statement conflicted with the REFERENCE constraint "FK_subplan_job_id(xxx)". The conflict occurred in database "msdb", table "dbo.sysmaintplan_subplans", column 'job_id'. The statement has been terminated. (Microsoft SQL Server, Error: 547)
We can resolve this error as follows:
  1. Find which plan needs to be deleted:
    SELECT * FROM msdb.dbo.sysmaintplan_plans
  2. Using the query above, get the plan ID of the maintenance plan you want to delete, then:
    DELETE FROM msdb.dbo.sysmaintplan_log WHERE plan_id = '<plan ID of the maintenance plan to delete>'
    DELETE FROM msdb.dbo.sysmaintplan_subplans WHERE plan_id = '<plan ID of the maintenance plan to delete>'
    DELETE FROM msdb.dbo.sysmaintplan_plans WHERE id = '<plan ID of the maintenance plan to delete>'
    Just delete the entries from these 3 tables and be happy, free of such problems!
It sometimes happens that a developer or DBA needs to know a database's owner.
Here are some basic tips to find out the database owner:
SELECT name AS 'Database Name', suser_sname(owner_sid) AS 'Database Owner' FROM sys.databases
-- OR --
SELECT name AS 'Database Name', suser_sname(sid) AS 'Database Owner' FROM sysdatabases
Note: suser_sname returns the login name associated with a security identification number (SID).
There are many options to find out SQL Server's version; some are below:

-- Option 1
SELECT @@VERSION
-- Option 2
SELECT SERVERPROPERTY('ProductVersion') AS 'Version', SERVERPROPERTY('ProductLevel') AS 'Service Pack', SERVERPROPERTY('Edition') AS 'Edition'
-- Option 3
sp_server_info
-- Option 4
xp_msver
-- Option 5
sp_MSgetversion
-- Option 6
SELECT @@MICROSOFTVERSION / POWER(2, 24)
-- Option 7

-- Finding values from the registry, for SQL Server 2005
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='Version'
GO

xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='Edition'
GO

xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup\',
@value_name='SP'
GO

-- Finding values from the registry, for SQL Server 2008
xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='Version'
GO

xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='Edition'
GO

xp_regread
@rootkey='HKEY_LOCAL_MACHINE',
@key='SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\Setup\',
@value_name='SP'
GO
-- Step 1: Check your configuration values at the server level
SP_CONFIGURE 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
GO
-- Step 2: Check the config_value of the "min server memory (MB)" and "max server memory (MB)" options
SP_CONFIGURE 'min server memory (MB)'
SP_CONFIGURE 'max server memory (MB)'
-- Step 3: Now set your preferred amount of min and max memory according to your server's actual RAM.
-- If it is a pure DB box, I prefer an SQL-to-OS ratio of 80:20 (my opinion, not any standard).

SP_CONFIGURE 'min server memory (MB)', 1024
RECONFIGURE WITH OVERRIDE
GO
SP_CONFIGURE 'max server memory (MB)', 26214  -- example value: ~80% of a 32 GB server; use 80% of your total RAM in MB
RECONFIGURE WITH OVERRIDE
GO
The SQL Server Network Interface (SNI) is a protocol layer that establishes the network connection between the client and the server. It consists of a set of APIs that are used by both the database engine and the SQL Server Native Client (SNAC).

SQL Server has support for the following protocols:


Shared memory: Simple and fast, shared memory is the default protocol used to connect from a client running on the same computer as SQL Server. It can only be used locally, has no configurable properties, and is always tried first when connecting from the local machine. The limitation is that the client application must reside on the same machine where SQL Server is installed.


TCP/IP: TCP/IP is the most popular and commonly used protocol throughout the industry today. It communicates across interconnected networks, is a standard for routing network traffic, and offers advanced security features. It enables you to connect to SQL Server by specifying an IP address and a port number. Typically, this happens automatically when you specify an instance to connect to: your internal name resolution system resolves the hostname part of the instance name to an IP address, and either you connect to the default TCP port 1433 for a default instance, or the SQL Browser service finds the right port for a named instance using UDP port 1434.


Named Pipes: This protocol can be used when your application and SQL Server reside on a local area network. A part of memory is used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer). TCP/IP and Named Pipes are comparable protocols in the architectures in which they can be used; however, Named Pipes was developed for local area networks (LANs) and can be inefficient across slower networks such as wide area networks (WANs). To use Named Pipes you first need to enable it in SQL Server Configuration Manager (if you'll be connecting remotely) and then create a SQL Server alias, which connects to the server using Named Pipes as the protocol. Named Pipes uses TCP port 445, so ensure that the port is open on any firewalls between the two computers, including the Windows Firewall.


VIA: Virtual Interface Adapter is a protocol that enables high-performance communication between two systems. It requires specialized hardware at both ends and a dedicated connection. Like Named Pipes, to use the VIA protocol you first need to enable it in SQL Server Configuration Manager and then create a SQL Server alias that connects to the server using VIA as the protocol. This protocol has been deprecated and will no longer be available in future versions of SQL Server.
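
To check which of these protocols an established session actually negotiated, the sys.dm_exec_connections DMV exposes it; a minimal sketch:

SELECT session_id, net_transport, auth_scheme, local_tcp_port
FROM sys.dm_exec_connections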

Regardless of the network protocol used, once the connection is established, SNI creates a secure connection to a TDS endpoint.

TDS Endpoint: a Tabular Data Stream (TDS) endpoint, also known as a TSQL endpoint.

For details about SQL Server endpoints, a very well-explained write-up is http://www.tobuku.com/docs/SQL%20Server%20Endpoint.pdf, thanks to the writer.
Paging of result sets was awaited for a very long time; with SQL Server 2012, Microsoft finally provides the very useful OFFSET and FETCH options.
I found a very explanatory blog post about them by the respected Mr. Andy Hayes; thanks, Andy, for the same.
http://www.dbadiaries.com/new-t-sql-features-in-sql-server-2012-offset-and-fetch/
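
As a quick illustration, a minimal paging sketch (reusing the dbo.Emp1 sample table from other posts, with an assumed page size of 10):

SELECT [ID], [Name]
FROM dbo.Emp1
ORDER BY [ID]
OFFSET 20 ROWS           -- skip the first two pages
FETCH NEXT 10 ROWS ONLY; -- return page 3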
So many times we forget a SQL statement or piece of text we once wrote in some stored procedure, and want to find that stored procedure by name, statement, word, or text.
Here are some options for easily finding a wanted text/string within all stored procedures of the current database.
Option 1:

SELECT * FROM sys.sql_modules WHERE definition LIKE '%Your Text Here%'

Option 2:

SELECT * FROM sys.syscomments WHERE text LIKE '%Your Text Here%'

Option 3:

SELECT * FROM syscomments WHERE text LIKE '%Your Text Here%'

Option 4:

SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE '%Your Text Here%'

Using SELECT * FROM TableName in your PRODUCTION code or stored procedures is problematic, because if a column is added or deleted you can expect a sure bug.

It is generally advisable not to use SELECT * because the definition of the table may change over time. For example, a table may have a column added, removed, renamed, or repositioned relative to other columns. Such changes can result in SQL queries returning data that is never used, attempting to implicitly access data that does not exist, implicitly relying upon a column with a given name, or relying upon the relative ordering of the table's columns. While explicitly listing only the columns one needs does not prevent problems when a column is removed or renamed, it does prevent problems when a column is added or repositioned, and it makes it easier to find a problematic query when one can search on the text of the column name.

Let's see an example here.

---- Let's create a test table.

CREATE TABLE [dbo].[VirendraTest](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](50) NOT NULL,
    [Age] [int] NULL
)
GO

---- Let's insert some data into this table.

INSERT INTO VirendraTest VALUES
('Naresh', 40),
('AShok', 42),
('Ashish', 27),
('Paul', 35),
('Smitha', 29)
GO

---- Now create a view on this table

CREATE VIEW VWTest
AS
SELECT * FROM VirendraTest
GO

---- See the data as per the view

SELECT * FROM VWTest

---- Now add a new column to the table / reposition the columns / drop any column etc.

ALTER TABLE VirendraTest ADD Gender varchar(1) DEFAULT 'M'

---- See the data as per the view ... ha ha ha ... it is not as per the table.

SELECT * FROM VWTest

---- But in the table it is:

SELECT * FROM VirendraTest

First of all, SP_WHO is a documented MS SQL Server command while SP_WHO2 is an undocumented one. SP_WHO2 provides more detailed information about the server's running processes:
SPID, Status, Login, HostName, BlkBy, DBName, Command, CPUTime, DiskIO, LastBatch, ProgramName, SPID, REQUESTID



While SP_WHO provides details as:
spid, ecid, status, loginame, hostname, blk, dbname, cmd, request_id
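
Since SP_WHO2 is just a stored procedure, its output can be captured into a temp table and filtered; a sketch (column names per the list above, column types assumed):

CREATE TABLE #Who2 (
    SPID int, Status varchar(50), Login nvarchar(128), HostName nvarchar(128),
    BlkBy varchar(10), DBName nvarchar(128), Command nvarchar(128),
    CPUTime int, DiskIO bigint, LastBatch varchar(30),
    ProgramName nvarchar(256), SPID2 int, REQUESTID int)
INSERT INTO #Who2 EXEC sp_who2
-- Only the sessions blocked by another SPID (BlkBy holds a SPID number when blocked)
SELECT * FROM #Who2 WHERE ISNUMERIC(BlkBy) = 1
DROP TABLE #Who2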
Hi Guys, as a DBA it has happened so many times: you start your database backup from a remote PC/server (or via your database backup maintenance plan) and you want to know the backup's progress status.
It is possible using system SPs/DMVs; the query is:
SELECT A.NAME 'DATABASE NAME', B.TOTAL_ELAPSED_TIME/60000 AS 'RUNNING SINCE (Minutes)',
B.ESTIMATED_COMPLETION_TIME/60000 AS 'REMAINING TIME (Minutes)',
B.PERCENT_COMPLETE AS 'BACKUP % COMPLETED', B.COMMAND 'COMMAND TYPE',
(SELECT TEXT FROM sys.dm_exec_sql_text(B.SQL_HANDLE)) AS 'COMMAND TEXT',
UPPER(B.STATUS) STATUS, B.SESSION_ID 'SPID', B.START_TIME, B.BLOCKING_SESSION_ID, B.LAST_WAIT_TYPE, B.READS, B.LOGICAL_READS, B.WRITES
FROM MASTER..SYSDATABASES A, sys.dm_exec_requests B
WHERE A.DBID = B.DATABASE_ID AND B.COMMAND LIKE '%BACKUP%'
ORDER BY PERCENT_COMPLETE DESC, B.TOTAL_ELAPSED_TIME/60000 DESC

SQL Error: Exclusive access could not be obtained because the database is in use
One day I got this error. My log shipping had stopped due to a system fault; after lots of exercise and googling, I finally decided to take a full backup and restore it to the secondary box. But by accident, in between the R&D, I stopped the log shipping restoration job; because of this the database was not accessible (suspect mode) and the process was in a KILLED/ROLLBACK state in the SP_WHO2 output.
The reason was very simple: the DB was being used by the killed/rollback process, which is why the DB was in use.
There are many options to kill this rollback operation, like killing the associated KPID at the OS level or restarting the server. (Here I was not afraid of any data loss, because I was going to restore onto this box and my backup was ready.)
Another way is to put the DB into single-user mode using syntax like:
USE MASTER
ALTER DATABASE yourdatabasename SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Then put it back into multi-user mode and start the restore process:
USE MASTER
ALTER DATABASE yourdatabasename SET MULTI_USER
Other options are:
  1. Find all the active connections, kill them all, and restore the database.
  2. Take the database offline, because this will close all the open connections to the database, bring it back online, and restore the database (a sketch follows).
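A minimal sketch of the second option (yourdatabasename is a placeholder):

USE MASTER
ALTER DATABASE yourdatabasename SET OFFLINE WITH ROLLBACK IMMEDIATE
ALTER DATABASE yourdatabasename SET ONLINE
-- now run your RESTORE DATABASE ... WITH REPLACE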



 
Hi Guys, one day I had to find all database objects like user tables, procedures & views.
Below is the code to find object information like ObjectName, Type, Create_date, Modify_date, Rows, Column_Count, Data_Size, Index_Size etc. Note that a USE executed via EXEC only lasts for that dynamic batch, so the per-database collection runs entirely inside the dynamic SQL.
/* Create table to collect data from all databases */
CREATE TABLE #VirendraTest (
    TableName varchar(128),
    ObjectType varchar(50),
    CreateDate datetime,
    ModifyDate datetime,
    RowsCount bigint,
    ColumnCount int,
    DataSize varchar(50),
    IndexSize varchar(50))
/* Process each individual database */
DECLARE @sqltxt nvarchar(max)
DECLARE @dbname sysname
DECLARE VirendraCur CURSOR FOR SELECT name FROM sys.databases WHERE database_id > 4
OPEN VirendraCur
FETCH NEXT FROM VirendraCur INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @dbname
    /* USE only persists inside the EXEC batch, so the whole collection runs there */
    SET @sqltxt = 'USE [' + @dbname + '];
    CREATE TABLE #Virendratemp (
        table_name sysname,
        row_count int,
        reserved_size varchar(50),
        data_size varchar(50),
        index_size varchar(50),
        unused_size varchar(50));
    INSERT #Virendratemp EXEC sp_msforeachtable ''sp_spaceused "?"'';
    INSERT #VirendraTest
    SELECT aa.*, bb.Col_count, bb.Data_size, bb.index_size
    FROM (SELECT ob.name, ob.type_desc AS [Type], ob.create_date, ob.modify_date, pt.rows
          FROM sys.objects ob
          LEFT JOIN sys.partitions pt ON ob.object_id = pt.object_id
          WHERE ob.type IN (''U'', ''V'', ''P'')) aa
    LEFT JOIN (SELECT a.table_name, a.row_count, COUNT(*) AS Col_count, a.data_size, a.index_size
               FROM #Virendratemp a
               INNER JOIN information_schema.columns b
                   ON a.table_name COLLATE database_default = b.table_name COLLATE database_default
               GROUP BY a.table_name, a.row_count, a.data_size, a.index_size) bb
        ON aa.name = bb.table_name;
    DROP TABLE #Virendratemp;'
    EXEC (@sqltxt)
    FETCH NEXT FROM VirendraCur INTO @dbname
END
CLOSE VirendraCur
DEALLOCATE VirendraCur
/* Display the collected data */
SELECT * FROM #VirendraTest
/* Drop the temp table */
DROP TABLE #VirendraTest

Statistics Details in SQL Server

Posted: November 8, 2011 in SQL General
SQL Server tries to use statistics to react intelligently in its query optimization. Knowing the number of records, the density of pages, the histogram, or the available indexes helps the SQL Server optimizer guess more accurately how it can best retrieve data. A common misconception is that if you have indexes, SQL Server will always use those indexes to retrieve the records in your query.
Below is a query using DMVs to find out statistics details for the current database's tables.
SELECT OBJECT_NAME(A.OBJECT_ID) AS Object_Name,
       A.name AS Index_Name,
       A.type_desc AS Index_Type,
       STATS_DATE(A.OBJECT_ID, A.index_id) AS StatsUpdated,
       DATEDIFF(d, STATS_DATE(A.OBJECT_ID, A.index_id), GETDATE()) AS DaysOld
FROM sys.indexes A
INNER JOIN sys.tables B ON A.object_id = B.object_id
WHERE A.name IS NOT NULL
ORDER BY DATEDIFF(d, STATS_DATE(A.OBJECT_ID, A.index_id), GETDATE()) DESC
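
When the DaysOld column shows statistics have gone stale, they can be refreshed manually; a minimal sketch (the table name is illustrative, and FULLSCAN reads the entire table, so schedule it with care):

UPDATE STATISTICS dbo.Emp1 WITH FULLSCAN
-- or refresh all statistics in the current database:
EXEC sp_updatestats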



As we know, by default tempdb is placed on the same drive where the SQL Server instance is installed. Sometimes that disk runs very tight on space, or its I/O is very slow or not performing up to the mark. The solution is to move tempdb to a spacious and faster drive and improve performance; follow these steps:


1. Open SQL Server Management Studio:

Click Start -> Programs -> Microsoft SQL Server 2005 / Microsoft SQL Server 2008 -> SQL Server Management Studio

2. Connect to the desired server.

3. Click the New Query button.

4. Copy and paste the following into the query pane to check the tempdb location:

SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO

5. Now, suppose you want to move tempdb to D:\SQLData; simply copy and paste the following into the query pane:

USE master
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\templog.ldf');
GO

6. Click Execute. (The change takes effect only after the service restarts, which is why the next steps restart SQL Server.)

7. Now we have to restart the SQL Server service, via Start -> Run -> Services.msc

Or

Start -> Programs -> Microsoft SQL Server 2005 / Microsoft SQL Server 2008 -> Configuration Tools -> SQL Server Configuration Manager

8. Stop and start SQL Server (MSSQLSERVER).

9. Go back to SQL Server Management Studio and open a new query pane.

10. Copy and paste the following to verify that tempdb has moved to the new location:

SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO

11. Click Execute.

12. In the physical_name column, you should see the path to the new location.


SQL Server restart time

Posted: September 21, 2011 in SQL General
-- 1) From the tempdb creation date, using sys.databases
SELECT create_date FROM sys.databases WHERE database_id = 2
-- 2) From the tempdb creation date, using sys.sysdatabases
SELECT crdate FROM sys.sysdatabases WHERE dbid = 2
-- 3) From sysprocesses
SELECT login_time FROM master..sysprocesses WHERE cmd = 'LAZY WRITER'
-- 4) From sysprocesses
SELECT MIN(login_time) FROM master..sysprocesses
-- 5) From sysprocesses
SELECT MIN(login_time) FROM sys.sysprocesses
-- 6) From the sp_readerrorlog SP
sp_readerrorlog 0, 1, 'Copyright (c)'
-- 7) From sys.dm_os_sys_info
SELECT sqlserver_start_time FROM sys.dm_os_sys_info
-- 8) From sys.dm_exec_sessions
SELECT login_time FROM sys.dm_exec_sessions WHERE session_id = 1
-- 9) From the default trace
SELECT start_time FROM sys.traces WHERE is_default = 1
-- 10) From sp_helpdb
sp_helpdb 'tempdb'

USE VirendraTest
GO
CREATE TABLE #Virendratemp (
table_name sysname ,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))
SET NOCOUNT ON
INSERT #Virendratemp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
SELECT a.Table_Name,
a.Row_count,
COUNT(*) AS Col_count,
a.Data_size
FROM #Virendratemp a
INNER JOIN information_schema.columns b
ON a.table_name collate database_default
= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC
DROP TABLE #Virendratemp
In 1988, Microsoft released its first version of SQL Server. It was developed jointly by Microsoft and Sybase for the OS/2 platform.
• 1993 – SQL Server 4.21 for Windows NT
• 1995 – SQL Server 6.0, codenamed SQL95
• 1996 – SQL Server 6.5, codenamed Hydra
• 1999 – SQL Server 7.0, codenamed Sphinx
• 1999 – SQL Server 7.0 OLAP, codenamed Plato
• 2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
• 2003 – SQL Server 2000 64-bit, codenamed Liberty
• 2005 – SQL Server 2005, codenamed Yukon (version 9.0)
• 2008 – SQL Server 2008, codenamed Katmai (version 10.0)
• 2010 – SQL Server 2008 R2, Codenamed Kilimanjaro (aka KJ)
• 2012 – SQL Server 2012, codenamed Denali (version 11.0)