Thursday, January 27, 2011

SQL Server: Table Variables Are Created In Memory Or In Tempdb

In response to the earlier post How to Create Different Types of Tables, a reader asked:


“Is it true that temp table are created in Tempdb but Table variables are created only in memory and because of this, table variables are more efficient as compared to temp tables”


My answer is that it’s just a misconception that table variables are created in memory; the truth is that both temporary tables and table variables are created in tempdb. Pinal Dave and Ken Simmons have already demonstrated this well, so I suggest reading their posts for further clarification.

SQL SERVER – Difference TempTable and Table Variable – TempTable in Memory a Myth By Pinal Dave

Yes, Table Variables and Temp Tables both use the tempdb By Ken Simmons
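You can also see this for yourself. The sketch below declares a table variable and then lists the most recently created table in tempdb; the system-generated hex name shown in the comment is illustrative, as SQL Server assigns it internally.

```sql
-- Declaring a table variable creates a real object in tempdb.
-- Run both statements as a single batch, then inspect tempdb's tables.
DECLARE @TestVar TABLE ([TestTableID] [int] NOT NULL);

SELECT TOP 1 name, create_date
FROM   tempdb.sys.tables
ORDER  BY create_date DESC;  -- shows a hex-named object such as #B1A2C3D4
```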

Wednesday, January 26, 2011

SQL Server Basics: How to Create Different Types of Tables

Most of the time we forget to write for beginners, or for those who only occasionally use SQL Server. Perhaps it is because we want to impress others with our knowledge, and as a result we avoid writing on simple topics aimed at beginners.

Today we will discuss the different types of tables that can be created for different purposes.

Regular User Tables
A regular user table is the actual table that holds user data for later processing and reporting. These are also called physical tables, as they physically reside on disk until you intentionally DROP them.

CREATE TABLE [dbo].[TestTable]
(
      [TestTableID] [int] NOT NULL,
      [FirstCol] [varchar](200) NULL,
      [SecondCol] [int] NULL
)
ON  [PRIMARY] --  This part indicates where (which database filegroup) the table will be created physically

Temporary Tables
Temporary tables are created to hold intermediate results of different queries. They are dropped automatically once the stored procedure that created them finishes (if they are used in a stored procedure) or once the session ends. As good programming practice, however, you should drop these tables explicitly once they are no longer required.

CREATE TABLE #Temp_TestTable
(
      [TestTableID] [int] NOT NULL,
      [FirstCol] [varchar](200) NULL,
      [SecondCol] [int] NULL
)
-- DROP TABLE #Temp_TestTable --(Drop temporary table when not required)

Global Temporary Tables
These are just like local temporary tables, but they are visible to all sessions and are dropped automatically only when the last session referencing them is closed. As long as even a single session is still using them, global temporary tables remain available.

CREATE TABLE ##GTemp_TestTable
(
      [TestTableID] [int] NOT NULL,
      [FirstCol] [varchar](200) NULL,
      [SecondCol] [int] NULL
)

-- DROP TABLE ##GTemp_TestTable      
--(Drop global temporary table when not required)

Those are three types of tables that can be created in SQL Server. Now let’s look at a few tricks involving tables.

Table Cloning
An existing regular, temporary, or global temporary table can be cloned (its structure as well as its data). The following statement creates a new table with exactly the same structure as an existing one.
SELECT  *
INTO    [dbo].[NewTestTable]
FROM    [dbo].[TestTable]
WHERE   1 = 2     -- Remove the WHERE clause if you want to copy both structure and data

Inserting Data on the Basis of a Stored Procedure Result
A table can be populated with data from the result set of a stored procedure. (On older versions of SQL Server, table variables could not be populated in this fashion; SQL Server 2005 and later allow INSERT ... EXEC into table variables as well.)
INSERT INTO [YourTableName](CommaSeparatedColumnsName)
 EXECUTE YourStoredProcedureNameHere CommaSeparatedParameterValues

Table Variables
Table variables are just like scalar variables, except that they possess the structure of a table and can hold records for the intermediate results of different queries. They are a convenient alternative to temporary tables, as there is no need to worry about cleaning them up after use; they go out of scope automatically at the end of the batch.

DECLARE @TestTable TABLE
(
      [TestTableID] [int] NOT NULL,
      [FirstCol] [varchar](200) NULL,
      [SecondCol] [int] NULL
)

Tuesday, January 25, 2011

SQL Server: Simple Method to Resolve All Indexes Fragmentation

Recently a blog reader asked for a script or stored procedure that can resolve fragmentation across all indexes, since manually inspecting each index in a database for its level of fragmentation and then reorganizing or rebuilding it as required is a rather painful task.
Here is a stored procedure I like to use for this purpose. I normally execute it through an automated job during off-peak hours.

CREATE PROCEDURE dbo.Proc_IndexDefragmentation
AS
    DECLARE @DBName NVARCHAR(255),
            @TableName NVARCHAR(255),
            @SchemaName NVARCHAR(255),
            @IndexName NVARCHAR(255),
            @PctFragmentation DECIMAL,
            @Defrag NVARCHAR(MAX)

    IF EXISTS ( SELECT  *
                FROM    sys.objects
                WHERE   OBJECT_ID = OBJECT_ID(N'#Frag') )
        DROP TABLE #Frag

    CREATE TABLE #Frag
        ( DBName NVARCHAR(255),
          TableName NVARCHAR(255),
          SchemaName NVARCHAR(255),
          IndexName NVARCHAR(255),
          AvgFragment DECIMAL )

    EXEC sp_msforeachdb 'INSERT INTO #Frag(DBName, TableName, SchemaName, IndexName, AvgFragment)
            SELECT ''?'' AS DBName
            ,t.Name AS TableName
            ,sc.Name AS SchemaName
            ,i.Name AS IndexName
            ,s.avg_fragmentation_in_percent
            FROM ?.sys.dm_db_index_physical_stats(DB_ID(''?''),NULL,NULL,NULL,''Sampled'') AS s
            JOIN ?.sys.indexes i
            ON s.Object_Id = i.Object_Id
            AND s.Index_id = i.Index_id
            JOIN ?.sys.tables t
            ON i.Object_Id = t.Object_Id
            JOIN ?.sys.schemas sc
            ON t.schema_id = sc.Schema_Id
            WHERE s.avg_fragmentation_in_percent > 20
            AND t.TYPE = ''U''
            AND s.page_count > 8
            ORDER BY TableName,IndexName'

    DECLARE cList CURSOR
        FOR SELECT  *
            FROM    #Frag

    OPEN cList
    FETCH NEXT FROM cList INTO @DBName, @TableName, @SchemaName, @IndexName,
        @PctFragmentation
    WHILE @@FETCH_STATUS = 0
        BEGIN
            IF @PctFragmentation BETWEEN 20.0 AND 40.0
                BEGIN
                    SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON '
                        + @DBName + '.' + @SchemaName + '.[' + @TableName
                        + '] REORGANIZE'
                    EXEC sp_executesql @Defrag
                    PRINT 'Reorganize index: ' + @DBName + '.' + @SchemaName
                        + '.' + @TableName + '.' + @IndexName
                END
            ELSE IF @PctFragmentation > 40.0
                BEGIN
                    SET @Defrag = N'ALTER INDEX ' + @IndexName + ' ON '
                        + @DBName + '.' + @SchemaName + '.[' + @TableName
                        + '] REBUILD'
                    EXEC sp_executesql @Defrag
                    PRINT 'Rebuild index: ' + @DBName + '.' + @SchemaName
                        + '.' + @TableName + '.' + @IndexName
                END

            FETCH NEXT FROM cList INTO @DBName, @TableName, @SchemaName,
                @IndexName, @PctFragmentation
        END

    CLOSE cList
    DEALLOCATE cList

    DROP TABLE #Frag

Another cleanly written stored procedure for this automated defragmentation process, by Tara Kizer, may also be helpful. You can find it on Tara Kizer's blog.


I have seen many database developers use UNION when they need to merge the result sets of two queries, even when they don’t need unique rows in the output. The UNION clause combines the result sets of the two SELECT statements and removes duplicates from the final result set, effectively running a DISTINCT over the combined output.

Though the query optimizer is smart enough that, when it detects that both SELECT queries already return distinct rows, it will choose the same execution plan a UNION ALL operation would, as good query designers we should still use UNION ALL instead of UNION when:
  • Duplicate rows in the final result set are acceptable
  • The result sets of the SELECT statements participating in the UNION are mutually exclusive
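A quick way to see the difference is with two trivial queries that return the same row:

```sql
-- UNION removes duplicates from the combined result (extra work),
-- while UNION ALL simply appends the second result set.
SELECT 1 AS Val
UNION
SELECT 1 AS Val;      -- returns a single row

SELECT 1 AS Val
UNION ALL
SELECT 1 AS Val;      -- returns two rows
```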

Friday, January 21, 2011

SQL Server: How to Get Physical Path of Tables and Indexes

When a database consists of multiple data files, objects (tables and indexes) are dispersed across those files. A common requirement is to get a list of objects (tables, indexes) along with their physical paths. Here is a simple query to accomplish this task.
SELECT  'table_name' = OBJECT_NAME(i.id),
        'index_name' = i.name,
        'filegroup' = f.name,
        'file_name' = d.physical_name,
        'dataspace' = s.name
FROM    sys.sysindexes i,
        sys.filegroups f,
        sys.database_files d,
        sys.data_spaces s
WHERE   OBJECTPROPERTY(i.id, 'IsUserTable') = 1   -- limit to user tables
        AND f.data_space_id = i.groupid
        AND f.data_space_id = d.data_space_id
        AND f.data_space_id = s.data_space_id

Thursday, January 20, 2011

SQL Server: How to Insert Stored Procedure Result Set in a Table

A table can be populated with data from the result set of a stored procedure. The method applies to regular, temporary, and global temporary tables. (On older versions of SQL Server, table variables could not be populated in this fashion; SQL Server 2005 and later allow INSERT ... EXEC into table variables as well.)

INSERT INTO [YourTableName](CommaSeparatedColumnsName)
 EXECUTE YourStoredProcedureNameHere CommaSeparatedParameterValues

 Note: The number of columns in the target table must match the stored procedure's result set, and the datatypes must be compatible.
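As a concrete sketch, the output of the system procedure sp_who can be captured into a temporary table whose columns mirror its result set (column names and types below follow sp_who's output on SQL Server 2008; adjust them for your version):

```sql
-- Create a temp table shaped like sp_who's result set ...
CREATE TABLE #WhoResult
    ( spid SMALLINT, ecid SMALLINT, [status] NCHAR(30),
      loginame NCHAR(128), hostname NCHAR(128),
      blk CHAR(5), dbname NCHAR(128), cmd NCHAR(16),
      request_id INT );

-- ... and fill it from the procedure's output.
INSERT INTO #WhoResult
EXEC sp_who;

SELECT * FROM #WhoResult;
DROP TABLE #WhoResult;
```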

Wednesday, January 19, 2011

SQL Server: Function Based Check Constraint

Check constraints are used to apply business logic. Such checks can easily and effectively be managed on the application side, but if you need to enforce complex business logic as a check constraint, you can use a user-defined function for the purpose. Let’s first create a function to reject any address entry from Afghanistan (apologies to the Taliban ;) )
USE AdventureWorks
GO
CREATE FUNCTION dbo.fnc_RestrictedAddress
      ( @Address NVARCHAR(60) )
RETURNS BIT
AS
BEGIN
      DECLARE @ResultBit BIT = 1

      IF @Address LIKE '%Afghanistan%'
            SELECT @ResultBit = 0

      RETURN @ResultBit
END
GO


Open the table in design view, right-click anywhere in the designer, click “Check Constraints”, then click the “Add” button. Move to the expression field and enter the same expression used in the T-SQL below.

Alternatively, you can apply the check constraint to the desired table with the following T-SQL.
ALTER TABLE [Person].[Address]  WITH NOCHECK ADD  CONSTRAINT [CK_Address] CHECK  (([dbo].[fnc_RestrictedAddress]([AddressLine1])=(1)))

Let’s test the constraint by inserting a new record into the “Address” table (values for the columns other than AddressLine1 are illustrative):
USE AdventureWorks
GO
INSERT INTO [AdventureWorks].[Person].[Address]
           ([AddressLine1], [City], [StateProvinceID], [PostalCode])
     VALUES
           ('Zahir Shah Road, Kabul, Afghanistan', 'Kabul', 1, '1001')

Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the CHECK constraint "CK_Address". The conflict occurred in database "AdventureWorks", table "Person.Address", column 'AddressLine1'.
The statement has been terminated.

Tuesday, January 18, 2011

SQL SERVER: 3 Methods to Handle NULL for String Concatenation

If any parameter or value used in a string concatenation is NULL, the whole result becomes NULL.
DECLARE @FirstVal VARCHAR(10),
        @SecondVal VARCHAR(10),
        @ThirdVal VARCHAR(10),
        @ForthVal VARCHAR(10)

SELECT @FirstVal = 'First', @ThirdVal = 'Third'
SELECT @FirstVal + @SecondVal + @ThirdVal   -- returns NULL because @SecondVal is NULL

The following three commonly used methods solve this problem.
1.  Using ISNULL()
It takes two parameters: the original value for which you want to handle NULL, and an alternate value or parameter that replaces it when it is NULL.
SELECT @FirstVal + ISNULL(@SecondVal,'') + @ThirdVal AS ConcatenationResult
2.  Using COALESCE()
COALESCE() is a more flexible function than ISNULL(). It is useful when the alternate value for a NULL may itself be NULL and you need to provide a second alternate, a third alternate, and so on. COALESCE() returns the first non-null expression among its arguments.
SELECT @FirstVal +  COALESCE(@SecondVal,@ForthVal,'') + @ThirdVal AS ConcatenationResult
3.  Using SET CONCAT_NULL_YIELDS_NULL OFF
When you don’t want to use ISNULL() or COALESCE() for each of your parameters in long statements (a lengthy exercise when handling dozens of parameters in a single batch or stored procedure), you can set CONCAT_NULL_YIELDS_NULL to OFF. Every NULL in a concatenation is then treated as an empty string.
      SET CONCAT_NULL_YIELDS_NULL OFF
      SELECT @FirstVal + @SecondVal + @ThirdVal AS ConcatenationResult

Keep in mind that Microsoft has already warned that in a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will be fixed to ON, and applications will raise an error if the option is set to OFF. For now, though, you can still use it on SQL Server 2008 R2 and hopefully on Denali.