Jack Graber, Author at DawaBI


About Jack Graber

Jack Graber is a Consultant at DawaBI, a Data Warehouse and Business Intelligence Consultancy firm focusing on Making Data-Driven Decisions Possible.

A Simple TSQL Pivot

I've always had trouble writing a TSQL PIVOT statement without going back to Books Online to figure it out.  Maybe it's because I haven't done it enough, but it always seems a bit convoluted.  That meant it was time to write about it and break it down so I could make sense of it.  Let's set up the demo data first.  Based on the data I was pivoting, I wrote up some simple demo data.  It has a name, an objective, and a result for that objective.  There is one record per objective, and the pivot should line up those objectives so there is one row per name instead of one row per objective.

DECLARE @BaseData table (
    Name       varchar(10)
   ,Objective  varchar(5)
   ,Result     varchar(15)
)

INSERT @BaseData
SELECT 'Bob','Obj1','Complete'
UNION ALL SELECT 'Bob','Obj2','Pending'
UNION ALL SELECT 'Lisa','Obj1','Pending'
UNION ALL SELECT 'Lisa','Obj2','Pending'
UNION ALL SELECT 'Don','Obj1','Complete'
UNION ALL SELECT 'Seth','Obj1','Complete'
UNION ALL SELECT 'Seth','Obj2','Complete'
UNION ALL SELECT 'Megan','Obj1','Pending'
UNION ALL SELECT 'Megan','Obj2','Complete'
Now to work on understanding the pivot.  Here's the finished PIVOT statement for reference:

PIVOT(MAX(Result) for Objective in ([OBJ1],[OBJ2])) as ResultPivot

The [Result] column holds the data that needs to be pivoted.  The first stumbling point of the pivot is that the column being pivoted has to be wrapped in some kind of aggregate.  If you are dealing with numerical data it can be MIN, MAX, AVG, etc.  With string data, which is what I was dealing with, the easiest is MIN or MAX.  Since there is only one result per objective in this data, the MAX will be that value.

The next part of the pivot is naming the columns that the [Result] data will be pivoted into.  I think of this as telling the PIVOT command what the key to the pivoted data is.  In this case there are consistent values in the Objective column: Obj1 and Obj2.  Not all names in the data have both an Obj1 and an Obj2, which is fine, as you will see.  If you don't have a consistent value like this, I could see using a derived column like ROW_NUMBER() or something similar to give the data values to create columns on.
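As a sketch of that ROW_NUMBER() idea: if the Objective values were not consistent, a derived key could be generated before the pivot.  The cte_Keyed and ObjKey names here are just for illustration:

;WITH cte_Keyed
     AS ( SELECT Name
                 , Result
                 , 'Obj' + CAST(ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Objective) AS varchar(5)) AS ObjKey
          FROM   @BaseData )
SELECT Name, Obj1, Obj2
FROM   cte_Keyed
PIVOT(MAX(Result) for ObjKey in ([Obj1],[Obj2])) as ResultPivot

Each name's objectives get numbered 1, 2, and so on, which gives the PIVOT a consistent set of values to build columns on.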

After naming the columns that the pivoted data will go into, the entire statement needs an alias.  It won't necessarily be used in the SELECT statement, but the PIVOT statement will throw errors if it is not aliased.

Here is how the PIVOT is worked into the query. 

SELECT Name, OBJ1 as Objective1, OBJ2 as Objective2
FROM @BaseData
PIVOT(MAX(Result) for Objective in ([OBJ1],[OBJ2])) as ResultPivot

Notice in the SELECT statement the columns from the pivot get their names from the PIVOT statement.  I did this because I wanted to alias the column names to be more descriptive.  If you get rid of the column names and just use SELECT *, it will work too.  If you put the [Result] column in the columns to SELECT, the statement will error with 'Invalid Column Name.'  The [Result] column basically no longer exists once the data is pivoted.  When the query is run (with the table variable above to create the data) the results will look like this (row order may vary):

Name    Objective1   Objective2
Bob     Complete     Pending
Don     Complete     NULL
Lisa    Pending      Pending
Megan   Pending      Complete
Seth    Complete     Complete
If you look at Don's record you will see that Objective2 is NULL.  That objective does not exist for him.  If hundreds or thousands of records are being pivoted and a flag needs to be created when objective records are missing, it's simple: the objective columns can be used in the query just like normal columns.  In the query below you can see how a CASE statement was added to flag the people with missing records.

SELECT Name
      ,OBJ1 as Objective1
      ,OBJ2 as Objective2
      ,CASE
           WHEN OBJ1 is NULL THEN 'TRUE'
           WHEN OBJ2 is NULL THEN 'TRUE'
           ELSE ''
       END as ErrorFlag
FROM @BaseData
PIVOT(MAX(Result) for Objective in ([OBJ1],[OBJ2])) as ResultPivot

The results of this query will have an additional column called ErrorFlag with a value of TRUE if Objective1 or Objective2 is NULL.  This makes it easy to point out those records when analyzing the data.


Another way of making the NULL data stand out (or hiding it, if the NULLs don't matter at all) is to wrap OBJ1 and OBJ2 in an ISNULL.  That can be used to substitute an empty string or some value that indicates a problem with the data.
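For instance, the SELECT list from the pivot query could swap each NULL for a marker value; the 'Missing' text here is just an example:

SELECT Name
      ,ISNULL(OBJ1, 'Missing') as Objective1
      ,ISNULL(OBJ2, 'Missing') as Objective2
FROM @BaseData
PIVOT(MAX(Result) for Objective in ([OBJ1],[OBJ2])) as ResultPivot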

Hopefully, this helps out.  I’m sure the next time I write a PIVOT I’ll have to dig out this posting and skim through it real quick to remember the details.

By |July 8th, 2014|TSQL|

SQL 2014 Natively Compiled Stored Procedures: Unsupported TSQL by In-Memory OLTP

One of the major new features of Microsoft SQL Server 2014 is In-Memory OLTP.  If you have not heard of it, it boils down to this: tables can be stored completely in memory, and all operations against a table and its data occur in memory.  This gives SQL Server the ability to work with data without the I/O waits that are present with disk-based tables.  In versions of SQL Server prior to 2014, stored procedures were strictly interpreted.  With the in-memory tables in SQL Server 2014, the ability to have natively compiled stored procedures was added.  The advantage this presents is better performance when stored procedures are run against in-memory tables.  As with anything, there are caveats that go along with it.  Natively compiled stored procedures will not run against disk-based tables, and there are limitations on what TSQL code you can use in them.  Interpreted stored procedures will still run against an in-memory table, but they have to do so through a query interop.

The part of the limitations that jumped out at me was the restrictions on TSQL.  There is definitely going to be a learning curve, because some very common TSQL features are not supported.  The first thing that jumped out was that cursors are not supported! (Yay!)  I know this will cause problems for some people, but I've always gone out of my way to find ways around using cursors.  Temp tables are not supported in natively compiled stored procedures since they are created on disk, and natively compiled stored procedures cannot interact with tables stored on disk.  This will require some work, but it can be overcome by using table variables that have a defined table type.  There is also the option of defining a SCHEMA_ONLY in-memory table to use as a temp table.  This means the table structure is durable, but not the data within it.  When the SQL Server instance is shut down, any data within the SCHEMA_ONLY table(s) is gone, but the table structure is re-created when the server starts.
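As a sketch, a SCHEMA_ONLY table used as a temp-table stand-in could look like the following.  The table and column names are made up, and the database needs a memory-optimized filegroup before this will run:

CREATE TABLE dbo.SessionStage (
    SessionID int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
   ,Payload   varchar(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)

The DURABILITY = SCHEMA_ONLY option is what makes only the structure survive a restart, not the rows.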

Here are some of the features I consider commonly used that are not supported in a natively compiled stored procedure, along with possible workarounds.

Cursors: set-based logic or WHILE loops.
Temp tables: table variables (not inline) or an in-memory table created as SCHEMA_ONLY.
Inline table variables: table variables can still be used, but the table type has to be created as a user-defined table type.
Common Table Expressions (CTE): re-write the query to not use a CTE.
Subqueries (nested queries): re-write the query to not use a subquery.
User Defined Functions (UDF): the logic from the UDF has to be built into the stored procedure.
EXECUTE or INSERT EXEC: include the logic from the external stored procedure.
OR, IN, and CASE: create separate queries for each case.
NOT: this is just a change in syntax.  Use != instead.
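To show the shape these restrictions leave you with, here is a minimal natively compiled stored procedure skeleton.  The procedure and table names are hypothetical; note the required NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and BEGIN ATOMIC options, and the != in place of NOT:

CREATE PROCEDURE dbo.usp_GetItems
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT ItemID, ItemName
    FROM   dbo.InMemoryItems
    WHERE  ItemID != 0
END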


For a full list of unsupported TSQL features in Natively Compiled Stored Procedures go here:  http://msdn.microsoft.com/en-us/library/dn246937(v=sql.120).aspx

Also, here is the white paper on In-Memory OLTP if you want to dig deeper:


By |February 4th, 2014|TSQL|

Using DML against a Common Table Expression (CTE)

In my previous life I did support, both in a direct customer-support role and most recently in a backline-type support role.  In that backline role I did a lot of data manipulation (updates and deletes) trying to get customers' data corrected.  Something was always wrong, through their doing or ours.  (Mostly theirs, because we never made mistakes, right? ;) )  One of the greatest tools I found was the Common Table Expression.  I love CTEs!  They were great for hunting down subsets of data, especially when I had to aggregate data somehow.  Once I had the data I needed, I could write updates or deletes directly against that data.

For example, let's pretend that person data was imported into a table.  Someone realized after the import was complete that the same phone numbers were imported for multiple people, so those phone numbers need to be updated.  They don't know which phone numbers are duplicated for which people, so they need a list of all the people in the table with the same phone number.  In this list they need FirstName, LastName, and PhoneNumber.  (To keep it simple everyone has one phone number; pretend it's 1980.)  Because adding the first and last name to the query would create unique values (since you have to group by everything in the select list), only the phone number can be used at first.  This is a simple query, and I think one of the first SQL queries I ever learned.

SELECT PhoneNumber, COUNT(*) 
FROM Person 
GROUP BY PhoneNumber 
HAVING COUNT(*) > 1 

With this query we will have a list of all the phone numbers that appear in the table more than once.  But we don't have the FirstName or LastName.  There are a number of ways to get them.  Once I started using CTEs they became my preferred method, because it's so simple to put the query in parentheses and add the 'with' statement.  Here is the resulting query that will return all of the data requested.  You will see the original query is just wrapped into the CTE, with a new query added to get the additional information.

;WITH cte_Duplicates
     AS ( SELECT PhoneNumber
                 , Count(*) [Count]
          FROM   Person
          GROUP  BY PhoneNumber
          HAVING Count(*) > 1 )
SELECT xPer.FirstName
       , xPer.LastName
       , xPer.PhoneNumber
FROM   Person xPer
JOIN   cte_Duplicates xDupe ON xPer.PhoneNumber = xDupe.PhoneNumber

A few notes about the CTE and the syntax if you are not familiar with the SQL Server documentation on CTEs.  The semicolon before the 'with' is required if any other SQL statement runs before the CTE.  I found it's just easier to remember to put it in there no matter what.  The documentation on CTEs shows a column list after the name of the CTE, e.g. ;with cte_Duplicates (PhoneNumber, Count).  The column list is optional.  The requirement with column names is that they are not duplicated within the CTE and that every column has a name.  If you have duplicates, you can either alias a column within the SELECT statement or put in a column name list.  The same goes for columns with no names, like the COUNT in the query.  You will notice in the CTE I put in an alias of [Count] so that it has a name, and I put the name in brackets since it's a reserved word.  I could have left the count out of the CTE since the final query doesn't display it, but it's nice to have it in there if you want to highlight just the select statement within the CTE and run it to look at the original data.
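For reference, the column-list form mentioned above would look like this; it names the CTE's columns up front instead of aliasing them inside the SELECT:

;WITH cte_Duplicates (PhoneNumber, [Count])
     AS ( SELECT PhoneNumber
                 , Count(*)
          FROM   Person
          GROUP  BY PhoneNumber
          HAVING Count(*) > 1 )
SELECT PhoneNumber, [Count]
FROM   cte_Duplicates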

That list went to the person lucky enough to have to sort out which phone numbers were right and wrong.  They gave us back a list with what the phone numbers should be.  Assuming the list was worked in Excel, I would convert it to a CSV.  I had a macro written in Notepad++ that would wrap every line in parentheses and put a comma on the end.  Then I could take that data and put it into an insert for a table variable or a temp table, depending on size.  Once the data is staged, the person data can be fixed using the same CTE from above.  To do this we'll just wrap the second query into a second CTE and then write the update.  For this example assume there is code above the CTEs that declares a table variable named @Corrections and inserts the corrected data.
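A minimal sketch of that staging step might look like this; the column sizes and sample values are made up:

DECLARE @Corrections table (
    FirstName   varchar(50)
   ,PhoneNumber varchar(15)
)

INSERT @Corrections
VALUES ('Bob', '555-0101')
      ,('Lisa', '555-0102')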

;WITH cte_Duplicates
     AS ( SELECT PhoneNumber
                 , Count(*) [Count]
          FROM   Person
          GROUP  BY PhoneNumber
          HAVING Count(*) > 1 ),
     cte_List
     AS ( SELECT xPer.FirstName
                 , xPer.LastName
                 , xPer.PhoneNumber
          FROM   Person xPer
          JOIN   cte_Duplicates xDupe ON xPer.PhoneNumber = xDupe.PhoneNumber )
UPDATE xList
SET    PhoneNumber = xNew.PhoneNumber
FROM   cte_List xList
       JOIN @Corrections xNew ON xList.FirstName = xNew.FirstName

This statement will update the underlying table (Person) with the correct phone number from the @Corrections table variable.  The second select got wrapped into a CTE, which is as easy as adding a comma, a new name, and the parentheses.  Then I wrote the update against the second CTE.  (The first CTE cannot be updated because it aggregates data.)

There are some caveats to using CTEs with DML, as there are with everything in SQL Server.  You can only delete if there is one table in the select statement in the CTE, which limits the capabilities a little.  Hopefully you are updating data more than you are deleting data, because deleting data is never good.  You also cannot update multiple tables at one time: if your CTE has multiple tables in the select, you can only write updates back to columns of one of those tables.  For example, let's pretend that a group of orders was entered without accounting for a leap year, so the order dates need updating.  The first thing we do is write the select to get the data.  Instead of re-writing everything for the update, just wrap it into a CTE and then write the update.  It only updates data from the Sales.Orders table, so it will run successfully.

;WITH cte_Updates
	 AS ( SELECT xPer.FirstName + ' ' + xPer.LastName AS Name
				 , xOrd.OrderID
				 , xOrd.OrderDate
		  FROM   Person.Person xPer
		  JOIN   Sales.Orders xOrd ON xPer.PersonID = xOrd.PersonID
		  WHERE  xOrd.OrderDate >= '2/28/2012' )
UPDATE cte_Updates
SET    OrderDate = Dateadd(dd, 1, OrderDate)
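As an aside on the delete caveat from earlier: deleting through a CTE works the same way as long as the CTE selects from a single table.  A common pattern is removing duplicate rows while keeping one of each, sketched here against the same Person table (the tie-break on LastName is arbitrary):

;WITH cte_Dupes
     AS ( SELECT ROW_NUMBER() OVER (PARTITION BY PhoneNumber ORDER BY LastName) AS RowNum
          FROM   Person )
DELETE FROM cte_Dupes
WHERE  RowNum > 1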

These are really simple examples.  In the real world there would be a PersonID of some kind that could be used for a much simpler update in the first example.  (I would hope.)  The CTE method really shines when you have multiple queries to get to the data that needs to be updated or deleted.  The single most compelling reason for updating or deleting via a CTE, in my opinion, is this:

Updating or deleting data directly through the CTE gives you peace of mind, because you can run the selects to see exactly what data you are inflicting change upon.  There is no guesswork about what data you touch; if there is, just re-run the select statements in the CTE to view the data.

That is the key.  Instead of writing select statements to find the data and then trying to translate those select statements into an update, you can take the select statements, wrap them up in CTE(s), and then update the CTE.  SO SIMPLE!

By |January 17th, 2014|Uncategorized|