Uncategorized Archives - DawaBI


DawaBI Merges with Quanta Intelligence to Broaden and Deepen our Service Offering

We’ve been working on this for a while now, and I’m excited to announce the merger of Quanta Intelligence and DawaBI. This merger will deepen and broaden our service offering, providing more value to our customers through added knowledge and expertise. With Datalere, you are not just provided a resource; you get our whole team!

Going forward, we’ll be doing business as Datalere. The name combines Data, because everything we do involves data, with Alere, Latin for ‘to care for’ or ‘tend to’. The name says it all: we are data experts, and we care about your data.

Our teams have been involved in projects ranging from consulting engagements of a few hours to multi-year, high-profile projects for startups and Fortune 500 companies. The services we provide are Data Warehousing, Business Intelligence, Big Data, Software Development, Mobile Development, and Database Administration.

Customer data is growing at an alarming rate, and these days businesses are collecting information from so many sources that it is challenging to manage. The intelligence we can obtain from this data is endless: whether you are running reports or predicting trends, it all starts with how your data is cared for.

Many of our clients have the horsepower and expertise to keep Data Warehouse and Business Intelligence projects in house, but the data that has been accumulating over the years is unhealthy. If not modeled correctly, a DW/BI project can stall or never reach its full potential. Before starting your next project, Datalere can provide you with a free data health check.

A few of the many benefits our customers have received are:

  • Insights into data that enable data-driven decision making and reduce error-prone manual practices
  • Reporting efficiencies, such as cutting report run times from 25 minutes to 3 seconds, so business users can access critical data for timely decisions
  • Predictions of customer profitability (or the lack thereof) that allow the sales team to focus on higher-return prospects
  • Turnaround of stalled projects into usable solutions that meet business needs
  • Team mentoring that keeps customers’ staff skills up to date and progressing

To request a free data health check and initial consultation, visit our website or give us a call at (303) 223-0248. To stay up to date with the latest at Datalere, including tips and tricks, lunch and learns, and more, join our mailing list.

August 18th, 2016

WRITING QUERIES WITH MULTIPLE GROUPING SETS

Using a basic GROUP BY in a T-SQL query is very common when dealing with grouping, but there is one other option that often gets overlooked. In SQL Server, you can also use multiple grouping sets, grouping on different sets of columns in a single statement. The benefit is a more efficient, all-in-one query, versus separate individual GROUP BY queries put together via a UNION (a sketch of that longer form appears after the results below). Here is a basic example of using grouping sets on the AdventureWorks database.

SELECT
[CustomerID]
,TerritoryID
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
GROUP BY GROUPING SETS
(
(CustomerID),
(TerritoryID),
()
)
ORDER BY TotalSpent desc;

In this query, we are doing three types of grouping: on CustomerID, on TerritoryID, and on the empty grouping set. The results of the query:

[Image: GroupingSets1]

Each group is distinguishable by NULL values, used as placeholders in the columns that are not part of the set.

 1) (CustomerID) gives the total spent per customer, with a NULL in the TerritoryID column. Results are found in rows 12 and up.

 2) (TerritoryID) gives the total spent in each territory, with a NULL in the CustomerID column. Results are found in rows 2–11.

 3) The empty grouping set gives a grand total across all rows, with NULLs in both the TerritoryID and CustomerID columns. The result is found in row 1.
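
For comparison, here is a sketch of the same result written the long way, without grouping sets. Each grouping becomes its own GROUP BY query stitched together with UNION ALL, which typically means scanning the table once per branch:

SELECT CustomerID, NULL AS TerritoryID, SUM(SubTotal) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
GROUP BY CustomerID
UNION ALL
SELECT NULL, TerritoryID, SUM(SubTotal)
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
GROUP BY TerritoryID
UNION ALL
SELECT NULL, NULL, SUM(SubTotal)
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
ORDER BY TotalSpent DESC;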

Along with explicit grouping sets, there are also two shortcut operators, CUBE and ROLLUP, that can be applied instead of writing out every combination of sets.

CUBE

CUBE is a way to define all possible grouping combinations of the listed columns, down to the empty set. A query written without the shortcut:

SELECT
[CustomerID]
,TerritoryID
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
GROUP BY GROUPING SETS
(
(CustomerID,TerritoryID),
(CustomerID),
(TerritoryID),
()
)
ORDER BY TotalSpent desc;

Can be shortened to:

SELECT
[CustomerID]
,TerritoryID
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader]
GROUP BY CUBE (CustomerID,TerritoryID)
ORDER BY TotalSpent desc;

ROLLUP

ROLLUP is a shortcut used with hierarchical columns. For this example, the hierarchy of the columns, from broad to narrow, is (Group, CountryRegionCode, Name, TerritoryID). The long version of the query:

SELECT
st.[Group]
,st.CountryRegionCode
,st.[Name]
,st.TerritoryID
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader] soh
INNER JOIN [AdventureWorks2012].[Sales].[SalesTerritory] st on soh.TerritoryID = st.TerritoryID
GROUP BY GROUPING SETS
(
(st.[Group],st.CountryRegionCode,st.[Name],st.TerritoryID),
(st.[Group],st.CountryRegionCode,st.[Name]),
(st.[Group],st.CountryRegionCode),
(st.[Group]),
()
)
ORDER BY TotalSpent desc;

Can be replaced with:

SELECT
st.[Group]
,st.CountryRegionCode
,st.[Name]
,st.TerritoryID
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader] soh
INNER JOIN [AdventureWorks2012].[Sales].[SalesTerritory] st on soh.TerritoryID = st.TerritoryID
GROUP BY ROLLUP (st.[Group],st.CountryRegionCode,st.[Name],st.TerritoryID)
ORDER BY TotalSpent desc;

The table results:

[Image: GroupingSets2]

COMBINING ROLLUP & CUBE

ROLLUP and CUBE can also be combined in a single GROUP BY clause:

SELECT
st.[Group]
,st.CountryRegionCode
,st.[Name]
,st.TerritoryID
,[CustomerID]
,SUM([SubTotal]) AS TotalSpent
FROM [AdventureWorks2012].[Sales].[SalesOrderHeader] soh
INNER JOIN [AdventureWorks2012].[Sales].[SalesTerritory] st on soh.TerritoryID = st.TerritoryID
GROUP BY ROLLUP (st.[Group],st.CountryRegionCode,st.[Name],st.TerritoryID), CUBE([CustomerID],st.TerritoryID)
ORDER BY TotalSpent desc;

The total number of resulting grouping sets ends up being the multiplication of the set counts from each operator.

CUBE (CustomerID,TerritoryID) will give 4 sets
(
CustomerID + TerritoryID,
CustomerID,
TerritoryID,
{}
)

ROLLUP (st.[Group],st.CountryRegionCode,st.[Name],st.TerritoryID) will give 5 sets
(
Group + Region + Name + TerritoryID,
Group + Region + Name,
Group + Region,
Group,
{}
)

The resulting total will therefore be 4 × 5 = 20 grouping sets.
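
As a smaller worked example of this multiplication, here is a sketch using a hypothetical table T with columns a, b, and c (not from AdventureWorks):

SELECT a, b, c, COUNT(*) AS Cnt
FROM T
GROUP BY ROLLUP (a, b), CUBE (c);

-- ROLLUP (a, b) produces 3 sets: (a, b), (a), ()
-- CUBE (c) produces 2 sets: (c), ()
-- SQL Server combines them as a cross product, 3 x 2 = 6 sets:
-- GROUP BY GROUPING SETS ((a, b, c), (a, b), (a, c), (a), (c), ())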

DEALING WITH NULL

Using NULL values to distinguish the sets works as long as there are no NULL values coming through in the data itself. Consider a query where SalesPersonID and TerritoryID can be NULL:

SELECT
SUM(SubTotal) as CalcSubTotal,
SalesPersonID,
TerritoryID
FROM
[Sales].[SalesOrderHeader] soh
GROUP BY CUBE (SalesPersonID, TerritoryID)
ORDER BY CalcSubTotal desc;

[Image: GroupingSets3]

The results can end up being confusing: is row 1 or row 2 the empty-set total? To get around this problem, there are two functions we can use: GROUPING and GROUPING_ID.

GROUPING

SELECT
SUM(SubTotal) as CalcSubTotal,
SalesPersonID,
GROUPING(SalesPersonID) AS SalesPersonGrouping,
TerritoryID,
GROUPING(TerritoryID) AS TerritoryGrouping
FROM
[Sales].[SalesOrderHeader] soh
GROUP BY CUBE (SalesPersonID, TerritoryID)
ORDER BY SalesPersonGrouping desc, TerritoryGrouping desc, CalcSubTotal desc;

The results from the above query read a little backwards: 0 represents the column(s) being grouped on, and 1 represents the column(s) not used in the grouping. In the table below, row 1 represents the empty set, and row 2 represents the total for orders with no (or an unknown) salesperson.

[Image: GroupingSets4]

GROUPING_ID

SELECT
SUM(SubTotal) as CalcSubTotal,
SalesPersonID,
TerritoryID,
GROUPING_ID(SalesPersonID, TerritoryID) as GrpingID
FROM
[Sales].[SalesOrderHeader] soh
GROUP BY CUBE (SalesPersonID, TerritoryID)
ORDER BY CalcSubTotal desc;

The GROUPING_ID function accepts a list of grouping columns. The integer it returns is actually the conversion of a binary value made up of the GROUPING flags for each column (e.g. 00 = 0, 01 = 1, 10 = 2, 11 = 3). Just like the GROUPING function, the ones and zeros feel reversed: 0 represents a column used in the grouping, and 1 a column that was not used.

[Image: GroupingSets5]
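
As a sketch of how this can be put to use, GROUPING_ID can label each row of the CUBE output for a report; the labels below are my own, not from the original results:

SELECT
SUM(SubTotal) as CalcSubTotal,
SalesPersonID,
TerritoryID,
CASE GROUPING_ID(SalesPersonID, TerritoryID)
	WHEN 0 THEN 'Salesperson/territory detail'
	WHEN 1 THEN 'Salesperson total' -- TerritoryID not in the set
	WHEN 2 THEN 'Territory total'   -- SalesPersonID not in the set
	WHEN 3 THEN 'Grand total'       -- empty grouping set
END AS SetLabel
FROM
[Sales].[SalesOrderHeader] soh
GROUP BY CUBE (SalesPersonID, TerritoryID)
ORDER BY GROUPING_ID(SalesPersonID, TerritoryID) DESC, CalcSubTotal DESC;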

PASS SQLSaturday Events: July 2014

July 12 – SQLSaturday #312 – Sacramento 2014
July 26 – SQLSaturday #302 – Albany 2014
July 26 – SQLSaturday #322 – Guatemala 2014

A Quick Intro into SQLCLR User Defined Functions

Initial Setup

For a quick-start guide to SQLCLR, the easiest first step will most likely be creating a user-defined function.

To get started, I’m using the following installations:

  • SQL Server Data Tools for Visual Studio 2012 (http://msdn.microsoft.com/en-us/jj650015)
  • Microsoft SQL Server Data Tools Business Intelligence (http://www.microsoft.com/en-us/download/details.aspx?id=36843)
  • SQL Server 2012
  • A database to deploy to; I’ll be using AdventureWorks2012 (http://msftdbprodsamples.codeplex.com/releases/view/93587)

Creating the Project

To get started, open up SQL Server Data Tools for Visual Studio 2012. Select New Project, go to the Other Languages section in the templates, and name the project (SQLCLRDemo):

[Image: CLRIntro1]

Once in the project, right-click the project in the Solution Explorer and select Add, then New Item:

[Image: CLRIntro2]

There are 5 basic types of SQL CLR objects that can be developed in C# for SQL Server:

  • Aggregate
  • Stored Procedure
  • Trigger
  • User Defined Function
  • User Defined Type

Creating a User Defined Function

Let’s start with the easiest and most used SQLCLR type, the user-defined function. For this example I’ll create a simple function called IsLeapYear, which accepts a year as an integer value and returns a Boolean:

[Image: CLRIntro3]

I’m not going to do any special coding for this example, since C#’s DateTime library has a nice slew of usable functions. Here is the user defined function template with the added code:

//------------------------------------------------------------------
// <copyright file="CSSQLFunction.cs" company="Microsoft">
//    Copyright (c) Microsoft Corporation.  All rights reserved.
// </copyright>
//------------------------------------------------------------------
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction]
    public static SqlBoolean IsLeapYear(SqlInt32 year)
    {
        // Delegate to .NET's built-in check; cast the SqlInt32 to a native int first
        return DateTime.IsLeapYear((int)year);
    }
}

As you may have noticed, the data types used in SQL Server and C# are different; the SQL Server types are given the prefix Sql (SqlInt32, SqlBoolean). When using built-in C# libraries, you will need to cast to a native C# type. The function’s return type needs to be a SQL data type (i.e., SqlBoolean).

Build the project to verify that the code is correct. Before deploying, you may want to check the project’s properties (PROJECT > SQLCLRDemo Properties…). Here you will be able to set the target SQL Server platform, default schema, database settings, permission level, etc.

[Image: CLRIntro4]

Deploying to SQL Server

To publish, right-click the project and select Publish:

[Image: CLRIntro5]

Select the target database and then click Publish. Verify in the Data Tools Operations pane that everything goes over smoothly:

[Image: CLRIntro6]

You should then see the new scalar-valued function in the database (you may have to refresh the list):

[Image: CLRIntro7]
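
If you would rather verify from T-SQL than refresh Object Explorer, here is a quick sketch; FS is the type code SQL Server uses for CLR scalar functions:

USE [AdventureWorks2012]
GO

SELECT name, type_desc, create_date
FROM sys.objects
WHERE type = 'FS'; -- CLR_SCALAR_FUNCTION
GO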

Testing the SQLCLR function

So far so good, let’s test the function:

USE [AdventureWorks2012]
GO

;WITH cteYears AS
(
	SELECT 2013 as iYear
	UNION ALL
	SELECT 2012 as iYear
	UNION ALL
	SELECT 2011 as iYear
	UNION ALL
	SELECT 2010 as iYear
)

SELECT 
iYear,
dbo.IsLeapYear(iYear)
FROM cteYears
GO

Uh, oh. We get an error:

[Image: CLRIntro8]

Simple enough: we just need to enable CLR integration (http://technet.microsoft.com/en-us/library/ms131048.aspx) by running this script against the target instance:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO

Re-run the test script, and we finally get our results:

[Image: CLRIntro9]

Conclusion

Creating SQLCLR functions can make SQL development easier and, for some workloads, can provide a faster technique than regular SQL Server functions.

The other SQLCLR types can get more complicated, with higher security needs, potentially unsafe code, serialization, and importing DLLs into SQL Server, as some examples. I will try to do more in-depth SQLCLR tutorials in the future that run into some of these situations.

 

January 21st, 2014

Using DML against a Common Table Expression (CTE)

In my previous life I did support, in both a direct customer-support role and, most recently, a backline-type support role.  In that backline role I did a lot of data manipulation (updates and deletes) trying to get the customer’s data corrected.  Something was always wrong, through their doing or ours.  (Mostly theirs, because we never made mistakes? ;) )  One of the greatest tools I found was the Common Table Expression.  I love CTEs!  They were great for hunting down subsets of data, especially when I had to aggregate data somehow.  Once I had the data I needed, I could write updates or deletes directly against that data.

For example, let’s pretend that person data was imported into a table.  Someone realized after the import was complete that they had imported the same phone numbers for multiple people, so those phone numbers need to be updated.  They don’t know which phone numbers are duplicated for which people, so they need a list of all the people in the table with the same phone number.  In this list they need FirstName, LastName and PhoneNumber.  (To keep it simple, everyone has one phone number; pretend it’s 1980.)  Because adding the first and last name to the query would create a unique value (since you have to group by everything in the select list), only the phone number can be used at first.  This is a simple query, and I think one of the first SQL queries I ever learned.

SELECT PhoneNumber, COUNT(*) 
FROM Person 
GROUP BY PhoneNumber 
HAVING COUNT(*) > 1

With this query we will have a list of all the phone numbers that appear in the table more than once, but we don’t have the FirstName or LastName.  There are a number of ways to get them.  Once I started using CTEs they became my preferred method, because it’s so simple to put the query in parentheses and add the WITH statement.  Here is the resulting query that will return all of the data requested.  You will see the original query is just wrapped into the CTE, with a new query added to get the additional information.

;WITH cte_Duplicates
	 AS ( SELECT PhoneNumber
				 , Count(*) [Count]
		  FROM   Person
		  GROUP  BY PhoneNumber
		  HAVING Count(*) > 1 )
SELECT xPer.FirstName
	   , xPer.LastName
	   , xPer.PhoneNumber
FROM   Person xPer
JOIN   cte_Duplicates xDupe ON xPer.PhoneNumber = xDupe.PhoneNumber

A few notes about the CTE and its syntax, if you are not familiar with the SQL Server documentation on CTEs.  The semicolon before the WITH is required if any other SQL statement runs before the CTE; I found it’s just easier to remember to put it in there no matter what.  The documentation on CTEs shows a column list after the name of the CTE, e.g. ;WITH cte_Duplicates (PhoneNumber, Count).  The column list is optional.  The requirements for column names are that they are not duplicated within the CTE and that every column has a name.  If you have duplicates, you can either alias a column within the SELECT statement or put in a column name list; the same goes for columns with no name, like the COUNT in the query.  You will notice that in the CTE I put in an alias of [Count] so that it has a name, and I put the name in brackets since it’s a reserved word.  I could have left the count out of the CTE since the final query doesn’t display it, but it’s nice to have it in there if you want to highlight just the SELECT statement within the CTE and run it to look at the original data.  (A sketch of the column-list form follows.)
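
To illustrate the optional column list, here is a sketch of the same CTE with the column names declared up front instead of aliased inside the SELECT:

;WITH cte_Duplicates (PhoneNumber, [Count])
	 AS ( SELECT PhoneNumber
				 , Count(*)
		  FROM   Person
		  GROUP  BY PhoneNumber
		  HAVING Count(*) > 1 )
SELECT PhoneNumber, [Count]
FROM   cte_Duplicates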

That list went to the person lucky enough to have to sort out which phone numbers were right and wrong.  They gave us back the list with what the phone numbers should be.  Assuming the list was worked in Excel, I would take it and convert it to a CSV.  I had a macro written in Notepad++ that would wrap every line in parentheses and put a comma on the end.  Then I could take that data and put it into an INSERT for a table variable or a temp table, depending on size.  Once the data is staged, the person data can be fixed using the same CTE from above.  To do this we’ll just wrap the second query into a second CTE and then write the update.  For this example, assume there is code above the CTEs that declares a table variable @Corrections and inserts the corrected data, along the lines of the sketch below.
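
A minimal sketch of that staging step might look like this (the column types and values are hypothetical):

DECLARE @Corrections TABLE
	( FirstName   VARCHAR(50)
	, LastName    VARCHAR(50)
	, PhoneNumber VARCHAR(25) );

INSERT INTO @Corrections (FirstName, LastName, PhoneNumber)
VALUES ('Jane', 'Doe', '555-0101')  -- rows pasted from the corrected CSV
	 , ('John', 'Smith', '555-0102');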

;WITH cte_Duplicates
	 AS ( SELECT PhoneNumber
				 , Count(*) [Count]
		  FROM   Person
		  GROUP  BY PhoneNumber
		  HAVING Count(*) > 1 ),
	 cte_List
	 AS ( SELECT xPer.FirstName
				 , xPer.LastName
				 , xPer.PhoneNumber
		  FROM   Person xPer
		  JOIN   cte_Duplicates xDupe ON xPer.PhoneNumber = xDupe.PhoneNumber )
UPDATE xList
SET    PhoneNumber = xNew.PhoneNumber
FROM   cte_List xList
	   JOIN @Corrections xNew ON xList.FirstName = xNew.FirstName

This statement will update the underlying table (Person) with the correct phone number from the @Corrections table variable.  The second select got wrapped into a CTE, which is as easy as putting in a comma, adding a new name, and putting in the parentheses.  Then I wrote the update against the second CTE.  (The first CTE cannot be updated because it aggregates data.)

There are some caveats to using CTEs with DML, as there are with everything in SQL Server.  You can only delete if there is one table in the SELECT statement in the CTE, which limits the capabilities a little (a delete sketch follows the update example below).  Hopefully you are updating data more than you are deleting it, because deleting data is never good.  You also cannot update multiple tables at one time: if your CTE has multiple tables in the SELECT, you can only write updates back to columns of one of the tables.  For example, let’s pretend that a group of orders was entered without accounting for a leap year, so the order dates need updating.  The first thing we do is write the SELECT to get the data.  Instead of re-writing everything for the update, just wrap it into a CTE and then write the update.  It only updates data from the Sales.Orders table, so it will run successfully.

;WITH cte_Updates
	 AS ( SELECT xPer.FirstName + ' ' + xPer.LastName AS Name
				 , xOrd.OrderID
				 , xOrd.OrderDate
		  FROM   Person.Person xPer
		  JOIN   Sales.Orders xOrd ON xPer.PersonID = xOrd.PersonID
		  WHERE  xOrd.OrderDate >= '2/28/2012' )
UPDATE cte_Updates
SET    OrderDate = Dateadd(dd, 1, OrderDate)
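
On the delete side, here is a minimal sketch of removing duplicate people through a CTE, assuming Person has a PersonID key.  The CTE’s SELECT touches only one table, so the delete is allowed:

;WITH cte_Dupes
	 AS ( SELECT PersonID
				 , Row_number() OVER (PARTITION BY PhoneNumber ORDER BY PersonID) AS RowNum
		  FROM   Person )
DELETE FROM cte_Dupes
WHERE  RowNum > 1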

These are really simple examples.  In the real world there would be a PersonID of some kind that could be used for a much simpler update in the first example.  (I would hope.)  The CTE method really shines when you have multiple queries to get to the data that needs to be updated or deleted.  The single most compelling reason for updating/deleting via a CTE, in my opinion, is this:

Updating/deleting data directly through the CTE gives you peace of mind, because you can run the SELECTs to see exactly what data you are inflicting change upon.  There is no guesswork about what data you will touch; if there is any doubt, just re-run the SELECT statements in the CTE to view the data.

That is the key.  Instead of writing SELECT statements to find the data and then trying to translate those SELECT statements into an update, you can take the SELECT statements, wrap them up in CTE(s), and then update the CTE.  SO SIMPLE!

January 17th, 2014

PASS Virtual Chapter Events : December 2013

Join SQL Server professionals from around the world for free online technical training and networking year-round. PASS Virtual Chapters (VC) are interest-based forums that unite the SQL Server community through live webcasts with top SQL Server experts, online forums and resources (including an archive of past presentations), and special training events.

Application Development – Virtual Chapters

Date: Friday, December 6, 2013

Time: 2:00 PM – 3:00 PM (Mountain Time)

Cost: Free

Language: English

Featured Presentation: Isn’t That Spatial: Spatial Data for the App Dev

Register

Virtualization – Virtual Chapters

Date: Wednesday, December 11, 2013

Time: 12:00 pm – 1:00 pm (Mountain Time)

Cost: Free

Language: English

Featured Presentation: Virtualization Question & Answers with Brent Ozar

The Virtualization Virtual Chapter (aka SIG) is dedicated to better management of SQL Servers in virtual environments like Microsoft Hyper-V and VMware vSphere. We:

  • Help the community by producing monthly training webcasts, a centralized list of helpful virtualization-savvy administrators, and a periodic report of what works well under virtualization and what isn’t quite working yet.
  • Help virtualization vendors by getting training and configuration information out to the people who need it, and by helping the vendors build a better product with our feedback.
  • Help PASS management by building assets that PASS can leverage (like a library of training content) and by showing that PASS solves real DBA problems.

Speaker Info:

Brent Ozar is a Microsoft Certified Master of SQL Server 2008 and a Microsoft SQL Server MVP. He has over a decade of experience with SQL Server, systems administration, SAN storage, virtualization, and project management. Today, he’s a consultant for Brent Ozar Unlimited. Previously, Brent was a SQL Server expert at Quest Software and a DBA at Southern Wine & Spirits, a Miami-based wine and spirits distributor. Brent blogs at http://www.BrentOzar.com like it’s going out of style.

Event Details >> Attend

Database Administration – Virtual Chapters

Date: Wednesday, December 11, 2013

Time: 2:00 pm – 3:00 pm (Mountain Time)

Cost: Free

Language: English

Featured Presentation: Load Testing with SQL Server Tools sponsored by Dell Software

It’s often difficult to know how your SQL Servers will perform under different loads. By performing load testing, we can gain these key insights, make modifications to existing configurations, and understand the impact on performance levels. Come learn about the native tools at our disposal for performing these important load tests and how we can identify when performance levels begin to drop. Using demos of these native tools, including the Distributed Replay Utility (DRU), Database Engine Tuning Advisor (DTA), Perfmon, Extended Events, and Profiler, we’ll see how to plan and perform a load-test project, gain an understanding of SQL Server’s performance under varying load scenarios, and discover which tell-tale indicators can help alert us to performance degradation.

Speaker Info:

Neil Hambly is a 14-year SQL Server veteran, with expertise in SQL Server from version 6.5 to the latest 2012 edition. He’s held database roles at major organizations, including the BBC, ABN AMRO, and Accenture, as well as at a number of market-leading companies. Currently CTO/Founder at SQLnextSteps, he’s worked as a DBA, developer, and architect. Leader of the PASS London Chapter, Neil can frequently be found speaking at UK and international events.

Attend

Big Data Virtual Chapter – Virtual Chapters

Date: Tuesday, December 17, 2013

Time: 4:00 pm – 5:00 pm (Mountain Time)

Cost: Free

Language: English

Featured Presentation: Building a Big Data Architecture

Big data solutions call for different architectural approaches than our traditional workloads. Here we examine several ideas related to the architecture: cloud versus on-premises, picking toolsets and distributions, and how to plan for growth and change in the stack.

Speaker Info:

Joey D’Antoni is an Architect with over a decade of experience working in both Fortune 500 and smaller firms. He is currently solutions architect for SQL Server and Big Data for Anexinet in Blue Bell, PA. He holds a BS in Computer Information Systems from Louisiana Tech University and an MBA from North Carolina State University. He is a frequent speaker at events such as PASS Summit and TechEd.

Register

December 4th, 2013

PASS SQL Saturday Events : December 2013

PASS SQL Saturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Please note that in order to attend one of these events, you must register (registration links are provided for each event). Please register soon, as seating is limited, and let friends and colleagues know about the event. Most events are free; however, a nominal fee may be charged so that lunch may be provided (not pizza). Note that all event times are relative to the venue location.

Kharkov, Ukraine | #259

Date: Saturday, December 7th, 2013

Time: 9:00 am – 6:30 pm

Cost: Free

Event Details >> Register >> Schedule

Lima, Perú | #266

Date: Saturday, December 7th, 2013

Time: 9:00 am – 5:00 pm

Cost: Free

Venue:
Microsoft Perú
Av. Victor Andres Belaunde #147
Piso 3, San Isidro, Lima, Lima 27, Peru

Event Details >> Register >> Schedule

Washington DC | #233

Date: Saturday, December 7th, 2013

Time: 8:00 am – 5:00 pm

Cost: Free + $10 lunch

Venue:
Microsoft MTC
5404 Wisconsin Ave, Chevy Chase, MD, 20815

Event Details >> Register >> Schedule

Ancona, Italy | #264

Date: Saturday, December 13th, 2013

Time: 9:00 am – 6:00 pm

Cost: Free

Venue:
Università Politecnica delle Marche
Via Pietro Valenti 65, Alt. entrance: Via Brecce Bianche 1,
Ancona, 60020, Italy

Event Details >> Register >> Schedule

Slovenia | #274

Date: Saturday, December 21st, 2013

Time: 9:00 am – 4:30 pm

Cost: Free

Venue:
Pixi Labs
Poslovna cona A 2, Sencur – Ljubljana
SI-4208, Slovenia

Event Details >> Register >> Schedule

December 4th, 2013

Colorado SQL Events : December 2013

Colorado Springs SQL

Date: Wednesday, December 18th

Time: 5:30 – ?

Main Presentation: Holiday Party – Family Welcome!

Meeting Location
Old Chicago (Austin Bluffs and Academy)
4110 North Academy Boulevard
Colorado Springs, CO


BoulderSQL

Date: Tuesday, December 10th

Main Presentation: Holiday Networking & Social

Time: 5:30 pm – 8:00 pm

Appetizers and the first round of drinks will be provided! This will be a great opportunity to hobnob with other SQL-savvy professionals from the area. Hope you can make it!

Meeting Location
Dave & Buster’s
10667 Westminster Blvd
Westminster, CO 80020
http://www.daveandbusters.com
phone: 303-438-1500


December 4th, 2013

Colorado SQL Events : November 2013

All ColoradoSQL user group meetings start at 5:30 p.m. and provide food and refreshments. There is no cost to attend, so bring a co-worker… or two. These events provide great learning opportunities, as well as networking with other local SQL Server professionals.

Northern Colorado SQL Server User Group Meeting

Date: Monday, November 18th

Time: 5:30 – 8:30 pm

Main Presentation: TBA

Presenter: TBA

Meeting Location
UNC Loveland Center at Centerra
2915 Rocky Mountain Ave.
Loveland, CO 80538
Breckenridge Conference Room 2nd Floor


Colorado Springs SQL

Date: Wednesday, November 20th

Time: 5:30 – 8:00 pm

Main Presentation: TBA

Presenter: TBA

Meeting Location
Hyatt Place – Colorado Springs
503 Garden of the Gods Rd W,
Colorado Springs, CO


BoulderSQL

Date: Tuesday, November 19th

Main Presentation: Ready, Set, Build a Cube

Are you ready to do something different? Are you stuck in a rut? Let’s get out of that rut and find out what else we can do with the data we have in front of us. Let’s take the data and help others make better decisions. We have all had to learn something new. Let’s see what it takes to get data from an OLTP database to a cube.

Presenter: Tom Norman

In 1998, Tom changed his career focus to begin working with SQL Server. He has worked in all aspects of SQL Server, including administration, database development, BI, and Reporting Services. He has worked in the finance and car dealership industries, and his experience includes international deployments. Tom is the co-leader of the PASS Virtualization chapter and the past president of the Denver SQL Server User Group.

Meeting Agenda
5:30 – 6:00  Food / Socializing
6:00 – 6:15  User Group Business
6:15 – 6:45  First Presentation
6:45 – 8:00  Main Presentation
8:00 – 8:30  Questions/Answers, general SQL discussions

Meeting Location
Confio Software in Boulder
4772 Walnut Street
Suite 100
Boulder, CO 80301


DenverSQL
The Denver SQL Server User Group is organized to bring professionals of all career levels who work with Microsoft SQL Server together to learn and network. There is no charge to attend, and you are encouraged to bring guests. Food and refreshments are provided by event sponsors.

Date: Thursday, November 21st

Time: 5:30 – 8:00 pm

Main Presentation: Write Once, Build Many – Metadata Driven SSIS with Biml

Biml is gaining traction as a viable method for programming SSIS. In this demo-rich presentation you will be introduced to Biml and see the basics of using Biml to develop SSIS packages. We will also dive into examples of how you can leverage the power of Biml to create many packages from a single Biml script, using a metadata approach to your development.
Presenter: Andy Vold

Andy Vold, a SQL Server MCTS and MCITP, is an Architect with RevGen Partners working primarily in business intelligence, SSIS/ETL, and database development. He enjoys helping clients by building database and business intelligence solutions to solve their complex business needs. Andy is a past board member of the Denver SQL Server Users Group.

Meeting Location
Microsoft
7595 Technology Way, Suite 400
Denver, CO 80237


November 18th, 2013

PASS SQL Saturday Events : November 2013

PASS SQL Saturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Please note that in order to attend one of these events, you must register (registration links are provided for each event). Please register soon, as seating is limited, and let friends and colleagues know about the event. Most events are free; however, a nominal fee may be charged so that lunch may be provided (not pizza). Note that all event times are relative to the venue location.

Verona, Italy | #257

Date: Saturday, November 9th, 2013

Time: 9:00 am – 6:00 pm

Cost: Free

Venue:
Strada le Grazie 15
Verona, 37134
Italy

Event Details >> Register >> Schedule

Manila | #263

Date: Saturday, November 9th, 2013

Time: 8:30 am – 5:00 pm

Cost: Free

Venue:
Microsoft Philippines
8th Floor 6750 Ayala Tower
Makati City, 1200, Philippines

Event Details >> Register >> Schedule

Tampa BI | #248

Date: Saturday, November 9th, 2013

Time: 7:00 am – 5:30 pm

Cost: Free

Venue:
University of South Florida (USF),
College of Business Bldg.
4202 E Fowler Ave.
Tampa, FL, 33620

Event Details >> Register >> Schedule

Oregon | #265

Date: Saturday, November 16th, 2013

Time: 8:30 am – 6:30 pm (Local Time)

Cost: Free

Venue:
Mittleman Community Center
6651 SW Capitol Highway
Portland, OR, 97219

Event Details >> Register >> Schedule

Moscow | #261

Date: Saturday, November 30th, 2013

Time: 9:00 am – 6:30 pm

Cost: Free

Venue:
Krylatskaya st. 17, building 1
Moscow, Russian Federation

Event Details >> Register >> Schedule

November 2nd, 2013