Data science in SQL Server: Data analysis and transformation – grouping and aggregating data II

September 28, 2018 by Dejan Sarka

You might find the T-SQL GROUPING SETS I described in my previous data science article a bit complex. However, I am not done with them yet; I will show additional possibilities in this article. But before you give up on reading, let me tell you that I will also show a way to make the R code simpler with the help of the dplyr package. Finally, I will show some more advanced aggregation techniques with the Python pandas data frame.

T-SQL Grouping Sets Subclauses

Let me start immediately with the first GROUPING SETS query. The following query calculates the sum of the income over countries and states, over whole countries, and finally over the whole rowset. The aggregates over whole countries are sums of the aggregates over countries and states; the aggregate over the whole rowset is a sum of the aggregates over countries. SQL Server can calculate all of the aggregations needed with a single scan through the data.

USE AdventureWorksDW2016;
SELECT g.EnglishCountryRegionName AS Country,
 g.StateProvinceName AS StateProvince,
 SUM(c.YearlyIncome) AS SumIncome
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey
GROUP BY GROUPING SETS
(
 (g.EnglishCountryRegionName, g.StateProvinceName),
 (g.EnglishCountryRegionName),
 ()
);

The previous query can be shortened with the ROLLUP clause. Look at the following query.

SELECT g.EnglishCountryRegionName AS Country,
 g.StateProvinceName AS StateProvince,
 SUM(c.YearlyIncome) AS SumIncome
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey
GROUP BY ROLLUP
 (g.EnglishCountryRegionName, g.StateProvinceName);

The ROLLUP clause rolls the subtotals up into the subtotals on the higher levels and finally into the grand total. It creates hyper-aggregates on the columns in the clause from right to left, in each pass decreasing the number of columns over which the aggregations are calculated. The ROLLUP clause calculates only those aggregates that can be calculated within a single pass through the data.

There is another shortcut command for the GROUPING SETS – the CUBE command. This command creates groups for all possible combinations of columns. Look at the following query.

SELECT c.Gender,
 c.MaritalStatus,
 SUM(c.YearlyIncome) AS SumIncome
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey
GROUP BY CUBE
 (c.Gender, c.MaritalStatus);

In the result, you can see the aggregates over Gender and MaritalStatus, the hyper-aggregates over Gender only and over MaritalStatus only, and the hyper-aggregate over the complete input rowset.

I can write the previous query in another way. Note that the GROUPING SETS clause says "sets" in plural. So far, I defined only one set in the clause. Now take a look at the following query.

SELECT c.Gender,
 c.MaritalStatus,
 SUM(c.YearlyIncome) AS SumIncome
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey
GROUP BY GROUPING SETS
(
 (c.Gender),
 ()
),
ROLLUP
 (c.MaritalStatus);

The query has two sets defined: grouping over Gender and over the whole rowset, and then, in the ROLLUP clause, grouping over MaritalStatus and over the whole rowset. The actual grouping is over the Cartesian product of all sets in the GROUPING SETS clause. Therefore, the previous query calculates the aggregates over Gender and MaritalStatus, the hyper-aggregates over Gender only and over MaritalStatus only, and the hyper-aggregate over the complete input rowset. If you add more columns to each set, the number of grouping combinations rises very quickly, and it becomes very hard to decipher what the query actually does. Therefore, I would recommend that you use this advanced way of defining the GROUPING SETS clause very carefully.

There is another problem with the hyper-aggregates: in the rows with the hyper-aggregates, some columns show NULL.
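By the way, if you ever need similar logic outside of T-SQL, you can emulate the GROUPING SETS behavior by unioning separate aggregations. Here is a minimal Python pandas sketch of the idea; the miniature data frame is made up just for illustration, and pandas, unlike SQL Server here, makes a separate pass through the data for each grouping set.

import pandas as pd

# A made-up miniature of the customer/geography join
df = pd.DataFrame({
    'Country': ['Australia', 'Australia', 'Canada', 'Canada'],
    'StateProvince': ['Queensland', 'Victoria', 'Ontario', 'Ontario'],
    'Income': [90000.0, 60000.0, 70000.0, 80000.0]})

# One aggregation per grouping set, then a union of the three results --
# the logic that GROUPING SETS expresses declaratively in one GROUP BY
by_state = df.groupby(['Country', 'StateProvince'],
                      as_index=False)['Income'].sum()
by_country = df.groupby('Country', as_index=False)['Income'].sum()
grand_total = pd.DataFrame({'Income': [df['Income'].sum()]})
print(pd.concat([by_state, by_country, grand_total], ignore_index=True))

In the unioned result, NaN plays the role that NULL plays in the hyper-aggregate rows of the T-SQL result.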
This is correct, because when you calculate the hyper-aggregate over the MaritalStatus column in the previous query, the value of the Gender column is unknown, and vice versa. For the aggregate over the whole rowset, the values of both columns are unknown. However, there could be another reason to get NULLs in those two columns: there might already be NULLs in the source data. You therefore need a way to distinguish the NULLs that come from aggregating over the NULLs in the source data in a single group from the NULLs that appear in the result because of the hyper-aggregates. Here the GROUPING() and GROUPING_ID() functions become handy. Look at the following query.

SELECT GROUPING(c.Gender) AS GroupingG,
 GROUPING(c.MaritalStatus) AS GroupingM,
 GROUPING_ID(c.Gender, c.MaritalStatus) AS GroupingId,
 c.Gender,
 c.MaritalStatus,
 SUM(c.YearlyIncome) AS SumIncome
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey
GROUP BY CUBE
 (c.Gender, c.MaritalStatus);

The result now includes the grouping flags. The GROUPING() function accepts a single column as an argument and returns 1 if the NULL in the column is there because the row is a hyper-aggregate for which the column value is not applicable, and 0 otherwise. For example, in the third row of the output, you can see that this is the aggregate over MaritalStatus only, where Gender makes no sense, and GROUPING(Gender) returns 1. If you read my previous article, you probably already know this function; I introduced it there, together with the problem it solves.

The GROUPING_ID() function is another solution for the same problem. It accepts both columns as arguments and returns an integer bitmap for the hyper-aggregates of these two columns. Look at the last row in the output. The GROUPING() function returns the values 0 and 1 in the first two columns. Let's write them together as the bitmap 01. Now let's calculate the integer value of the bitmap: 1×2⁰ + 0×2¹ = 1. This means that the MaritalStatus NULL is there because this is a hyper-aggregate over Gender only. Now check the seventh row. There the bitmap is 11, and the calculation to an integer is 1×2⁰ + 1×2¹ = 3. So this is the hyper-aggregate where neither of the two input columns is applicable – the hyper-aggregate over the whole rowset.
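To make the bitmap arithmetic concrete, here is a tiny Python sketch of a hypothetical grouping_id() helper that packs individual GROUPING() flags into the integer, with the leftmost argument in the highest-order bit, the same way GROUPING_ID() does.

# Hypothetical helper mirroring how GROUPING_ID() packs GROUPING() flags
def grouping_id(*flags):
    gid = 0
    for flag in flags:        # the leftmost flag ends up in the highest bit
        gid = (gid << 1) | flag
    return gid

print(grouping_id(0, 1))  # Gender known, MaritalStatus rolled up -> 1
print(grouping_id(1, 1))  # both columns rolled up (grand total)  -> 3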

Introducing the dplyr Package

After the complex GROUPING SETS clause, I guess you will appreciate the simplicity of the following R code. Let me quickly read the data from SQL Server into R.

library(RODBC)
con <- odbcConnect("AWDW", uid = "RUser", pwd = "Pa$$w0rd")
TM <- as.data.frame(sqlQuery(con,
  "SELECT c.CustomerKey,
     g.EnglishCountryRegionName AS Country,
     g.StateProvinceName AS StateProvince,
     c.EnglishEducation AS Education,
     c.NumberCarsOwned AS TotalCars,
     c.MaritalStatus,
     c.TotalChildren,
     c.NumberChildrenAtHome,
     c.YearlyIncome AS Income
   FROM dbo.DimCustomer AS c
     INNER JOIN dbo.DimGeography AS g
       ON c.GeographyKey = g.GeographyKey;"),
  stringsAsFactors = TRUE)
close(con)

I am going to install the dplyr package. This is a very popular package for data manipulation in R. It brings a simple and concise syntax. Let me install and load it.

install.packages("dplyr")
library(dplyr)

The first function to introduce from the dplyr package is the glimpse() function. It returns a brief overview of the variables in the data frame. Here is the call of that function.

glimpse(TM)

Below is the narrowed result.

$ CustomerKey          11000, 11001, 11002, 11003, 11004, 11005,
$ Country              Australia, Australia, Australia, Australi
$ StateProvince        Queensland, Victoria, Tasmania, New South
$ Education            Bachelors, Bachelors, Bachelors, Bachelor
$ TotalCars            0, 1, 1, 1, 4, 1, 1, 2, 3, 1, 1, 4, 2, 3,
$ MaritalStatus        M, S, M, S, S, S, S, M, S, S, S, M, M, M,
$ TotalChildren        2, 3, 3, 0, 5, 0, 0, 3, 4, 0, 0, 4, 2, 2,
$ NumberChildrenAtHome 0, 3, 3, 0, 5, 0, 0, 3, 4, 0, 0, 4, 0, 0,
$ Income               9e+04, 6e+04, 6e+04, 7e+04, 8e+04, 7e+04,

The dplyr package brings functions that allow you to manipulate the data with a syntax that briefly resembles the T-SQL SELECT statement. The select() function allows you to define the projection on the dataset, selecting specific columns only. The following code uses the basic R head() function to show the first six rows. Then the second line uses the dplyr select() function to select only the columns from CustomerKey to TotalCars. The third line selects only the columns with the word "Children" in the name. The fourth line selects only the columns whose names start with the letter "T".

head(TM)
head(select(TM, CustomerKey:TotalCars))
head(select(TM, contains("Children")))
head(select(TM, starts_with("T")))

For the sake of brevity, I am showing the results of the last line only.

  TotalCars TotalChildren
1         0             2
2         1             3
3         1             3
4         1             0
5         4             5
6         1             0

The filter() function allows you to filter the data similarly to the T-SQL WHERE clause. Look at the following two examples.

# Filter
filter(TM, CustomerKey < 11005)
# With projection
select(filter(TM, CustomerKey < 11005), TotalCars, MaritalStatus)

Again, I am showing the results of the last command only.

  TotalCars MaritalStatus
1         0             M
2         1             S
3         1             M
4         1             S
5         4             S

The dplyr package also defines the very useful pipe operator, written as %>%. It allows you to chain commands: the output of one command is the input for the following function. The following code is equivalent to the previous one, except that it uses the pipe operator.

TM %>%
  filter(CustomerKey < 11005) %>%
  select(TotalCars, MaritalStatus)

The distinct() function works similarly to the T-SQL DISTINCT clause. The following code uses it.

TM %>%
  filter(CustomerKey < 11005) %>%
  select(TotalCars, MaritalStatus) %>%
  distinct

Here is the result.

  TotalCars MaritalStatus
1         0             M
2         1             S
3         1             M
4         4             S
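If you prefer Python, the same pipeline maps naturally onto pandas method chaining. Here is a minimal sketch; the miniature TM frame is made up from the first five rows shown above, just to keep the example self-contained.

import pandas as pd

# A made-up miniature of the TM data frame
TM = pd.DataFrame({
    'CustomerKey':   [11000, 11001, 11002, 11003, 11004],
    'TotalCars':     [0, 1, 1, 1, 4],
    'MaritalStatus': ['M', 'S', 'M', 'S', 'S']})

# A rough pandas counterpart of filter() %>% select() %>% distinct
print(TM[TM['CustomerKey'] < 11005]        # filter(CustomerKey < 11005)
        [['TotalCars', 'MaritalStatus']]   # select(TotalCars, MaritalStatus)
        .drop_duplicates())                # distinct

The output has the same four distinct rows as the dplyr result above.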
You can also use the dplyr package for sampling the rows. The sample_n() function allows you to select n random rows, with or without replacement, as the following code shows.

# Sampling with replacement
TM %>%
  filter(CustomerKey < 11005) %>%
  select(CustomerKey, TotalCars, MaritalStatus) %>%
  sample_n(3, replace = TRUE)
# Sampling without replacement
TM %>%
  filter(CustomerKey < 11005) %>%
  select(CustomerKey, TotalCars, MaritalStatus) %>%
  sample_n(3, replace = FALSE)

Here is the result.

    CustomerKey TotalCars MaritalStatus
3         11002         1             M
3.1       11002         1             M
1         11000         0             M

  CustomerKey TotalCars MaritalStatus
3       11002         1             M
1       11000         0             M
2       11001         1             S

Note that when sampling with replacement, the same row can appear in the sample multiple times. Also note that the sampling is random; the next time you execute this code, you will probably get different results.

The arrange() function allows you to reorder the data frame, similarly to the T-SQL ORDER BY clause. Again, for the sake of brevity, I am not showing the results of the following code.

head(arrange(select(TM, CustomerKey:StateProvince),
  desc(Country), StateProvince))

The mutate() function allows you to add calculated columns to the data frame, as the following code shows.

TM %>%
  filter(CustomerKey < 11005) %>%
  select(CustomerKey, TotalChildren, NumberChildrenAtHome) %>%
  mutate(NumberChildrenAway = TotalChildren - NumberChildrenAtHome)

Here is the result.

  CustomerKey TotalChildren NumberChildrenAtHome NumberChildrenAway
1       11000             2                    0                  2
2       11001             3                    3                  0
3       11002             3                    3                  0
4       11003             0                    0                  0
5       11004             5                    5                  0

Finally, let's do the aggregations, as the title of this article promises. You can use the summarise() function for that task. For example, the following line of code calculates the average value of the Income variable.

summarise(TM, avgIncome = mean(Income))

You can also calculate aggregates in groups with the group_by() function.

summarise(group_by(TM, Country), avgIncome = mean(Income))

Here is the result.

         Country avgIncome
1      Australia  64338.62
2         Canada  57167.41
3         France  35762.43
4        Germany  42943.82
5 United Kingdom  52169.37
6  United States  63616.83

The top_n() function works similarly to the T-SQL TOP clause. Look at the following code.

summarise(group_by(TM, Country), avgIncome = mean(Income)) %>%
  top_n(3, avgIncome) %>%
  arrange(desc(avgIncome))
summarise(group_by(TM, Country), avgIncome = mean(Income)) %>%
  top_n(-2, avgIncome) %>%
  arrange(avgIncome)

I am calling the top_n() function twice, to calculate the top three countries by average income and the bottom two. Note that the direction of the selection is defined by the sign of the number-of-rows parameter: in the first call, 3 selects the top three, shown in descending order, and in the second call, -2 selects the bottom two, shown in ascending order. Here is the result of the previous code.

        Country avgIncome
1     Australia  64338.62
2 United States  63616.83
3        Canada  57167.41

  Country avgIncome
1  France  35762.43
2 Germany  42943.82

Finally, you can store the results of the dplyr functions in a normal data frame. The following code creates a new data frame and then shows the data graphically. Note that the barchart() function comes from the lattice package, which therefore must be loaded first.

library(lattice)
TM1 = summarise(group_by(TM, Country), avgIncome = mean(Income))
barchart(TM1$avgIncome ~ TM1$Country)

The result is a bar chart of the average income per country.
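For comparison, pandas has direct counterparts of top_n() in the nlargest() and nsmallest() methods. Here is a minimal sketch; the data frame simply replays the per-country averages shown above.

import pandas as pd

# The per-country averages from the dplyr result above
avg_income = pd.DataFrame({
    'Country': ['Australia', 'Canada', 'France', 'Germany',
                'United Kingdom', 'United States'],
    'avgIncome': [64338.62, 57167.41, 35762.43, 42943.82,
                  52169.37, 63616.83]})

print(avg_income.nlargest(3, 'avgIncome'))   # like top_n(3, avgIncome)
print(avg_income.nsmallest(2, 'avgIncome'))  # like top_n(-2, avgIncome)

Like the two top_n() calls, nlargest() returns the rows in descending order and nsmallest() in ascending order.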

Advanced Python Pandas Aggregations

Time to switch to Python. Again, I need to start with importing the necessary libraries and reading the data.

import numpy as np
import pandas as pd
import pyodbc
import matplotlib.pyplot as plt
con = pyodbc.connect('DSN=AWDW;UID=RUser;PWD=Pa$$w0rd')
query = """SELECT g.EnglishCountryRegionName AS Country,
 c.EnglishEducation AS Education,
 c.YearlyIncome AS Income,
 c.NumberCarsOwned AS Cars
FROM dbo.DimCustomer AS c
 INNER JOIN dbo.DimGeography AS g
  ON c.GeographyKey = g.GeographyKey;"""
TM = pd.read_sql(query, con)

From the previous article, you probably remember the describe() function. The following code uses it to calculate the descriptive statistics for the Income variable over countries.

TM.groupby('Country')['Income'].describe()

Here is an abbreviated result.

Country
Australia  count      3591.000000
           mean      64338.624339
           std       31829.998608
           min       10000.000000
           25%       40000.000000
           50%       70000.000000
           75%       80000.000000
           max      170000.000000
Canada     count      1571.000000
           mean      57167.409293
           std       20251.523043

You can use the unstack() function to get a tabular result:

TM.groupby('Country')['Income'].describe().unstack()

Here is the narrowed tabular result.

                 count          mean           std
Country
Australia       3591.0  64338.624339  31829.998608
Canada          1571.0  57167.409293  20251.523043
France          1810.0  35762.430939  27277.395389
Germany         1780.0  42943.820225  35493.583662
United Kingdom  1913.0  52169.367486  48431.988315
United States   7819.0  63616.830797  25706.482289

You can use the aggregate() function to calculate multiple aggregates of multiple columns at the same time. The agg() function is a synonym for aggregate(). Look at the following example.

TM.groupby('Country').aggregate({'Income': 'std',
                                 'Cars': 'mean'})
TM.groupby('Country').agg({'Income': ['max', 'mean'],
                           'Cars': ['sum', 'count']})

The first call calculates a single aggregate for each of the two variables. The second call calculates two aggregates for each of the two variables. Here is the result of the second call.

                 Cars          Income
                  sum count       max          mean
Country
Australia        6863  3591  170000.0  64338.624339
Canada           2300  1571  170000.0  57167.409293
France           2259  1810  110000.0  35762.430939
Germany          2162  1780  130000.0  42943.820225
United Kingdom   2325  1913  170000.0  52169.367486
United States   11867  7819  170000.0  63616.830797

You might dislike the form of the previous result, because the names of the columns are written in two different rows. You might want to flatten the names to a single word per column. You can use the numpy ravel() function to flatten the array of the column names and then concatenate the parts into single names, as the following code shows.

# Renaming the columns
IncCars = TM.groupby('Country').aggregate(
    {'Income': ['max', 'mean'],
     'Cars': ['sum', 'count']})
# IncCars
# Ravel function
# IncCars.columns
# IncCars.columns.ravel()
# Renaming
IncCars.columns = ["_".join(x) for x in IncCars.columns.ravel()]
# IncCars.columns
IncCars

You can also execute the commented code to understand, step by step, how the ravel() function works. Anyway, here is the final result.

                Cars_sum  Cars_count  Income_max   Income_mean
Country
Australia           6863        3591    170000.0  64338.624339
Canada              2300        1571    170000.0  57167.409293
France              2259        1810    110000.0  35762.430939
Germany             2162        1780    130000.0  42943.820225
United Kingdom      2325        1913    170000.0  52169.367486
United States      11867        7819    170000.0  63616.830797

For a nice end, let me show you the results graphically as well.

IncCars[['Cars_sum', 'Cars_count']].plot()
plt.show()

The result is a line chart of the Cars_sum and Cars_count values per country.
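One more pandas option worth mentioning, since it closes the circle back to the T-SQL section: the pivot_table() function with margins=True appends an "All" row and column of subtotals, loosely comparable to the ROLLUP hyper-aggregates. A minimal sketch, reusing the TM data frame loaded above:

# margins=True adds "All" subtotals -- a rough analogue of ROLLUP
pd.pivot_table(TM, values='Income',
               index='Country', columns='Education',
               aggfunc='mean',
               margins=True, margins_name='All')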

Conclusion

This concludes my coverage of aggregations in this data science series, at least for now. However, I am not done with data preparation yet. You will learn about other problems and their solutions in forthcoming data science articles.

Table of contents

Introduction to data science, data understanding and preparation
Data science in SQL Server: Data understanding and transformation – ordinal variables and dummies
Data science in SQL Server: Data analysis and transformation – binning a continuous variable
Data science in SQL Server: Data analysis and transformation – Information entropy of a discrete variable
Data understanding and preparation – basic work with datasets
Data science in SQL Server: Data analysis and transformation – grouping and aggregating data I
Data science in SQL Server: Data analysis and transformation – grouping and aggregating data II
Interview questions and answers about data science, data understanding and preparation