Mounting an Azure Data Lake in Azure Databricks can be done in several ways, but the most secure and recommended one is through a service principal whose credentials are stored in Azure Key Vault. In my attempt to set that up, I found the information scattered across many different articles, so I decided to write this blog to cover everything you need to get it up and running.
More specifically, I will discuss the following topics:
1- Create a secret scope in Azure Databricks that is backed by an Azure Key Vault instance.
2- Set up permissions for your service principal on the data lake account.
3- Store the credentials necessary for your service principal in a key vault.
4- Build a function to mount your data lake.
Assumptions: This article assumes the following:
1- You have already created a service principal and know the application id and secret key for that principal.
2- You already have the Data Lake Gen2, Key Vault and Databricks resources created.
1- Create a secret scope in Azure Databricks that is backed by an Azure Key Vault instance
First things first, we need to authenticate Azure Databricks to the Key Vault instance so that it is at least able to read/list the secrets in that key vault. To do that through the UI, you need to go to the following URL: https://<databricks-instance>#secrets/createScope
See the example below of how this URL will look (please make sure that you use the Databricks instance that appears in the URL when you launch Databricks from the Azure portal): https://adb-XXXXXXXXXXXXXXXX.X.azuredatabricks.net/#secrets/createScope Please note that this URL is case sensitive: the S in Scope must be capitalized as shown above.
Once you launch this URL you will see a screen as shown below. You will need to enter the following information:
– The scope name (any name of your choice). Let’s call it key-vault-secret-scope for now. Remember this name, as we will use it later in our mounting function.
– The DNS Name and Resource ID of your Azure Key Vault instance.
The DNS Name and Resource ID are available under the Properties window of your Azure Key Vault resource. From there, use the Vault URI as the DNS Name and the Resource ID as the Resource ID, as shown below:
You can verify that this worked by going to your Key Vault instance –> Access policies –> verify that the AzureDatabricks application has been added with Get/List permissions granted for secrets (an earlier version of this flow granted Databricks 16 extra permissions that it didn’t really need, so make sure that you only grant it Get/List permissions on secrets, or more than that only if required).
2- Set up permissions for your service principal on the data lake account
There are also two ways of doing this: either through role-based access control (RBAC) or through access control lists (ACLs). I will discuss RBAC for the sake of simplicity.
Ideally, you want to grant your service principal the ability to read/write/delete data on the data lake storage account. For that, you can grant it the Storage Blob Data Contributor role, which gives it enough permissions to manipulate data and containers in the storage account. This can be done by going to your storage account –> Access Control (IAM) –> select Add –> Add role assignment –> select the Storage Blob Data Contributor role and then select the service principal that you need to grant access to.
3- Store credentials necessary for your service principal in a key vault.
To avoid hardcoding your service principal information in Databricks code, it is highly recommended that you store it in your key vault. This is particularly important for the client secret of your service principal. So, for the purpose of this exercise, let’s create a secret called sp-appid to store the application id and a secret called sp-secret to store the service principal client secret. Your key vault would look something like this:
4- Build a function to mount your data lake.
Finally, we are here. Once you have all of that set up, you should be ready for the Databricks function. Your function should look like this:
def mount_data_lake(mount_path,
                    container_name,
                    storage_account_name,
                    service_principal_client_id,
                    azure_ad_directory_id,
                    service_principal_client_secret_key):
    """
    mount_path: the path that you want to mount to. For example, /mnt/mydata
    container_name: name of the container on the data lake.
    storage_account_name: name of the storage account that you want to mount.
    service_principal_client_id: application (client) id for the service principal
    azure_ad_directory_id: tenant id
    service_principal_client_secret_key: the secret key for your service principal
    """
    configs = {"fs.azure.account.auth.type": "OAuth",
               "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
               "fs.azure.account.oauth2.client.id": service_principal_client_id,
               "fs.azure.account.oauth2.client.secret": service_principal_client_secret_key,
               "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/" + azure_ad_directory_id + "/oauth2/token"}
    try:
        storage_account_url = "abfss://" + container_name + "@" + storage_account_name + ".dfs.core.windows.net/"
        dbutils.fs.mount(
            source = storage_account_url,
            mount_point = mount_path,
            extra_configs = configs
        )
    except Exception as ex:
        if 'already mounted' in str(ex.args):
            print(f"Mount: {mount_path} is already mounted")
        else:
            raise Exception(f"Unable to mount {mount_path} on {storage_account_name}. Error message: {ex}")
And now you can simply call your function on any storage account/mount point that you like. For example:
service_principal_client_id = dbutils.secrets.get(scope="key-vault-secret-scope", key="sp-appid")
service_principal_client_secret_key = dbutils.secrets.get(scope="key-vault-secret-scope", key="sp-secret")
azure_ad_directory_id = "your tenant id"
storage_account_name = "name of your storage account"
container_name = "name of your container"
mount_path = "your mount point"
mount_data_lake(mount_path,
container_name,
storage_account_name,
service_principal_client_id,
azure_ad_directory_id,
service_principal_client_secret_key)
Well, that’s it. Congrats! You are ready to start reading from and writing to your Data Lake Gen2 using this mount point. For example, if you named your mount point /mnt/mydata and the container you mounted has a folder called MyFolder containing a file called MyFile.csv, then you can read the CSV file like this:
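A minimal sketch of that read, using the example names above (the spark session and the display helper are available by default in a Databricks notebook):
df = spark.read.csv("/mnt/mydata/MyFolder/MyFile.csv", header=True, inferSchema=True)
display(df)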
In this blog, I would like to discuss how you can use Python to run a Databricks notebook multiple times in a parallel fashion. The whole purpose of a service like Databricks is, of course, to execute code on multiple nodes, called workers, in parallel. But there are times where you need to implement your own parallelism logic to fit your needs.
To follow along, you need a Databricks workspace, a Databricks cluster and two notebooks. The parent notebook orchestrates the parallelism process, and the child notebook will be executed in parallel. The idea is that the parent notebook will pass a parameter to the child notebook, and the child notebook will use that parameter to execute a given task. Without further ado, let’s get to it.
For simplicity, let’s design a child notebook that takes a number as an input and then prints that number multiplied by 10. Also, to make sure that we test our parallelism logic, we will introduce a 20-second sleep in the child notebook. So open up your child notebook and enter the following code in Command 1 (this code will let us pass a parameter from the parent notebook).
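That first command isn’t included above; a minimal sketch of it, assuming the parameter is named numberToProcess (the name used in the next command) with a default of "0", would be:
# Create a text widget so the parent notebook can pass a value in,
# then read it back and cast it to an integer
dbutils.widgets.text("numberToProcess", "0")
numberToProcess = int(dbutils.widgets.get("numberToProcess"))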
Open up a new command in the child notebook and enter the following code, which will calculate the 10 multiplier for our number of interest, sleep for 20 seconds and then print the output:
import time
outputNumber = numberToProcess * 10
time.sleep(20) # sleep for 20 seconds
print('The desired output is {}'.format(outputNumber))
So your child notebook will look something like this
Now that you have your child notebook setup, go to your parent notebook and paste the following code to import the multithreading packages
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
Then we define the function that will execute the child notebook, passing in the number parameter.
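The function body isn’t included above; a minimal sketch, assuming the child notebook is named childnotebook and sits next to the parent notebook, could look like this:
def processAnIntegerNumber(numberToProcess):
    # Run the child notebook with a 5-minute timeout, passing the number as a widget argument
    return dbutils.notebook.run("childnotebook", 300, {"numberToProcess": str(numberToProcess)})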
Finally, the magic command, which takes a list of numbers, spins up a multithreading executor and then maps the list of numbers onto that executor. This command will execute the childnotebook instance five times (because the list contains five numbers) in a parallel fashion.
listOfNumbers = [87,90,12,34,78]
with ThreadPoolExecutor() as executor:
    results = executor.map(processAnIntegerNumber, listOfNumbers)
Executing the parent notebook, you will notice that five Databricks jobs run concurrently, each one executing the child notebook with one of the numbers in the list. This is a snapshot of the parent notebook after execution:
Notice how the overall time to execute the five jobs is about 40 seconds. Now, if we open the link to any one of these jobs, we will notice that it alone took about 32 seconds, indicating that the jobs ran in parallel (run sequentially, five such jobs would have taken roughly 160 seconds).
Hope this blog helps you run your jobs faster and satisfies your “Need for Speed”.
You can follow this blog for Machine learning, data engineering, devops, dynamics and power apps tips and tricks.
As discussed in this article by Databricks, during your work in a notebook you can mount a Blob Storage container, or a folder inside a container, to the Databricks File System. The whole point of mounting a blob storage container is simply to use an abbreviated link to your data through the Databricks File System rather than having to refer to the full URL of your blob container every time you need to read/write data from it. More details on mounting and its usage can be found in the articles referenced above.
The purpose of this article is to suggest a way to check whether the mount point has already been created, and only attempt to create it if it doesn’t exist, using Python.
This can simply be done if we know how to list the existing mount points using Python. Luckily, Databricks offers this through the dbutils.fs.mounts() command. To access the actual mount point we can do something like this:
for mount in dbutils.fs.mounts():
    print(mount.mountPoint)
Knowing how to access mount points enables us to write some Python to only mount if the mount point doesn’t exist. The code should look like the following:
storageAccountName = "your storage account name"
storageAccountAccessKey = "your storage account access key"
blobContainerName = "your blob container name"
if not any(mount.mountPoint == '/mnt/FileStore/MountFolder/' for mount in dbutils.fs.mounts()):
    try:
        dbutils.fs.mount(
            source = "wasbs://{}@{}.blob.core.windows.net".format(blobContainerName, storageAccountName),
            mount_point = "/mnt/FileStore/MountFolder/",
            extra_configs = {'fs.azure.account.key.' + storageAccountName + '.blob.core.windows.net': storageAccountAccessKey}
        )
    except Exception as e:
        print("already mounted. Try to unmount first")
Or, you can skip the check and rely on an error handler that prints a message if the blob is already mounted, as such:
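A sketch of that variant, dropping the explicit check and relying on the exception message alone (same variables as above):
try:
    dbutils.fs.mount(
        source = "wasbs://{}@{}.blob.core.windows.net".format(blobContainerName, storageAccountName),
        mount_point = "/mnt/FileStore/MountFolder/",
        extra_configs = {'fs.azure.account.key.' + storageAccountName + '.blob.core.windows.net': storageAccountAccessKey}
    )
except Exception as e:
    # If the container is already mounted, report it instead of failing the notebook
    if 'already mounted' in str(e):
        print("/mnt/FileStore/MountFolder/ is already mounted")
    else:
        raise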
If you got along well with your deployment strategy, you might have gotten to the point where you are using a service principal in DevOps to do your deployments. One of the deployment tasks I needed to do was to create a key vault access policy using a service principal with PowerShell. I simply wanted to do that using the command Set-KeyVaultAccessPolicy. More specifically, I wanted to grant an access policy allowing my data factory to get/list secrets in my key vault. Running my code with my Windows account, everything worked as expected. Running my code using the service principal, it failed with the following error:
Exception: An error occurred: System.Exception: Insufficient privileges to complete the operation.at Set-KeyVaultAccessPolicy,
This occurred even though my service principal had been granted Contributor access to the resource group where the key vault lives, which was the same level of permission my account had. So why was my account able to run Set-KeyVaultAccessPolicy while my service principal didn’t have enough privileges to do it?
It turns out that in order for a service principal to create an access policy, it needs to validate the object ID of your resource against Azure Active Directory. To fix this, you can try one of the following two solutions:
Solution 1: Change the API permissions for your service principal to allow reading of the Azure Active Directory graph.
This can be done by going to your service principal and selecting the API permissions blade. Click Add a permission, as can be seen in the image below.
Once there, scroll all the way to the bottom and select “Azure Active Directory Graph”. Then grant the Directory.Read.All permission under the application permissions option, as can be seen below.
This will enable your service principal to verify the object ID and therefore create the necessary access policies.
Solution 2: Add the -BypassObjectIdValidation flag to your command
This will let your command skip the object ID validation step and create the access policy (even though this solution has worked for many people, it didn’t work for me). Your command will look the same as before, just with the -BypassObjectIdValidation flag added.
Say that you have a dataframe in Pandas and you are interested in finding the top n records for each group. Depending on your need, top n can be defined based on a numeric column in your dataframe or it can simply be based on the count of occurrences for the rows in that group.
For example, suppose (god forbid) that I have a retail store with four branches (A, B, C, D). Also, suppose that for each day, I would like to get the three branches with the most items sold. So, I would like to take the day as a group, count the number of items sold in each branch for that day and then pick the three branches with the highest number of sales. This is our problem; now, without further ado, let’s see the code for our example.
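The sample data printed below can be built in a few lines; a sketch, assuming the dataframe is called sales as in the aggregation code further down:
import pandas as pd

sales = pd.DataFrame({
    'branch_name': ['A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D',
                    'A', 'A', 'B', 'B', 'C', 'D', 'D', 'D', 'D'],
    'date': ['2020-01-01'] * 9 + ['2020-01-02'] * 9,
    'item_no': ['I' + str(i) for i in range(1, 19)]
})
print(sales)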
branch_name date item_no
0 A 2020-01-01 I1
1 A 2020-01-01 I2
2 B 2020-01-01 I3
3 B 2020-01-01 I4
4 C 2020-01-01 I5
5 C 2020-01-01 I6
6 C 2020-01-01 I7
7 C 2020-01-01 I8
8 D 2020-01-01 I9
9 A 2020-01-02 I10
10 A 2020-01-02 I11
11 B 2020-01-02 I12
12 B 2020-01-02 I13
13 C 2020-01-02 I14
14 D 2020-01-02 I15
15 D 2020-01-02 I16
16 D 2020-01-02 I17
17 D 2020-01-02 I18
Great, from the dataframe above, you can see that for 2020-01-01, we have most sales coming from branches A, B and C whereas for 2020-01-02, we have most sales coming from branches A, B and D. Let’s write the expression to return what we just concluded with our eyes.
Solution
The solution is as follows:
1- We need to count the number of items sold for each day and each branch.
sales_agg = sales.groupby(['date', 'branch_name']).agg({'item_no':'nunique'}).reset_index()
print(sales_agg)
date branch_name item_no
0 2020-01-01 A 2
1 2020-01-01 B 2
2 2020-01-01 C 4
3 2020-01-01 D 1
4 2020-01-02 A 2
5 2020-01-02 B 2
6 2020-01-02 C 1
7 2020-01-02 D 4
2- For each day, we need to sort the branches by the number of items sold in descending order
sales_sorted = sales_agg.groupby(['date']).apply(lambda x: x.sort_values(['item_no'],ascending = False)).reset_index(drop = True)
print(sales_sorted)
date branch_name item_no
0 2020-01-01 C 4
1 2020-01-01 A 2
2 2020-01-01 B 2
3 2020-01-01 D 1
4 2020-01-02 D 4
5 2020-01-02 A 2
6 2020-01-02 B 2
7 2020-01-02 C 1
3- Now for each date, we need to pick the top n records (3 in our case)
sales_sorted.groupby(['date']).head(3)
date branch_name item_no
0 2020-01-01 C 4
1 2020-01-01 A 2
2 2020-01-01 B 2
4 2020-01-02 D 4
5 2020-01-02 A 2
6 2020-01-02 B 2
Great, we are done. This has returned the top 3 branches based on total number of items sold for each day. Hope this helps you solve your own problem.
In this blog, I would like to go through a full machine learning project using regression modeling, following common-sense data cleaning and processing. This project should cover most of the checkpoints of any real-world machine learning project, utilizing some of the most powerful and commonly used regression algorithms. I will be covering:
1- The process of data exploration and why it is not just a fancy thing to do to impress business users.
2- How to divide your numbers (numeric variables) from your letters (categorical variables) and then bring them back together.
3- The ultimate painful experience of missing values and how to cheat on them.
4- Letters, oh god I hate letters, show me the numbers from these categories.
5- Let’s bring our best gladiators (support vector machines, random forests, gradient boosting, etc.) and see what they can learn.
6- Which algorithm was best at learning? Are they consistent in their learning? Can I simply visualize the performance of each model and let my eye do the comparison because my brain is busy with other things?
Data Ingestion and Exploration
The first thing we need to do, of course, is to load/read our data. For the purpose of this blog, I decided to use one of the most popular datasets for regression purposes: the house pricing dataset. If you want to follow along, feel free to download the file from https://www.kaggle.com/c/house-prices-advanced-regression-techniques/download/EGpnHbJBO1XGF5B35WNU%2Fversions%2FHOSha0bPv4HEBGjo9O9x%2Ffiles%2Ftrain.csv. This dataset contains several numerical and categorical predictors to help predict the sale price of a house. So, the objective is to use the sale price of historical houses to come up with rules that map our given predictors to the sale price.
Once it is downloaded on your local computer, you can read the file and show the first few rows using the following code, of course after loading all the required libraries:
""" Import required libraries Main Libraries"""
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pyodbc
import os
import sklearn
import seaborn as sns
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
%matplotlib inline
# Basic Statistics Packages
from scipy.stats import skew, norm
from scipy.special import boxcox1p
from scipy.stats import boxcox_normmax
# ML Libraries
from mlxtend.regressor import StackingCVRegressor
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, scale, StandardScaler,RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor, BaggingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge, RidgeCV, ElasticNet, ElasticNetCV, BayesianRidge, Lasso
from sklearn.svm import SVR
from mlxtend.regressor import StackingCVRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
house = pd.read_csv("C:\\train.csv")
house.head()
After having a first look at the data, it is always best to split your variables into numeric and categorical. The reason is that categorical variables mostly need to be handled differently than numeric variables for ML purposes. This is particularly important because 1) we impute missing values differently, 2) we apply transformations such as one-hot encoding only on categorical variables, and 3) numeric variables use different visuals than categorical variables during the exploration phase.
# In addition to splitting variables into numeric and categorical we need to remove
# the response variable and the id variable from the lists
response_var = 'SalePrice'
id_var = 'Id'
num_vars = [col for col in house.columns if house[col].dtype in ['int64','float64']]
cat_vars = [col for col in house.columns if house[col].dtype in ['object']]
num_vars.remove(response_var)
num_vars.remove(id_var)
print(num_vars)
print(cat_vars)
Okay, looks good. We have a better idea of what our numeric and categorical predictors are from a 30,000-foot view. Now, let’s have a closer look at our most important variable, the response variable SalePrice. Let’s check its distribution:
sns.distplot(house[response_var])
Shows the distribution for the sales price response variable.
From the plot above we can see that the variable ‘SalePrice’ is fairly positively skewed (it has a few houses with a very high SalePrice in comparison to other houses), but not so strongly skewed, which is a good thing. Do we have to treat this variable? Not necessarily, but it is highly recommended. Why? Simply because most of the machine learning algorithms we will be using here assume that the variable is normally distributed. The most obvious way to treat a skewed variable is to get rid of the observations (houses) that have these very high values of ‘SalePrice’, but other, more commonly used approaches can be applied as well, like a log transformation or taking the square root of the variable. So, let’s pick the log transformation for now before we delve any deeper into our analysis.
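A sketch of that transformation (only the resulting plot is shown below); np.log is one common choice here, np.log1p is another:
# Log-transform the response to reduce the positive skew
house[response_var] = np.log(house[response_var])
sns.distplot(house[response_var])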
Log-transformed SalePrice. It looks much nicer now, with more normally distributed values than the absolute values (the values before the log transformation).
Awesome, now that we have a fairly normally distributed response variable, let’s have a look at how this variable is correlated with the other numerical predictors. The best visual for seeing the relationship between two numeric variables is a scatter plot (yes, Mr. Obvious is here). Scatter plots help tremendously in four main aspects: 1) Which predictors are most strongly correlated with our response variable? 2) Which predictors don’t show any variability in their values? 3) What does the relationship between each predictor and the response variable look like; linear or non-linear? 4) Are there very obvious outliers in any of these predictor variables? Without further ado, let’s plot our data and check for these four points:
# Define a figure with a defined size
fig = plt.figure(figsize=(12, 120))
plt.subplots_adjust(right=2)
plt.subplots_adjust(top=2)
# Iterate through the numeric columns of the data frame house
for i, feature in enumerate(list(house[num_vars]), 1):
    # Define the location of the plot for each variable using 3 coordinates:
    # number of rows, number of columns, location of the plot on the grid
    plt.subplot(len(list(num_vars)), 3, i)
    # Do the actual plot for the given feature
    sns.scatterplot(x=feature, y=response_var, data=house)
    # y and x labels
    plt.xlabel('{}'.format(feature), size=15, labelpad=12.5)
    plt.ylabel('SalePrice', size=15, labelpad=12.5)
    plt.tick_params(axis='x', labelsize=12)
    plt.tick_params(axis='y', labelsize=12)
plt.show()
Okay, this results in a fairly large picture, so I will cut and paste the output into multiple images.
Okay, there is a hell of a lot to talk about here. As I mentioned, scatter plots never fail you; they are that important and revealing. A very powerful visualization indeed.
Outliers
This blog is not intended to show you all the different fancy ways to detect outliers. Here I am following the simplest possible approach there is: I look at the scatter plots and see where observations don’t make much sense visually. Of course, these outliers can be important and can occur regularly in your population, but based on the sample we have here, the points I have highlighted, using my fancy yellow circles, seem to be outliers that we can live without. I wasn’t conservative in my selection, so as a first order of business I will get rid of all houses highlighted in the yellow circles above.
Based on what we have highlighted in yellow, we need to get rid of houses with the following values:
– LotFrontage is more than 150
– LotArea is more than 50000
– MasVnrArea is more than 1000
– BsmtFinSF1 is more than 3000
– BsmtFinSF2 is more than 1200
– TotalBsmtSF is more than 4000
– 1stFlrSF is more than 3000
– 2ndFlrSF is more than 1750
– GrLivArea is more than 4000
– BedroomAbvGr is more than 7
– TotRmsAbvGrd is more than 12
– GarageArea is more than 1200
– WoodDeckSF is more than 750
– OpenPorchSF is more than 400
– EnclosedPorch is more than 500
Of course, depending on your project and your data, you might need to be more or less conservative.
# Write code to remove outliers, as we decided from above
print('data frame dimensions before removing outliers is {}'.format(house.shape))
house = house[house['LotFrontage'] <= 150]
house = house[house['LotArea'] <= 20000]
house = house[house['MasVnrArea']<= 1000]
house = house[house['BsmtFinSF1']<= 3000]
house = house[house['BsmtFinSF2']<= 1200]
house = house[house['TotalBsmtSF']<= 4000]
house = house[house['1stFlrSF'] <= 3000]
house = house[house['2ndFlrSF']<= 1750]
house = house[house['GrLivArea']<= 4000]
house = house[house['BedroomAbvGr'] <= 7]
house = house[house['TotRmsAbvGrd'] <= 12]
house = house[house['GarageArea'] <= 1200]
house = house[house['WoodDeckSF']<= 750]
house = house[house['OpenPorchSF'] <= 400]
house = house[house['EnclosedPorch'] <= 500]
print('data frame dimensions after removing outliers is {}'.format(house.shape))
Great, now that the outliers have been removed, we can see the relationships between our predictor variables and the response variable more clearly, and this helps us define visually which variables matter most to us. For example, below is the visual for ‘LotArea’ vs ‘SalePrice’ before and after the outliers were removed. You can see how, visually, the relationship became much clearer. And trust me on this: ML algorithms pick up patterns fairly similarly to how we ourselves define patterns, so by removing these outliers we first help our own minds see what is important to us, and we definitely make it easier for the algorithm to define the relationship.
Separate the wheat from the chaff
Now that we have removed the most obvious outliers, let’s look at the correlation between the different predictors and ‘SalePrice’ to keep the important variables in mind while we continue with this project. There are two approaches here. The manual approach, where we look at the scatter plots above and try to decipher the correlation visually, and the more formal way, which is to build a correlation plot specifically made to help us understand correlation. Let’s try both here. By looking at the scatter plots generated after removing outliers, we can conclude the following:
– LotArea is highly correlated with SalePrice.
– LotFrontage is highly correlated with SalePrice.
– OverallQual is highly correlated with SalePrice.
– TotalBsmtSF is highly correlated with SalePrice.
– 1stFlrSF is highly correlated with SalePrice.
– GrLivArea is highly correlated with SalePrice.
– Many other variables are highly correlated with SalePrice.
– BsmtFinSF2 doesn’t show much variability. We don’t lose much getting rid of it.
– LowQualFinSF doesn’t show much variability. We don’t lose much getting rid of it.
– PoolArea, ScreenPorch, 3SsnPorch and MiscVal don’t show much variability. We wouldn’t lose much getting rid of them.
In addition to seeing which variables are correlated and which are not, we can see that many variables are linearly correlated with the SalePrice variable. This indicates that a linear model might do a very good job predicting the value of SalePrice. We shall see later whether this is a valid observation.
# Remove the unwanted variables from the list of numeric variables
remove_vars = ['BsmtFinSF2','LowQualFinSF','PoolArea','ScreenPorch','3SsnPorch','MiscVal']
num_vars = [item for item in num_vars if item not in remove_vars]
print(num_vars)
Missing Values
Even though different ML algorithms can differ significantly in how they map their inputs to outputs, they all seem to agree on one thing: they hate missing values. Again, this article is not about discussing the latest and greatest strategies for dealing with missing values; rather, it is to show that dealing with them is an essential step of the ML process. That being said, across the projects I have worked on, I have never seen a better strategy for imputing missing values than common sense and awareness of the data itself and of the problem at hand. For instance, sometimes a missing value simply means a zero for a particular variable. Knowing something like this is far more powerful, and far easier, than implementing an algorithmic approach to impute those missing values; yes, even better than a KNN approach to estimate them. Okay, let’s see how bad the problem is. Let’s first run a script to list the variables with missing values and the percentage of missing values in each variable.
missing_dict = {}
for var in house.columns:
    percent_missing = (house[var].isnull().sum() / house.shape[0]) * 100
    if percent_missing > 0:
        missing_dict[var] = percent_missing
        print('variable: {} , missing_values: {}'.format(var, percent_missing))
Let’s visualize missing values per variable
# Sort the dictionary from smallest to biggest value
missing_dict = {k: v for k, v in sorted(missing_dict.items(), key=lambda item: item[1])}
plt.figure(figsize=(12,8))
plt.bar(range(len(missing_dict)), list(missing_dict.values()), align='center')
plt.xticks(range(len(missing_dict)), list(missing_dict.keys()), rotation = 70)
plt.show()
Great, now we know that we have a total of 16 variables with missing values, and only 5 of them have a very high percentage of missing values (more than 50% of the whole sample size). The rest of the variables have less than 10% missing. So, what do we need to do here? For this problem, we can check the documentation and see whether missing values mean anything. For the variables with a large percentage of missing values, we see if we can derive the missing values; if we can’t, then we might sadly need to get rid of those variables. We follow the same approach for variables with a low percentage of missing values, but if reading about those variables doesn’t tell us how to fill them, then we need to use a method to find a proxy for the missing value.
# Data description states that missing values for the garage represent no garage (we replace them with None)
for var in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
    house[var] = house[var].fillna('None')
house['GarageYrBlt'] = house['GarageYrBlt'].fillna(0)
# Data description states that missing values for the basement represent no basement (we replace them with None)
for var in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):
    house[var] = house[var].fillna('None')
# Other variables where a missing value indicates the feature doesn't exist; we replace these with None as well
for var in ('PoolQC', 'Alley', 'Fence', 'FireplaceQu', 'Electrical'):
    house[var] = house[var].fillna('None')
# Other variables:
# MiscFeature is a variable to add information not covered by other categories, so let's just get rid of it
cat_vars.remove('MiscFeature')
Okay, great. This should have gotten rid of all the missing values. You can verify that no more missing values remain by running the following script again:
for var in house.columns:
    percent_missing = (house[var].isnull().sum() / house.shape[0]) * 100
    if percent_missing > 0:
        print('variable: {} , missing_values: {}'.format(var, percent_missing))
Feature Engineering
Awesome. We did most of the daunting tasks that are a must for cleaning and improving the quality of our dataset. Most of what we did was just to meet the most basic requirements for an ML algorithm to work correctly and smoothly. With all of that behind you, let’s unleash the artist that lives inside (please don’t unleash your singing or drawing skills, keep those leashed for now). Feature engineering, in its simplest form, is the process of updating existing features or deriving new features from existing ones in the hope of building smarter rules that map features to our response variable. That’s to say, sometimes two independent variables don’t show a strong correlation with the response variable, but if we multiply them, the resulting variable might show a very strong linear or non-linear correlation. This step of the ML process is usually 20% data scientist and 80% subject matter expert (SME). That’s to say, to excel in your feature engineering you need to know the data very well, know the problem at hand, and have a solid understanding of your variables, how they are measured and what exactly they mean from a business perspective. For our problem here, we have a fairly strong sense of the different variables and the house sale problem. But if you are working on a harder business problem, in an engineering field for example, then you need a lot of communication with your SME at this step. Explain to them the power of feature engineering and let them flush all the information they have in their heads right into your model. Never underestimate the importance of this step. For the purpose of this article, I want to reference the ideas discussed in this very good read on Kaggle (https://www.kaggle.com/lavanyashukla01/how-i-made-top-0-3-on-a-kaggle-competition). You can also find many interesting notebooks there with more specific details on this dataset.
house_num = house[num_vars]
house_num.index = house.index
house_cat = house[cat_vars]
house_cat.index = house.index
house_y = house[response_var]
house_num['HasWoodDeck'] = (house_num['WoodDeckSF'] == 0) * 1
house_num['HasOpenPorch'] = (house_num['OpenPorchSF'] == 0) * 1
house_num['HasEnclosedPorch'] = (house_num['EnclosedPorch'] == 0) * 1
house_num['Total_Home_Quality'] = house_num['OverallQual'] + house_num['OverallCond']
house_num['TotalSF'] = house_num['TotalBsmtSF'] + house_num['1stFlrSF'] + house_num['2ndFlrSF']
house_num['Total_sqr_footage'] = (house_num['BsmtFinSF1'] + house_num['1stFlrSF'] + house_num['2ndFlrSF'])
house_num['Total_Bathrooms'] = (house_num['FullBath'] + (0.5 * house_num['HalfBath']) +
house_num['BsmtFullBath'] + (0.5 * house_num['BsmtHalfBath']))
house_num['TotalBsmtSF'] = house_num['TotalBsmtSF'].apply(lambda x: np.exp(6) if x <= 0.0 else x)
house_num['2ndFlrSF'] = house_num['2ndFlrSF'].apply(lambda x: np.exp(6.5) if x <= 0.0 else x)
house_num['GarageArea'] = house_num['GarageArea'].apply(lambda x: np.exp(6) if x <= 0.0 else x)
house_num['GarageCars'] = house_num['GarageCars'].apply(lambda x: 0 if x <= 0.0 else x)
house_num['LotFrontage'] = house_num['LotFrontage'].apply(lambda x: np.exp(4.2) if x <= 0.0 else x)
house_num['MasVnrArea'] = house_num['MasVnrArea'].apply(lambda x: np.exp(4) if x <= 0.0 else x)
house_num['BsmtFinSF1'] = house_num['BsmtFinSF1'].apply(lambda x: np.exp(6.5) if x <= 0.0 else x)
house_num['has2ndfloor'] = house_num['2ndFlrSF'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasgarage'] = house_num['GarageArea'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasbsmt'] = house_num['TotalBsmtSF'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasfireplace'] = house_num['Fireplaces'].apply(lambda x: 1 if x > 0 else 0)
In addition to the transformations done above, you may also want to experiment with applying additional functions to your features; for example, you can apply log, square-root, square and cubic transformations to at least your most important features, as in the sketch below.
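As an illustration (not part of the original code), such transformations could be added like this for a few of the stronger features (TotalSF is the engineered column created above):
# Example: add log, square-root and squared versions of a few important features
for feature in ['LotArea', 'GrLivArea', 'TotalSF']:
    house_num[feature + '_log'] = np.log1p(house_num[feature])
    house_num[feature + '_sqrt'] = np.sqrt(house_num[feature])
    house_num[feature + '_sq'] = house_num[feature] ** 2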
From Letters to Numbers
Great. So far we have done a great job, hopefully with easy and understandable steps, to get to a very advanced stage of our ML project life-cycle. But as you may have noticed, we focused mostly on our numeric variables. This is simply because of the flexibility of the transformations we can apply to numeric features. Even our ML algorithms prefer to work with numbers, so don’t feel guilty about it. We now need to pivot, or spread, our categorical variables to get the information from the row format into the column format. In other words, if feature X contains the values YES/NO, then we need to create a column called X_YES that has a value of 1 for rows that have a YES value and 0 for rows that have a NO value. The only thing we need to watch out for here is the number of unique values these categorical features have. For instance, if a variable has 100 unique values then we will end up with 100 extra columns, a very expensive addition to the dataset that doesn’t necessarily add much value. So, the first order of business is to check the number of unique values: if the number of unique values is large then we use a label encoder, otherwise we use a one-hot encoder. A label encoder simply encodes the categories from letters into numbers, keeping one column in the output. A one-hot encoder creates a new column for each category, as discussed above. Let’s visualize how many unique values each variable has using this code:
# Define a figure with a defined size
fig = plt.figure(figsize=(10, 120))
plt.subplots_adjust(right=2)
plt.subplots_adjust(top=2)
# Iterate through the categorical columns of the data frame house_cat
for i, var in enumerate(list(house_cat[cat_vars]), 1):
    # Define the location of the plot for each variable using 3 coordinates:
    # number of rows, number of columns, location of the plot on the grid
    plt.subplot(len(list(cat_vars)), 5, i)
    value_counts = house_cat[var].value_counts()
    df_counts = value_counts.rename_axis('unique_values').reset_index(name='counts')
    # Do the actual plot for the given variable
    sns.barplot(x='unique_values', y='counts', data=df_counts)
    # Title and axis labels
    plt.title('{}'.format(var), size=12)
    plt.xlabel('')
    plt.ylabel('')
    plt.xticks(rotation=70)
plt.show()
Great, I guess we are really lucky. None of the variables shows an extremely large number of unique values. This makes the decision really easy: we create a boolean (1/0) column for each unique value of each variable. This can be accomplished either with the OneHotEncoder class or with the get_dummies function in Python. For simplicity, I will go ahead and use the get_dummies function.
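The encoding step itself isn’t shown above; a sketch, joining the encoded categorical columns back to the numeric ones (house_cat_encoded and house_model are illustrative names):
# One-hot encode the categorical variables and join them back to the numeric ones
house_cat_encoded = pd.get_dummies(house_cat)
house_model = pd.concat([house_num, house_cat_encoded], axis=1)
print(house_model.shape)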
By now, we have at our disposal a clean dataset with all the previous steps applied. Now it is time to put the machine learning algorithms to work and see how they perform in predicting the ‘SalePrice’ response variable.
Before we do any prediction, we need to define our prediction error functions. That’s to say, for any prediction we get, we need to know how far away our prediction is from the actual value. This gives us an idea of how good our model is. For the purpose of our experiment, we need two error functions: the first calculates the error between predicted and actual values for a given fitted model; the second calculates the error using cross-validation. 10-fold cross-validation, for example, splits the training data into 10 different folds. Each time, 1 fold is used for validation and the remaining 9 folds are used to train the model, and we estimate the prediction error for that round. In the second round we pick the second fold as our validation set and the remaining 9 folds to train the model, and we calculate the prediction error again. We repeat this ten times and therefore get 10 different prediction error measures. This gives us a more comprehensive estimate of the prediction error, for which we can calculate a mean and a standard deviation. Check out the image below with the provided link to better understand the cross-validation process.
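The two error functions and the train/test split aren’t shown in the original post; a sketch consistent with the names used in the snippet below (cv_rmse, models, X_train, y_train), using house_model from the encoding sketch above and house_y from the feature-engineering section, with the model list itself being an assumption:
# Split the features and the (log-transformed) response into training and test sets
X_train, X_test, y_train, y_test = train_test_split(house_model, house_y, test_size=0.3, random_state=42)

# Error between predicted and actual values for a fitted model
def rmse(y_actual, y_predicted):
    return np.sqrt(mean_squared_error(y_actual, y_predicted))

# Cross-validated error for a model (10 folds)
kf = KFold(n_splits=10, shuffle=True, random_state=42)
def cv_rmse(model, X, Y):
    return np.sqrt(-cross_val_score(model, X, Y, scoring="neg_mean_squared_error", cv=kf))

# Candidate models to compare (all imported at the top of the post)
models = [('ridge', Ridge()),
          ('svr', SVR()),
          ('rf', RandomForestRegressor()),
          ('gbr', GradientBoostingRegressor()),
          ('xgb', XGBRegressor()),
          ('lgb', LGBMRegressor())]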
The first thing we need to do is visualize our cross-validation results. Let’s get the cross-validation results and plot them in the same code snippet:
## Calculate and plot the cross validation error rate
cv_results = []
cv_names = []
for name, model in models:
    cv_scores = cv_rmse(model=model, X=X_train, Y=y_train)
    cv_results.append(cv_scores)
    cv_names.append(name)
    msg = "%s: %f (%f)" % (name, cv_scores.mean(), cv_scores.std())
    print(msg)
fig = plt.figure()
fig.suptitle('Algorithm Comparison for House Price Cross Validation Results')
ax = fig.add_subplot(111)
plt.boxplot(cv_results)
ax.set_xticklabels(cv_names)
plt.show()
Great! We are seeing some really great results. Most algorithms show similar error rates. The lowest error rate, surprisingly, seems to be for the linear ridge regression. If we were to deploy this to production I would definitely choose the ridge regression results. In addition to its low error compared to the other algorithms, ridge regression has the most powerful advantage over all of them: ease of interpretability. That’s to say, it would be really easy to understand the magnitude of each feature’s contribution to the final predicted score for SalePrice. This lets us go beyond the predicted value and into understanding which features contribute negatively or positively to that score, a very powerful tool indeed in the field of machine learning. But interpretability, or explainability, is out of the scope of this article and therefore I won’t delve much into the concept here.
Very sweet. Now that we have seen that our cross-validation results are reasonable, let’s make predictions using our fitted models and see how the predicted values compare to the actual values.
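The fitting step isn’t shown in the original post; a sketch that produces the fitted-model names used below, with the composition of the stacked model (same base learners, ridge as meta-regressor) being an assumption:
# Fit each model on the training set
ridge_model_fit = Ridge().fit(X_train, y_train)
svr_model_fit = SVR().fit(X_train, y_train)
rf_model_fit = RandomForestRegressor().fit(X_train, y_train)
gbr_model_fit = GradientBoostingRegressor().fit(X_train, y_train)
xgb_model_fit = XGBRegressor().fit(X_train, y_train)
lgb_model_fit = LGBMRegressor().fit(X_train, y_train)

# Stack the base learners with ridge regression as the meta-model
stack_gen_model = StackingCVRegressor(regressors=(Ridge(), SVR(), RandomForestRegressor(),
                                                  GradientBoostingRegressor(), XGBRegressor(), LGBMRegressor()),
                                      meta_regressor=Ridge(),
                                      use_features_in_secondary=True)
stack_gen_model_fit = stack_gen_model.fit(np.array(X_train), np.array(y_train))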
## Make prediction on the training data for each model
y_train_stack = stack_gen_model_fit.predict(X_train)
y_train_lgb = lgb_model_fit.predict(X_train)
y_train_svr = svr_model_fit.predict(X_train)
y_train_ridge = ridge_model_fit.predict(X_train)
y_train_rf = rf_model_fit.predict(X_train)
y_train_gbr = gbr_model_fit.predict(X_train)
y_train_xgb = xgb_model_fit.predict(X_train)
## Make prediction on the test data for each model
y_test_stack = stack_gen_model_fit.predict(X_test)
y_test_lgb = lgb_model_fit.predict(X_test)
y_test_svr = svr_model_fit.predict(X_test)
y_test_ridge = ridge_model_fit.predict(X_test)
y_test_rf = rf_model_fit.predict(X_test)
y_test_gbr = gbr_model_fit.predict(X_test)
y_test_xgb = xgb_model_fit.predict(X_test)
Now that we have our predicted values from all the models, let’s visualize the actual values versus the predicted values across the different models:
plt.scatter(y_train_ridge, y_train, c = "blue", marker = "s", label = "Training data")
plt.scatter(y_test_ridge, y_test, c = "lightgreen", marker = "s", label = "Testing data")
plt.title("Ridge Regression For Sale Price")
plt.xlabel("Predicted values")
plt.ylabel("Real values")
plt.legend(loc = "upper left")
plt.plot([10.5, 13], [10.5, 13], c = "red")
plt.show()
plt.scatter(y_train_lgb, y_train, c = "blue", marker = "s", label = "Training data")
plt.scatter(y_test_lgb, y_test, c = "lightgreen", marker = "s", label = "Testing data")
plt.title("Light Gradient Boosting Regression For Sale Price")
plt.xlabel("Predicted values")
plt.ylabel("Real values")
plt.legend(loc = "upper left")
plt.plot([10.5, 13], [10.5, 13], c = "red")
plt.show()
Conclusion
If you made it this far then I have to say congratulations! You have built a fairly sophisticated model to predict house prices without using any of the advanced techniques in machine learning. This article was meant to show that by following common sense with your data you can achieve fairly high accuracy for your model. That’s to say, we achieved some fairly good results without implementing any hyperparameter tuning, pipelines or advanced feature selection. This is not to underestimate the value of those steps, but rather to show that good results can still be achieved by following the steps discussed in this article. I will be blogging about those topics in future posts.
References:
In addition to the links above, some of the ideas and code snippets used in this article were adapted from the following great articles and notebooks:
Let’s say that you have a CSV file, a blob container and access to a Databricks workspace. The purpose of this mini blog is to show how easy the process is, from having a file on your local computer to reading the data into Databricks. I will go through the process of uploading the CSV file manually to an Azure blob container and then reading it in Databricks using Python code.
Step 1: Upload the file to your blob container
This can be done simply by navigating to your blob container. From there, you can click the upload button and select the file you are interested in. Once selected, you need to click the upload button in the upload blade. See the screenshot below.
Once uploaded, you will be able to see the file available in your blob container as shown below:
Step 2: Get credentials necessary for databricks to connect to your blob container
From your Azure portal, navigate to All resources, select your blob storage account and, under Settings, select Access keys. Once there, copy the key under key1 to a local notepad.
Step 3: Configure DataBricks to read the file
Here, you need to navigate to your Databricks workspace (create one if you don’t have one already) and launch it. Once launched, go to Workspace and create a new Python notebook.
To start reading the data, you first need to configure your Spark session to use the credentials for your blob container. This can simply be done through the spark.conf.set command. More precisely, we start with the following:
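A minimal sketch of that configuration and the read that follows, with placeholder names for the storage account, container, file and the access key copied in step 2:
# Point Spark at the storage account using the access key
spark.conf.set(
    "fs.azure.account.key.<your-storage-account-name>.blob.core.windows.net",
    "<access-key-from-step-2>")

# Read the CSV straight from the blob container
df = spark.read.csv(
    "wasbs://<your-container-name>@<your-storage-account-name>.blob.core.windows.net/<your-file>.csv",
    header=True, inferSchema=True)
display(df)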