In this blog, I would like to walk through a full machine learning project using regression modeling and common-sense data cleaning and processing. This project should cover most of the checkpoints of a real-world machine learning project while using some of the most powerful and commonly used regression algorithms. I will be covering:
1- The process of data exploration and why it is not just a fancy thing to do to impress business users.
2- How to divide your numbers (numeric variables) from your letters (categorical variables) and then bring them back together.
3- The ultimate painful experience of missing values and how to cheat on them.
4- Letters, oh god I hate letters, show me the numbers from these categories.
5- Let’s bring our best gladiators (support vector machines, random forests, gradient boosting algorithm …etc) and see what they can learn.
6- Which algorithm was best at learning? Are they consistent in their learning? Can I simply visualize the performance for each model and let my eye do the comparison because my brain is busy with other things?
Data Ingestion and Exploration
The first thing we need to do, of course, is to load/read our data. For the purpose of this blog, I decided to use one of the most popular datasets for regression: the house pricing dataset. If you want to follow along, feel free to download the file from https://www.kaggle.com/c/house-prices-advanced-regression-techniques/download/EGpnHbJBO1XGF5B35WNU%2Fversions%2FHOSha0bPv4HEBGjo9O9x%2Ffiles%2Ftrain.csv. This dataset contains several numerical and categorical predictors that help predict the sale price of a house. So, the objective here is to use the sale price of historical houses to come up with rules that map our given predictors to the sale price.
Once it is downloaded to your local computer, you can read the file and show the first few rows using the following code, after loading all the required libraries, of course:
""" Import required libraries Main Libraries"""
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pyodbc
import os
import sklearn
import seaborn as sns
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
%matplotlib inline
# Basic Statistics Packages
from scipy.stats import skew, norm
from scipy.special import boxcox1p
from scipy.stats import boxcox_normmax
# ML Libraries
from mlxtend.regressor import StackingCVRegressor
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, scale, StandardScaler,RobustScaler
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor, BaggingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge, RidgeCV, ElasticNet, ElasticNetCV, BayesianRidge, Lasso
from sklearn.svm import SVR
from mlxtend.regressor import StackingCVRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
house = pd.read_csv("C:\\train.csv")
house.head()

After having a first look at the data, it is always best to split your data into numeric and categorical variables. The reason is that categorical variables will mostly need to be handled differently than numeric variables for ML purposes. This is particularly important because 1) we impute missing values differently for each type, 2) we apply some basic data transformations, like one-hot encoding, only to categorical variables, and 3) numeric variables use different visuals than categorical variables during the exploration phase.
# In addition to splitting variables into numeric and categorical we need to remove
# the response variable and the id variable from the list
response_var = 'SalePrice'
id_var = 'Id'
num_vars = [col for col in house.columns if house[col].dtype in ['int64','float64']]
cat_vars = [col for col in house.columns if house[col].dtype in ['object']]
num_vars.remove(response_var)
num_vars.remove(id_var)
print(num_vars)
print(cat_vars)

Okay, looks good. We have a better idea of what our numeric predictors and categorical predictors are from a 30,000-foot view. Now, let’s take a closer look at our most important variable, our response variable, SalePrice. Let’s check its distribution.
sns.distplot(house[response_var])

From the plot above we can see that the variable ‘SalePrice’ is fairly positively skewed (a few houses have a very high SalePrice in comparison to the other houses), but not so strongly skewed, which is a good thing. Do we have to treat this variable? Not necessarily, but it is highly recommended. Why? Simply because most of the machine learning algorithms that we will be using here assume that the variable is normally distributed. The most obvious way to treat a skewed variable is to get rid of the observations (or houses) that have these very high values for ‘SalePrice’. But other, more commonly used approaches can be applied to positively skewed variables, like a log transformation or taking the square root of the variable. So, let’s pick the log transformation for now before we delve any deeper into our analysis.
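If you prefer a number over eyeballing the plot, a quick check of the skewness before and after a couple of candidate transformations might look like the following (just a sketch, using scipy.stats.skew, which is already imported; it doesn’t modify the data):
# Quantify the skew of SalePrice before and after two candidate transformations
print('raw skew:   {:.3f}'.format(skew(house[response_var])))
print('log1p skew: {:.3f}'.format(skew(np.log1p(house[response_var]))))
print('sqrt skew:  {:.3f}'.format(skew(np.sqrt(house[response_var]))))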
house[response_var] = np.log1p(house[response_var])
sns.distplot(house[response_var])

Awesome, now that we have a fairly normally distributed response variable, let’s have a look at how this variable correlates with the other numerical predictors. The best visual for seeing the relationship between two numeric variables is the scatter plot (yes, Mr. Obvious is here). Scatter plots can help tremendously in four main aspects:
1) What predictors are most strongly correlated with our response variable?
2) What predictors don’t show any variability in their values?
3) Is the relationship between each predictor and the response variable linear or non-linear?
4) Are there very obvious outliers in any of these predictor variables?
Without further ado, let’s plot our data and check these four points.
# Define the overall figure size; individual panels are added with plt.subplot below
fig = plt.figure(figsize=(12, 120))
plt.subplots_adjust(right=2, top=2)
# Iterate through the numeric columns from the data frame house
for i, feature in enumerate(list(house[num_vars]), 1):
    # Define the location of each panel using 3 coordinates:
    # number of rows, number of columns, location of the plot on the grid
    plt.subplot(len(list(num_vars)), 3, i)
    # Do the actual plot for the given feature
    sns.scatterplot(x=feature, y=response_var, data=house)
    # x and y labels
    plt.xlabel('{}'.format(feature), size=15, labelpad=12.5)
    plt.ylabel('SalePrice', size=15, labelpad=12.5)
    plt.tick_params(axis='x', labelsize=12)
    plt.tick_params(axis='y', labelsize=12)
plt.show()
Okay, this results in a fairly large picture, so I will cut and paste the output into multiple images.



Okay, there is a hell of a lot to talk about here. As I mentioned, scatter plots never fail you in how important and revealing they are. A very powerful visualization indeed.
Outliers
This blog is not intended to show you all the different fancy ways to detect outliers. Here I am following the simplest possible approach there is: I look at the scatter plots and see where observations don’t make much sense visually. Of course, these outliers can be of importance and can occur regularly in your population, but based on the sample we have here, the points I have highlighted, using my fancy yellow circles, seem to be outliers that we can live without. I wasn’t conservative in my selection, so as a first order of business I will get rid of all the houses highlighted in the yellow circles above.
Based on what we have highlighted in yellow, we need to get rid of houses with the following values:
– LotFrontage is more than 150
– LotArea is more than 20000
– MasVnrArea is more than 1000
– BsmtFinSF1 is more than 3000
– BsmtFinSF2 is more than 1200
– TotalBsmtSF is more than 4000
– 1stFlrSF is more than 3000
– 2ndFlrSF is more than 1750
– GrLivArea is more than 4000
– BedroomAbvGr is more than 7
– TotRmsAbvGrd is more than 12
– GarageArea is more than 1200
– WoodDeckSF is more than 750
– OpenPorchSF is more than 400
– EnclosedPorch is more than 500
Of course, depending on your project and your data, you might need to be more or less conservative.
# Write code to remove outliers, as we decided from above
print('data frame dimensions before removing outliers is {}'.format(house.shape))
house = house[house['LotFrontage'] <= 150]
house = house[house['LotArea'] <= 20000]
house = house[house['MasVnrArea']<= 1000]
house = house[house['BsmtFinSF1']<= 3000]
house = house[house['BsmtFinSF2']<= 1200]
house = house[house['TotalBsmtSF']<= 4000]
house = house[house['1stFlrSF'] <= 3000]
house = house[house['2ndFlrSF']<= 1750]
house = house[house['GrLivArea']<= 4000]
house = house[house['BedroomAbvGr'] <= 7]
house = house[house['TotRmsAbvGrd'] <= 12]
house = house[house['GarageArea'] <= 1200]
house = house[house['WoodDeckSF']<= 750]
house = house[house['OpenPorchSF'] <= 400]
house = house[house['EnclosedPorch'] <= 500]
print('data frame dimensions after removing outliers is {}'.format(house.shape))
Great, now that the outliers have been removed, we can see the relationships between our predictor variables and the response variable more clearly, which helps us define visually which variables are most important to us. For example, below I show the visual for ‘LotArea’ vs ‘SalePrice’ before and after the outliers were removed. You can see how, visually, the relationship became much clearer. And trust me on this, ML algorithms pick up patterns fairly similarly to how we ourselves define patterns, so by removing these outliers we first help our own minds see what is important to us, and we definitely make it easier for the algorithm to better define the relationship.

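If you want to reproduce such a before/after comparison yourself, a sketch like the following could work, assuming you kept a copy of the unfiltered data frame (here called house_raw, a hypothetical name) before applying the filters above:
# Side-by-side scatter of LotArea vs SalePrice before and after outlier removal
# 'house_raw' is assumed to be a copy taken before the outlier filters, e.g.:
# house_raw = house.copy()  # run this right before removing outliers
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14, 5))
sns.scatterplot(x='LotArea', y=response_var, data=house_raw, ax=ax1)
ax1.set_title('Before outlier removal')
sns.scatterplot(x='LotArea', y=response_var, data=house, ax=ax2)
ax2.set_title('After outlier removal')
plt.show()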
Separate the wheat from the chaff
Now that we have removed the most obvious outliers, let’s look at the correlation between the different predictors and ‘SalePrice’, so we can keep the most important variables in mind as we continue with this project. There are two approaches here. The manual approach is to look at the scatter plots from above and try to decipher the correlation visually. The other, more formal way is to draw some sort of correlation plot built specifically to show correlations (a sketch of such a plot is shown after the list below). Let’s try both here. By looking at the scatter plots that we generated after removing the outliers, we can conclude the following:
– LotArea is highly correlated with the SalePrice.
– LotFrontage is highly correlated with the SalePrice.
– OverallQual is highly correlated with SalePrice.
– TotalBsmtSF is highly correlated with SalePrice.
– 1stFlrSF is highly correlated with SalePrice.
– GrLivArea is highly correlated with SalePrice.
– Many other variables are highly correlated with SalePrice.
– BsmtFinSF2 doesn’t show much variability. We don’t lose much by getting rid of it.
– LowQualFinSF doesn’t show much variability. We don’t lose much by getting rid of it.
– PoolArea, ScreenPorch, 3SsnPorch and MiscVal don’t show much variability. We wouldn’t lose much by getting rid of them.
In addition to seeing which variables are correlated and which are not, we can see that many variables are linearly correlated with the SalePrice variable. This indicates that a linear model might do a very good job of predicting the value of SalePrice. We shall see later whether this is a valid observation.
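As for the more formal approach mentioned above, a correlation heatmap is one way to do it. A minimal sketch using the numeric variables could look like this:
# Correlation of each numeric predictor with SalePrice, plus a full heatmap
corr = house[num_vars + [response_var]].corr()
print(corr[response_var].sort_values(ascending=False).head(15))
plt.figure(figsize=(14, 12))
sns.heatmap(corr, cmap='coolwarm', square=True)
plt.show()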
# Remove the unwanted variables from the list of numeric variables
remove_vars = ['BsmtFinSF2','LowQualFinSF','PoolArea','ScreenPorch','3SsnPorch','MiscVal']
num_vars = [item for item in num_vars if item not in remove_vars]
print(num_vars)
Missing Values
Even though different ML algorithms can differ significantly in how they map their inputs to outputs, they all seem to agree on one thing: they all hate missing values. Again, this article is not about discussing the latest and greatest strategies for dealing with missing values, but rather about showing that dealing with missing values is an essential step in the ML process. That being said, from the projects I have worked on, I have never seen a better strategy for imputing missing values than common sense and awareness of the data itself and of the problem at hand. For instance, sometimes a missing value might simply mean a zero for a particular variable. Knowing something like this is much more powerful, and much easier, than implementing an algorithmic approach to impute those missing values. And yes, it is better than implementing a KNN approach to estimate them.
Okay, let’s see how bad the problem is. Let’s first run a script to check the variables with missing values and see the percentage of missing values in each variable.
missing_dict = {}
for var in house.columns:
    percent_missing = (house[var].isnull().sum() / house.shape[0]) * 100
    if percent_missing > 0:
        missing_dict[var] = percent_missing
        print('variable: {} , missing_values: {}'.format(var, percent_missing))

Let’s visualize missing values per variable
# Sort the dictionary for smallest to biggest value
missing_dict = {k: v for k, v in sorted(missing_dict.items(), key=lambda item: item[1])}
plt.figure(figsize=(12,8))
plt.bar(range(len(missing_dict)), list(missing_dict.values()), align='center')
plt.xticks(range(len(missing_dict)), list(missing_dict.keys()), rotation = 70)
plt.show()

Great, now we know that we have a total of 16 variables with missing values, and only 5 of them have a very high percentage of missing values (more than 50% of the whole sample size). The rest of the variables have less than 10% missing values. So, what do we need to do here?
For this particular problem, we can check the documentation and see if missing values mean anything. For the variables with a large percentage of their values missing, we check whether we can derive the missing values; if we can’t, then we might sadly need to get rid of those variables. We follow the same approach for variables with a low percentage of missing values, but if we are unable to derive the missing values by reading about these variables, then we need to use a method to find a proxy for them.
# Data description states that missing values for the garage represent no garage (we replace them with None)
for var in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
    house[var] = house[var].fillna('None')
house['GarageYrBlt'] = house['GarageYrBlt'].fillna(0)
# Data description states that missing values for the basement represent no basement (we replace them with None)
for var in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):
    house[var] = house[var].fillna('None')
# Other variables where a missing value indicates "doesn't exist", which we can also replace with None
for var in ('PoolQC', 'Alley', 'Fence', 'FireplaceQu', 'Electrical'):
    house[var] = house[var].fillna('None')
# MiscFeature is a variable that adds further information not covered by other categories.
# It is mostly missing, so let's just get rid of it (drop the column and remove it from cat_vars)
house = house.drop(columns=['MiscFeature'])
cat_vars.remove('MiscFeature')
Okay, great. This should have gotten rid of all the missing values. You can verify that no missing values are left by running the following script again.
for var in house.columns:
    percent_missing = (house[var].isnull().sum() / house.shape[0]) * 100
    if percent_missing > 0:
        print('variable: {} , missing_values: {}'.format(var, percent_missing))
Feature Engineering
Awesome. We have done most of the daunting tasks that are a must for cleaning and improving the quality of our dataset. Most of what we did was just to meet the most basic requirements for an ML algorithm to work correctly and smoothly. Now that all of that is behind you, let’s unleash the artist that lives inside (please don’t unleash your singing or drawing skills, keep those leashed for now). Feature engineering, in its simplest form, is the process of updating existing features or deriving new features from existing ones, in the hope of building smarter rules that map features to our response variable. That’s to say, sometimes two independent variables don’t show a strong correlation with the response variable, but if we multiply them, the resulting variable might show a very strong linear or non-linear correlation. This step of the ML process is usually 20% data scientist and 80% subject matter expert (SME). To excel at feature engineering you need to know the data very well, know the problem at hand, and have a solid understanding of your variables, how they get measured and what exactly they mean from a business perspective. For our problem here, we have a fairly strong sense of the different variables and the house sale problem. But if you are working on a harder business problem, in an engineering field for example, then you need to have a lot of communication with your SME during this step. Explain to them the power of feature engineering and let them flush all the information they have in their heads right into your model. Never underestimate the importance of this step.
For the purpose of this article, I want to reference the ideas discussed in this very good read at Kaggle ( https://www.kaggle.com/lavanyashukla01/how-i-made-top-0-3-on-a-kaggle-competition). You also can find many interesting notebooks there for more specific details on this dataset.
house_num = house[num_vars].copy()
house_cat = house[cat_vars].copy()
house_y = house[response_var]
# Flag features: compute these before the zero replacements below so the flags stay meaningful
house_num['HasWoodDeck'] = (house_num['WoodDeckSF'] == 0) * 1
house_num['HasOpenPorch'] = (house_num['OpenPorchSF'] == 0) * 1
house_num['HasEnclosedPorch'] = (house_num['EnclosedPorch'] == 0) * 1
house_num['has2ndfloor'] = house_num['2ndFlrSF'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasgarage'] = house_num['GarageArea'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasbsmt'] = house_num['TotalBsmtSF'].apply(lambda x: 1 if x > 0 else 0)
house_num['hasfireplace'] = house_num['Fireplaces'].apply(lambda x: 1 if x > 0 else 0)
# Combined features
house_num['Total_Home_Quality'] = house_num['OverallQual'] + house_num['OverallCond']
house_num['TotalSF'] = house_num['TotalBsmtSF'] + house_num['1stFlrSF'] + house_num['2ndFlrSF']
house_num['Total_sqr_footage'] = (house_num['BsmtFinSF1'] + house_num['1stFlrSF'] + house_num['2ndFlrSF'])
house_num['Total_Bathrooms'] = (house_num['FullBath'] + (0.5 * house_num['HalfBath']) +
                                house_num['BsmtFullBath'] + (0.5 * house_num['BsmtHalfBath']))
# Replace zeros in some area features with non-zero placeholder values
house_num['TotalBsmtSF'] = house_num['TotalBsmtSF'].apply(lambda x: np.exp(6) if x <= 0.0 else x)
house_num['2ndFlrSF'] = house_num['2ndFlrSF'].apply(lambda x: np.exp(6.5) if x <= 0.0 else x)
house_num['GarageArea'] = house_num['GarageArea'].apply(lambda x: np.exp(6) if x <= 0.0 else x)
house_num['GarageCars'] = house_num['GarageCars'].apply(lambda x: 0 if x <= 0.0 else x)
house_num['LotFrontage'] = house_num['LotFrontage'].apply(lambda x: np.exp(4.2) if x <= 0.0 else x)
house_num['MasVnrArea'] = house_num['MasVnrArea'].apply(lambda x: np.exp(4) if x <= 0.0 else x)
house_num['BsmtFinSF1'] = house_num['BsmtFinSF1'].apply(lambda x: np.exp(6.5) if x <= 0.0 else x)
In addition to the transformations done above, you may also want to experiment with applying additional functions to your features; for example, you can apply log, square root, square and cubic transformations, at least to your most important features.
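As a small sketch of what that might look like (the features chosen here are just illustrative, not a recommendation):
# Illustrative extra transformations on a few arbitrarily chosen features
for feat in ['GrLivArea', 'LotArea', 'TotalSF']:
    house_num[feat + '_log'] = np.log1p(house_num[feat])
    house_num[feat + '_sqrt'] = np.sqrt(house_num[feat])
    house_num[feat + '_sq'] = house_num[feat] ** 2
    house_num[feat + '_cube'] = house_num[feat] ** 3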
From Letters to Numbers
Great. So far we have gone through hopefully easy and understandable steps to get to a very advanced stage of our ML project life-cycle. But as you may have noticed, we focused mostly on our numeric variables. This is simply because of the flexibility in the transformations we can apply to numeric features. Even our ML algorithms prefer to work with numbers, so don’t feel guilty about it. Now we need to pivot, or spread, our categorical variables to get the information from the row format into the column format. In other words, if feature X contains the values YES/NO, then we will need to create a column called X_YES that has a value of 1 for rows that have a YES value and 0 for rows that have a NO value. The only thing we need to watch for here is the number of unique values these categorical features have. For instance, if a variable has 100 unique values, then we will end up with 100 extra columns. This would be a very expensive addition to the dataset that doesn’t necessarily add much value. So, the first order of business here is to check the number of unique values; if the number of unique values is large then we use a label encoder, otherwise we use a one-hot encoder. A label encoder simply maps the categories from letters to numbers, keeping a single column in the output. A one-hot encoder creates a new column for each category, as we discussed above. Let’s visualize how many unique values each variable has using this code:
# Define the overall figure size; individual panels are added with plt.subplot below
fig = plt.figure(figsize=(10, 120))
plt.subplots_adjust(right=2, top=2)
# Iterate through the categorical columns from the data frame house_cat
for i, var in enumerate(list(house_cat[cat_vars]), 1):
    # Define the location of each panel using 3 coordinates:
    # number of rows, number of columns, location of the plot on the grid
    plt.subplot(len(list(cat_vars)), 5, i)
    value_counts = house_cat[var].value_counts()
    df_counts = value_counts.rename_axis('unique_values').reset_index(name='counts')
    # Do the actual plot for the given variable
    sns.barplot(x='unique_values', y='counts', data=df_counts)
    plt.title('{}'.format(var), size=12)
    plt.xlabel('')
    plt.ylabel('')
    plt.xticks(rotation=70)
plt.show()



Great, I guess we are really lucky. None of the variables shows an extremely large number of unique values. This makes the decision really easy for us: we create a boolean (1/0) column for each unique value of each variable. This can be accomplished either with the OneHotEncoder class or with the get_dummies function in pandas. For simplicity, I will go ahead and use the get_dummies function.
house_cat = pd.get_dummies(house_cat)
house_x = pd.concat([house_cat, house_num], axis = 1 )
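Had some variables shown a very large number of unique values, a label-encoding sketch like the following could be used for those columns instead of one-hot encoding them (the high_card_vars list here is hypothetical):
# Hypothetical: label-encode high-cardinality categorical columns instead of one-hot encoding them
high_card_vars = []  # e.g. columns with more than, say, 20 unique values
for var in high_card_vars:
    le = LabelEncoder()
    house_cat[var] = le.fit_transform(house_cat[var].astype(str))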
Getting to the Cool Stuff: The Gladiators at Work

By now, we have at our disposal a clean dataset with all the previous steps applied. Now it is time to put the machine learning algorithms to work and see how they perform in predicting the ‘SalePrice’ response variable.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(house_x, house_y, test_size = 0.2, random_state = 0)
Before we make any predictions, we need to define our prediction error functions. That’s to say, for any prediction we get, we need to know how far our prediction is from the actual value. This gives us an idea of how good our model is. For the purpose of our experiment, we need two error functions: the first calculates the error between the predicted values and the actual values for any given model, and the second calculates the error using cross validation. 10-fold cross validation, for example, splits the training data into 10 different folds. Each time, 1 fold is used for validation and the remaining 9 folds are used to train the model, and we estimate the prediction error for that round. In the second round we pick the second fold to be our validation set, use the remaining 9 folds to train the model, and calculate the prediction error again. We repeat this experiment ten times and therefore get 10 different prediction error measures. This gives us a more comprehensive estimate of the prediction error, for which we can calculate a mean and a standard deviation. Check out the image below with the provided link to better understand the cross validation process.

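The two error functions, and the fold object kf used by RidgeCV and by the cross validation loop further down, are not shown above. A minimal sketch consistent with how they are used later (RMSE on the log-transformed SalePrice) could look like this:
# 10-fold cross validation splitter, reused by RidgeCV and by cv_rmse below
kf = KFold(n_splits=10, shuffle=True, random_state=42)

# Error between predicted and actual values for a single set of predictions
def rmse(y_actual, y_predicted):
    return np.sqrt(mean_squared_error(y_actual, y_predicted))

# Cross-validated RMSE for a given (unfitted) model
def cv_rmse(model, X, Y):
    return np.sqrt(-cross_val_score(model, X, Y,
                                    scoring='neg_mean_squared_error',
                                    cv=kf))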
Let’s now define our models and, after defining them, fit them on our training dataset.
# Setup all models
# Light Gradient Boosting Regressor
lightgbm = LGBMRegressor(objective='regression',
                         num_leaves=6,
                         learning_rate=0.01,
                         n_estimators=7000,
                         max_bin=200,
                         bagging_fraction=0.8,
                         bagging_freq=4,
                         bagging_seed=8,
                         feature_fraction=0.2,
                         feature_fraction_seed=8,
                         min_sum_hessian_in_leaf=11,
                         verbose=-1,
                         random_state=42)
# XGBoost Regressor
xgboost = XGBRegressor(learning_rate=0.01,
                       n_estimators=6000,
                       max_depth=4,
                       min_child_weight=0,
                       gamma=0.6,
                       subsample=0.7,
                       colsample_bytree=0.7,
                       objective='reg:squarederror',
                       nthread=-1,
                       scale_pos_weight=1,
                       seed=27,
                       reg_alpha=0.00006,
                       random_state=42)
# Ridge Regressor
ridge_alphas = [1e-15, 1e-10, 1e-8, 9e-4, 7e-4, 5e-4, 3e-4, 1e-4, 1e-3, 5e-2, 1e-2, 0.1, 0.3, 1, 3, 5, 10, 15, 18, 20, 30, 50, 75, 100]
ridge = make_pipeline(RobustScaler(), RidgeCV(alphas=ridge_alphas, cv=kf))
# Support Vector Regressor
svr = make_pipeline(RobustScaler(), SVR(C=20, epsilon=0.008, gamma=0.0003))
# Gradient Boosting Regressor
gbr = GradientBoostingRegressor(n_estimators=6000,
                                learning_rate=0.01,
                                max_depth=4,
                                max_features='sqrt',
                                min_samples_leaf=15,
                                min_samples_split=10,
                                loss='huber',
                                random_state=42)
# Random Forest Regressor
rf = RandomForestRegressor(n_estimators=1200,
                           max_depth=15,
                           min_samples_split=5,
                           min_samples_leaf=5,
                           max_features=None,
                           oob_score=True,
                           random_state=42)
# Stack up all the models above, optimized using xgboost
stack_gen = StackingCVRegressor(regressors=(lightgbm, svr, ridge, gbr, rf),
                                meta_regressor=xgboost,
                                use_features_in_secondary=True)
models = []
models.append(('lightgbm', lightgbm))
models.append(('xgboost', xgboost))
models.append(('ridge',ridge))
models.append(('svr', svr))
models.append(('gbr', gbr))
models.append(('rf', rf))
models.append(('stack_gen', stack_gen))
# Fit the models
print('stack_gen')
stack_gen_model_fit = stack_gen.fit(np.array(X_train), np.array(y_train))
print('lightgbm')
lgb_model_fit = lightgbm.fit(X_train, y_train)
print('Svr')
svr_model_fit = svr.fit(X_train, y_train)
print('Ridge')
ridge_model_fit = ridge.fit(X_train, y_train)
print('RandomForest')
rf_model_fit = rf.fit(X_train, y_train)
print('GradientBoosting')
gbr_model_fit = gbr.fit(X_train, y_train)
print('XBoost')
xgb_model_fit = xgboost.fit(X_train, y_train)
So, the first thing we want to do is visualize our cross validation results. Let’s compute the cross validation scores and plot them in the same code snippet.
## Calculate and plot the cross validation error rate
cv_results = []
cv_names = []
for name, model in models:
    cv_scores = cv_rmse(model=model, X=X_train, Y=y_train)
    cv_results.append(cv_scores)
    cv_names.append(name)
    msg = "%s: %f (%f)" % (name, cv_scores.mean(), cv_scores.std())
    print(msg)
fig = plt.figure()
fig.suptitle('Algorithm Comparison for House Price Cross Validation Results')
ax = fig.add_subplot(111)
plt.boxplot(cv_results)
ax.set_xticklabels(cv_names)
plt.show()

Great! We are seeing some really good results. Most algorithms show similar error rates. The lowest error rate, surprisingly, seems to come from the linear ridge regression. If we were to deploy this to production, I would definitely choose the ridge regression results. In addition to its low error compared to the other algorithms, ridge regression has the most powerful advantage over all of them: ease of interpretability. That’s to say, it is really easy to understand the magnitude with which each feature contributes to the final predicted score for SalePrice. This lets us go beyond the predicted value and understand which features contribute negatively or positively to that score. A very powerful tool indeed in the field of machine learning. But interpretability, or explainability, is out of the scope of this article, so I won’t delve much into that concept here.
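As a tiny taste of that interpretability (just a sketch, assuming the ridge pipeline fitted above), you could peek at the largest coefficients like this:
# Ridge coefficients with the largest absolute value (features are on the RobustScaler'd scale)
ridge_coefs = pd.Series(ridge_model_fit.named_steps['ridgecv'].coef_, index=X_train.columns)
top = ridge_coefs.abs().sort_values(ascending=False).head(15).index
print(ridge_coefs[top])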
Very sweet. Now that we have seen that our cross validation results are reasonable, let’s make predictions using our fitted models and see how the predicted values compare to the actual values.
## Make prediction on the training data for each model
y_train_stack = stack_gen_model_fit.predict(X_train)
y_train_lgb = lgb_model_fit.predict(X_train)
y_train_svr = svr_model_fit.predict(X_train)
y_train_ridge = ridge_model_fit.predict(X_train)
y_train_rf = rf_model_fit.predict(X_train)
y_train_gbr = gbr_model_fit.predict(X_train)
y_train_xgb = xgb_model_fit.predict(X_train)
## Make prediction on the test data for each model
y_test_stack = stack_gen_model_fit.predict(X_test)
y_test_lgb = lgb_model_fit.predict(X_test)
y_test_svr = svr_model_fit.predict(X_test)
y_test_ridge = ridge_model_fit.predict(X_test)
y_test_rf = rf_model_fit.predict(X_test)
y_test_gbr = gbr_model_fit.predict(X_test)
y_test_xgb = xgb_model_fit.predict(X_test)
Now that we have our predicted values from all the models, let’s visualize the actual values versus the predicted values across the different models.
plt.scatter(y_train_ridge, y_train, c = "blue", marker = "s", label = "Training data")
plt.scatter(y_test_ridge, y_test, c = "lightgreen", marker = "s", label = "Testing data")
plt.title("Ridge Regression For Sale Price")
plt.xlabel("Predicted values")
plt.ylabel("Real values")
plt.legend(loc = "upper left")
plt.plot([10.5, 13], [10.5, 13], c = "red")
plt.show()

plt.scatter(y_train_lgb, y_train, c = "blue", marker = "s", label = "Training data")
plt.scatter(y_test_lgb, y_test, c = "lightgreen", marker = "s", label = "Testing data")
plt.title("Light Gradient Boosting Regression For Sale Price")
plt.xlabel("Predicted values")
plt.ylabel("Real values")
plt.legend(loc = "upper left")
plt.plot([10.5, 13], [10.5, 13], c = "red")
plt.show()

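If you would also like a single number next to each plot, the holdout RMSE per model can be computed with the rmse helper sketched earlier (or with sklearn’s mean_squared_error directly); for example:
# Holdout (test set) RMSE for each model, on the log1p(SalePrice) scale
test_preds = {'ridge': y_test_ridge, 'lightgbm': y_test_lgb, 'xgboost': y_test_xgb,
              'svr': y_test_svr, 'gbr': y_test_gbr, 'rf': y_test_rf, 'stack_gen': y_test_stack}
for name, preds in test_preds.items():
    print('{}: test RMSE = {:.4f}'.format(name, rmse(y_test, preds)))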
Conclusion
If you made it this far then I have to say congratulations! You have built a fairly sophisticated model to predict house prices without using any of the more advanced techniques in machine learning. This article was meant to show that by following common sense with your data you can achieve fairly high accuracy with your model. That’s to say, we achieved some fairly good results without implementing any hyperparameter tuning, pipelines or advanced feature selection. This is not to underestimate the value of these steps, but rather to show that good results can still be achieved by following the steps discussed in this article. I will be blogging about these topics in future posts.
References:
In addition to the links above, some of the ideas and code snippets used in this article were adapted from the following great articles and notebooks:
- https://www.kaggle.com/lavanyashukla01/how-i-made-top-0-3-on-a-kaggle-competition
- https://medium.com/@pushkarmandot/https-medium-com-pushkarmandot-what-is-lightgbm-how-to-implement-it-how-to-fine-tune-the-parameters-60347819b7fc
- https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python
- https://www.kaggle.com/apapiu/regularized-linear-models
- https://www.kaggle.com/juliencs/a-study-on-regression-applied-to-the-ames-dataset