Predicting House Prices

So you have a house for sale, or you are thinking of buying one? What is a fair price for it? Can we predict it accurately?

Let’s use the “House Sales in King County” dataset available on Kaggle to answer that question. Each row of the dataset describes a home sold between May 2014 and May 2015, along with its price in US dollars. Some of the other features include:

  • bedrooms - number of bedrooms
  • bathrooms - number of bathrooms
  • floors - number of floors
  • yr_built - year built
  • zipcode - ZIP code of the house
  • long - longitude
  • lat - latitude
  • condition - building condition (ordered categorical variable in the range 1 - 5)
  • grade - construction quality of improvements (ordered categorical variable in the range 1 - 13)

Even if you are not interested in house prices, you can still learn something here about regression, decision trees, and extreme gradient boosting.

Fire up R and load some libraries:

library(ggplot2)
library(reshape2)
library(plyr)
library(dplyr)
library(rpart)
library(rpart.plot)
library(caret)
library(doMC)
library(scales)
library(GGally)

Load our utility functions, make the results reproducible, and instruct R to use all of our CPU cores (my PC has 8 cores; you might want to adjust that value for yours).

source("utils.R")

set.seed(42)
theme_set(theme_minimal())
registerDoMC(cores = 8)
options(warn=-1)
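
If you prefer not to hard-code the number of cores, you can let R detect it. This is a small optional tweak, not part of the original setup; it uses detectCores() from the base parallel package:

library(parallel)

# Register as many workers as the machine actually has
registerDoMC(cores = detectCores())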

Load and preprocess the dataset

df <- read.csv("data/kc_house_data.csv", stringsAsFactors = FALSE)
print(paste("rows:", nrow(df), "cols:", ncol(df)))
[1] "rows: 21613 cols: 21"

Remove the id and date columns and instruct R to interpret condition, view, grade and waterfront as factors.

df <- df[-c(1, 2)]
df$condition <- as.factor(df$condition)
df$view <- as.factor(df$view)
df$grade <- as.factor(df$grade)
df$waterfront <- as.factor(df$waterfront)
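
Since condition and grade are described as ordered categories, you could optionally encode them as ordered factors instead. This is just an alternative I am pointing out, not what the rest of the post does (note that caret's formula interface encodes ordered factors with polynomial contrasts, so the model results may differ slightly):

# Optional alternative: keep the natural ordering of condition (1-5)
# and grade (1-13) by marking the factors as ordered
df$condition <- factor(df$condition, ordered = TRUE)
df$grade <- factor(df$grade, ordered = TRUE)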

Exploration

Do we have missing data?

ggplot_missing(df)

[Figure: heatmap of missing values by variable]

It looks like nothing is missing. Great!
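
ggplot_missing() is defined in utils.R, which is not shown in this post. A minimal sketch of such a helper, assuming it simply draws a heatmap of which cells are NA, could look like this:

# Sketch of a ggplot_missing() helper: melt a TRUE/FALSE "is it NA?" matrix
# and draw it as a heatmap (one tile per cell of the data frame)
ggplot_missing <- function(x) {
  molten <- reshape2::melt(is.na(x))   # Var1 = row, Var2 = column name
  ggplot(molten, aes(x = Var2, y = Var1, fill = value)) +
    geom_raster() +
    scale_fill_grey(name = "", labels = c("present", "missing")) +
    theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
    labs(x = "variable", y = "row")
}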

Maps

The following awesome maps were created by Thierry Ellena. Let’s have a look at them:

House locations
Number of houses by zipcode
Price by zipcode

Let’s look at the distribution of house condition, grade and price:

p1 <- qplot(condition, data=df, geom = "bar",
    main="Number of houses by condition")

p2 <- qplot(grade, data=df, geom = "bar",
    main="Number of houses by grade")

p3 <- ggplot(df, aes(price)) + geom_density() + 
    scale_y_continuous(labels = comma) +
    scale_x_continuous(labels = comma, limits = c(0, 2e+06)) +
    xlab("price") +
    ggtitle("Price distribution")

multiplot(p1, p2, p3)

[Figure: number of houses by condition, number of houses by grade, price distribution]
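
multiplot() also comes from utils.R. If you do not have that helper around, gridExtra::grid.arrange() is a reasonable stand-in (my suggestion, not the original code):

# Stack the three plots vertically without the multiplot() helper
library(gridExtra)
grid.arrange(p1, p2, p3, ncol = 1)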

And a look at price (log10) vs other features:

ggplot(df, aes(x=log10(price), y=sqft_living)) +
    geom_smooth() +
    scale_y_continuous(labels = comma) +
    scale_x_continuous(labels = comma) +
    ylab("sqft of living area") + 
    geom_point(shape=1, alpha=1/10) +
    ggtitle("Price (log10) vs sqft of living area")

[Figure: Price (log10) vs sqft of living area]

ggplot(df, aes(x=grade, y=log10(price))) +
    geom_boxplot() +
    scale_y_continuous(labels = comma) +
    coord_flip() +
    geom_point(shape=1, alpha=1/10) +
    ggtitle("Price (log10) vs grade")

[Figure: Price (log10) vs grade]

ggplot(df, aes(x=condition, y=log10(price))) +
    geom_boxplot() +
    scale_y_continuous(labels = comma) +
    coord_flip() +
    geom_point(shape=1, alpha=1/10) +
    ggtitle("Price (log10) vs condition")

[Figure: Price (log10) vs condition]

ggplot(df, aes(x=as.factor(floors), y=log10(price))) +
    geom_boxplot() +
    scale_y_continuous(labels = comma) +
    xlab("floors") +
    coord_flip() +
    geom_point(shape=1, alpha=1/10) +
    ggtitle("Price (log10) vs number of floors")

[Figure: Price (log10) vs number of floors]

How do the different features correlate?

ggcorr(df, hjust = 0.8, layout.exp = 1) + 
    ggtitle("Correlation between house features")

[Figure: Correlation between house features]

Splitting the data

We will split the data using the caret package. 90% will be used for training and 10% for testing.

train_idx = createDataPartition(df$price, p=.9, list=FALSE)

train <- df[train_idx, ]
test <- df[-train_idx, ]

We will extract the labels (true values) from our test dataset.

test_labels <- test[, 1]   # price is the first column after dropping id and date

A first attempt at building a model

Let’s build a decision tree with the rpart package using all features (except price) as predictors:

tree_fit <- rpart(price ~ ., data=train)   # fit on the training set only
tree_predicted <- predict(tree_fit, test)

And the results of our model:

summary(tree_predicted)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 315400  315400  462800  542000  654900 5081000 
summary(test_labels)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  82000  322100  450000  536300  643900 3419000 
cor(tree_predicted, test_labels)

0.814574873081786

rmse(tree_predicted, test_labels)

197634.31260839
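
rmse() also comes from utils.R. A minimal sketch, assuming it computes the usual root mean squared error between predictions and true values:

# Root mean squared error: square the residuals, average them, take the root
rmse <- function(predicted, actual) {
  sqrt(mean((predicted - actual)^2))
}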

How do the actual and predicted distributions compare to each other?

res <- data.frame(price=c(tree_predicted, test_labels), 
          type=c(replicate(length(tree_predicted), "predicted"), 
                 replicate(length(test_labels), "actual")))

ggplot(res, aes(x=price, colour=type)) +
    scale_x_continuous(labels = comma, limits = c(0, 2e+06)) +
    scale_y_continuous(labels = comma) +
    geom_density()

[Figure: actual vs predicted price distributions (decision tree)]

Not very good, eh? Let’s dig a bit deeper.

What does our model look like?

rpart.plot(tree_fit, digits = 4, fallen.leaves = TRUE,
             type = 3, extra = 101)

[Figure: the fitted decision tree]

It seems that grade, location (lat, long) and living area (sqft_living) are important factors in deciding the price of a house.
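
For a numeric view of the same thing, rpart also stores a variable importance vector on the fitted object:

# Top 5 variables by rpart's importance measure (higher = more useful for splitting)
head(sort(tree_fit$variable.importance, decreasing = TRUE), 5)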

Fitting an xgbTree model

That was a good first attempt. Ok, it wasn’t even that good. So, can we do better? Let’s try an ensemble of boosted trees. For a good introduction to boosted trees see: Introduction to Boosted Trees.

First, we will set up the resampling method used by caret. 10-fold cross-validation should do (run in parallel, if possible).

ctrl = trainControl(method="cv", number=10, allowParallel = TRUE)

Our next step is to find good parameters for XGBoost. See the references below to find out how to tune the parameters for your particular problem. These are the parameters I tried:

param_grid <-  expand.grid(eta = c(0.3, 0.5, 0.8), 
                        max_depth = c(4:10), 
                        gamma = c(0), 
                        colsample_bytree = c(0.5, 0.6, 0.7),
                        nrounds = c(120, 140, 150, 170), 
                        min_child_weight = c(1))
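
The search itself is not shown here; it would look much like the final train() call below, just pointed at this larger grid (a sketch only, and a slow one to run):

# Hypothetical grid search over all parameter combinations above;
# the winning combination ends up in xgb_tune$bestTune
xgb_tune <- train(price ~ ., data=df, method="xgbTree", metric="RMSE",
                  trControl=ctrl, subset = train_idx, tuneGrid=param_grid)
xgb_tune$bestTune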

After trying them out, the following were chosen:

param_grid <- expand.grid(eta=c(0.3), 
                          max_depth= c(6), 
                          gamma = c(0), 
                          colsample_bytree = c(0.6), 
                          nrounds = c(120),
                          min_child_weight = c(1))

Finally, it is time to train our model, using root mean squared error (RMSE) as the scoring metric:

xgb_fit = train(price ~ ., 
            data=df, method="xgbTree", metric="RMSE",
            trControl=ctrl, subset = train_idx, tuneGrid=param_grid)

xgb_predicted = predict(xgb_fit, test, "raw")

And the results:

summary(xgb_predicted)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 143100  323700  464700  542100  649700 6076000 
summary(test_labels)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  82000  322100  450000  536300  643900 3419000 
cor(xgb_predicted, test_labels)

0.926845603997267

rmse(xgb_predicted, test_labels)

132324.026367212
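
That is roughly a 33% reduction in RMSE compared to the single decision tree (1 - 132324/197634 ≈ 0.33), and the correlation with the actual prices climbs from about 0.81 to 0.93.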

A comparison of the actual and predicted distributions:

res <- data.frame(price=c(xgb_predicted, test_labels), 
            type=c(replicate(length(xgb_predicted), "predicted"), 
                  replicate(length(test_labels), "actual")))

ggplot(res, aes(x=price, colour=type)) +
    scale_x_continuous(labels = comma, limits = c(0, 2e+06)) +
    scale_y_continuous(labels = comma) +
    geom_density()

[Figure: actual vs predicted price distributions (xgbTree)]

The distributions look much more alike than the ones produced by the decision tree model.

What are the most important features according to our model?

imp <- varImp(xgb_fit, scale = FALSE)

imp_names = rev(rownames(imp$importance))
imp_vals = rev(imp$importance[, 1])

var_importance <- data_frame(variable=imp_names,
                             importance=imp_vals)
var_importance <- arrange(var_importance, importance)
var_importance$variable <- factor(var_importance$variable, 
        levels=var_importance$variable)

var_importance_top_15 = var_importance[with(var_importance, 
        order(-importance)), ][1:15, ]

ggplot(var_importance_top_15, aes(x=variable, weight=importance)) +
 geom_bar(position="dodge") + ggtitle("Feature Importance (Top 15)") +
 coord_flip() + xlab("House Attribute") + ylab("Feature Importance") +
 theme(legend.position="none")

[Figure: Feature Importance (Top 15)]

Compare distributions of predictions

Let’s see how the three distributions compare to each other:

res <- data.frame(price=c(tree_predicted, xgb_predicted, test_labels), 
                  type=c(replicate(length(tree_predicted), "tree"), 
                         replicate(length(xgb_predicted), "xgb"),
                         replicate(length(test_labels), "actual")
                        ))

ggplot(res, aes(x=price, colour=type)) +
    scale_x_continuous(labels = comma, limits = c(0,2e+06)) +
    scale_y_continuous(labels = comma) +
    geom_density()

[Figure: predicted price distributions of the tree and xgb models vs the actual prices]

Again, we can confirm that the boosted trees model produces a distribution of predictions that is much closer to the actual one.

How well did we do, really?

Let’s randomly choose 10 rows and look at the difference between predicted and actual price:

test_sample <- sample_n(test, 10, replace=FALSE)
test_predictions <- predict(xgb_fit, test_sample, "raw")
actual_prices <- round(test_sample$price, 0)
predicted_prices <- round(test_predictions, 0)
data.frame(actual=actual_prices, 
    predicted=predicted_prices, 
    difference=actual_prices-predicted_prices)
 actual predicted difference
 680000    566726     113274
1400000   1502961    -102961
 400000    465854     -65854
 468000    382870      85130
 220000    208510      11490
 525000    553434     -28434
 404000    559599    -155599
 327000    316226      10774
 475000    460288      14712
 443000    431310      11690

Is this good? Well, personally, I expected more. However, there are certainly more things to try if you are up to it. One interesting question that arises after receiving a prediction is: how sure is the model that the price is what it tells us? But that is a topic for another post.

References

RMSE explained
Gradient Boosting explained
Dataset attributes explained
More information for the attributes

XGBoost

Introduction to XGBoost
Optimizing XGBoost
Parameter tuning in XGBoost

caret

Tuning parameters in caret