Building the regression models behind my FM predictions

In my last post, I promised to explain the analysis behind its conclusions, and that’s what this post is about. It’s a bit technical – it’s all about the models I built and the process of building them, so it might only appeal to the data geeks out there. It’s quite tedious, so I’ll jump right in. If you’re looking for the data and statistics behind my predictions post, I’ll be writing about those soon.

I collected the data for past successful bids here, data for reserve prices here, data for the cities available for auction here, and the store location data for Domino’s, Cafe Coffee Day and Hero MotoCorp from their websites. Right off the bat, that’s a lot of data – and a lot of data rarely comes clean.

Cleaning the data posed two main challenges. The first was the problem of resolution – do I treat Delhi, Gurgaon, Noida, New Delhi, Faridabad and Ghaziabad as one city or not? Nearly every large city suffered from this resolution problem. Eventually, I decided to resolve cities to the same level as the FM list itself – if a separate frequency were being sold for Gurgaon (it isn’t), I’d treat Gurgaon separately from Delhi; otherwise, it’d count as part of Delhi. It seems intuitive in hindsight, but it took me some time to arrive at it.
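To give you an idea, the resolution step looks roughly like this in R. It’s only a sketch – the alias table below is a made-up fragment, not my full mapping:

#map satellite cities onto the city that actually appears on the FM list (illustrative fragment)
alias.map <- c("Gurgaon"   = "Delhi",
               "Noida"     = "Delhi",
               "New Delhi" = "Delhi",
               "Faridabad" = "Delhi",
               "Ghaziabad" = "Delhi")

resolve.city <- function(city) {
  mapped <- alias.map[city]                   #NA when the city isn't an alias
  ifelse(is.na(mapped), city, unname(mapped))
}

resolve.city(c("Noida", "Jaipur"))            #returns "Delhi" "Jaipur"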

The second challenge was straight-up bad data. There are four ways to spell Trichy – the others being Tiruchirapali, Tiruchurappalli and Tiruchy – and every franchise had its own way of spelling it. I tried stripping the vowels from city names to create a common spelling list that my data could safely rest upon, but that fell apart when my program encountered renamed cities like Cuddapah (Kadapa) and Calicut (Kozhikode). I had to resolve these anomalies one at a time; each city really has its own challenges. After days of cleaning data, I was finally ready to build my model.
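For the curious, the vowel-stripping idea was as simple as it sounds; here’s a sketch of the failed approach and exactly where it breaks:

#strip vowels so that variant spellings collapse onto a common key
normalize <- function(name) gsub("[aeiou]", "", tolower(name))

normalize("Trichy")    #"trchy"
normalize("Tiruchy")   #"trchy" - these two collapse together nicely
normalize("Cuddapah")  #"cddph"
normalize("Kadapa")    #"kdp" - renamed cities share almost no consonants, so no match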

Here’s the code for my model in case you’re interested.

fm.df <- read.csv("Analysis.csv")               #read file 

#create training dataset
fmtrain.df <- subset(fm.df, !is.na(LicenseFee))

#create test dataset
fmtest.df <- subset(fm.df, is.na(LicenseFee))   

#build regression models
fm.1 <- lm(LicenseFee ~ Category, data = fmtrain.df)
fm.2 <- lm(LicenseFee ~ Category + Dominos, data = fmtrain.df)
fm.3 <- lm(LicenseFee ~ Category + CCD, data = fmtrain.df)
fm.4 <- lm(LicenseFee ~ Category + Hero, data = fmtrain.df)
fm.5 <- lm(LicenseFee ~ Category + CCD + Dominos + Hero, data = fmtrain.df)

#I reran this script once per model, swapping fm.1 for fm.2 ... fm.5 below
minmax <- as.data.frame(predict(fm.1, fmtest.df, interval = 'confidence', level = 0.6)) #prepare limit for 60% confidence
df60 <- cbind(fmtest.df$City, minmax)
df60 <- subset(df60, select = c("fmtest.df$City", "upr"))

minmax <- as.data.frame(predict(fm.1, fmtest.df, interval = 'confidence', level = 0.8)) #prepare limit for 80% confidence
df80 <- cbind(fmtest.df$City, minmax)
df80 <- subset(df80, select = c("fmtest.df$City", "upr"))

minmax <- as.data.frame(predict(fm.1, fmtest.df, interval = 'confidence', level = 0.95)) #prepare limit for 95% confidence
df95 <- cbind(fmtest.df$City, minmax)
df95 <- subset(df95, select = c("fmtest.df$City", "upr"))

bizcon.fm5 <- cbind(df60, df80, df95) #collect the upper limits for all three confidence levels
write.csv(bizcon.fm5, "bizcon.fm5.csv") #save file

First, I split my data into two groups – a training set (the prices I had from the round 1 auctions) and a test set (the questions I had to answer using my model). There’s a convenient function in R, lm, which builds a linear regression model. I had four variables to build my model with – Domino’s, Cafe Coffee Day, Hero MotoCorp and the ‘Category’ of the city. Adding an extra variable always improved my model a little (I found this out by calculating the R² value for every model), but I couldn’t use all of them at once. Putting every variable in the mix would mean that unless a city had data for all four, the model would return a blank. I wouldn’t be able to predict for 159 of the 248 cities in my dataset.
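Incidentally, comparing the models is a one-liner once they’re built – summary() on an lm object exposes its R². A quick sketch using the objects from the script above:

#compare R-squared across the five models
models <- list(fm.1, fm.2, fm.3, fm.4, fm.5)
sapply(models, function(m) summary(m)$r.squared)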

So I built five models (you can see them in the code) and ran the program for all of them. The program’s job is to find the upper limit at each confidence level for every regression model. Here’s a good source you could check out if you’re not entirely sure what a regression model is.

Then I averaged out what each model gave me. So if a city had a ‘Category’, a Domino’s store and a Hero dealership, it would run through every model that relied only on those variables (fm.1, fm.2 and fm.4). I then summed the upper limits those models returned and divided the sum by 5 (at every confidence level) to place it on a common scale with the rest of the cities.
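In code, that averaging step looks something like this. It’s a hypothetical sketch rather than a lift from my script – predict() returns NA for a city that’s missing a variable a model needs, so summing with na.rm = TRUE and always dividing by 5 reproduces the scheme I just described:

#combine the 60% upper limits across all five models
upr.all <- sapply(list(fm.1, fm.2, fm.3, fm.4, fm.5), function(m) {
  predict(m, fmtest.df, interval = 'confidence', level = 0.6)[, "upr"]
})                                                #cities-by-models matrix
combined60 <- rowSums(upr.all, na.rm = TRUE) / 5  #sum what's available, always divide by 5
head(data.frame(City = fmtest.df$City, upr60 = combined60))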

This gave me just what I needed: a price each city should be bid at, based on how desperately a company wants that city’s frequency. That’s how I ended up with my figures for each confidence interval and my bidding guide – the FM auction cheat sheet.

Stay tuned for a super interesting model based on new data coming up this weekend.
