Linear Regression – How To Do It Properly Pt.3 – The Process

Time To Summarise The Previous Two Posts

In the last two posts we talked about the maths and the theory behind linear regression.  We have covered the mathematics of fitting an OLS linear regression model and we have looked at the derivation of individual regressor coefficient estimates. We have discussed the Gauss-Markov Theorem, the conditions that are necessary for the theorem to hold and, more generally, the conditions that must be satisfied in order for us to be able to use regression effectively and place significant faith in coefficient estimates.  We have looked at sample size, hypothesized about the distribution of the theoretical error term \epsilon , discussed omitted explanatory variables, unnecessary explanatory variables, proxy variables, non-linear regressors and other factors that may influence our model’s reliability, unbiasedness, efficiency and overall goodness of fit.  Finally, we have looked at objective ways of measuring this goodness of fit.

This has been a long series of posts with a lot of maths and a lot of ifs and buts and today I would like to summarise all of it and attempt to come up with a simple step-by-step process that one can follow to get the most out of linear regression without getting burned by its many pitfalls and dangers.

How To Do Linear Regression Properly

So, now that we are equipped with all the facts from the previous two posts, let’s go through all the steps and caveats of linear regression:


Step 1. Ask If Linear Regression Is The Right Approach

Linear regression is only suitable for modeling linear relationships where the response variable is a number, ideally a continuous number. Use linear regression only when you are looking to model relations such as:

a one-unit increase in x_1 , with all other x_2,\dots, x_n held constant, causes an increase of \beta_1 units in y

If the response variable is binary (yes/no, survive/die, male/female), then something like logistic regression or decision trees will be more appropriate. If the response variable is a factor (i.e. something that might be coded as a number but does not have an inherent ordinal nature) then, again, something other than linear regression is probably better suited. For more detail on when exactly to choose which type of data modelling technique, see this link.

Now if we are still here, and have not been convinced that some other method such as K-means or random forest is a more suitable alternative, let’s proceed to the next step.

Step 2. Wrangle And Pre-Process The Data

Before running an OLS linear regression model fitting procedure on the raw data set, we would want to get rid of null records, to join a number of tables together, to sort records in a particular order, to convert text to numbers or vice versa, etc – the list is very long.

All of these tasks of cleansing, formatting, reshuffling and general pre-processing of data from its raw state into a “cooked” state, where it can actually be used by statistical analysis software in a meaningful way, are referred to as data wrangling.  Data wrangling is an often underestimated but always important and time-consuming part of any data analysis workflow; according to some research, it tends to take up as much as 80% of data scientists’ time.  We have a full post dedicated entirely to data wrangling, so I will not repeat all of that material here.  I will just say that whatever the data analysis problem at hand, the issue of data wrangling will be there and you will need to deal with it, so be warned.

Specifically, some of the data wrangling tasks often associated with linear regression are as follows:

  • Import the dataset into a data structure that is amenable to linear regression. R and Python have “dataframes”; other languages may have something else. Usually this step is straightforward and done with one or two lines of code, but occasionally there are issues where data does not import correctly.  For example, when dealing with non-English data sources, such as econometric data from China or France or Russia, issues with character encoding, date formats or number-to-text conversions can get really messy.  There is no one simple way to safeguard against this; however, if you are using a popular tool like R or Python or Stata, then someone else will most likely have come across a similar issue already, so just Google and your prayers will be answered.  The R code for importing into an R data frame looks like this:
    # import data into a dataframe
    df <- data.frame(mydata)
  • Get rid of all the data records that have irrelevant data, for example records that fall outside of the date range or geo location that the analysis is concerned with.
  • Get rid of records that have NAs.  Incomplete records usually mess up calculations of averages, sums or joins in unpredictable ways. Luckily, in most languages, this can be done through one command, for example in R:
    # get rid of NA records
    df <- na.omit(df)
    
  • Convert factors to numeric and vice versa where necessary
  • Code dummy variables and factors. Coding dummy variables is an interesting topic in itself; there are several ways to do it, each facilitating a different subsequent interpretation of the coefficients. Someday I will write a separate blog post on this topic; for now, just keep in mind that you will need to put time and thought into it. If you are using R, it will be clever enough to do the coding for you once it detects that a field is a factor, but in other tools you may need to do it manually. If this is the case, then look on the bright side of life: explicit manual coding of dummy variables takes extra time, but it really makes you look at your data and think about it, often a worthwhile investment of a few minutes anyway. (A short sketch follows this list.)
  • Scale data.  Are your sizes expressed in millimetres and your time spans in years?  If so, then maybe this is still OK, provided that you are studying rock formations or tectonic plate movement.  In most other cases, however, you may wish to restate some of the variables in units that are more appropriate for the context of the problem you are studying and more manageable for the movements and changes that you are trying to detect.
  • Examine outliers and leverage points and remove where appropriate.
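As a rough illustration of the last few items, here is a minimal R sketch. The data frame df and the column names (region, size_mm) are hypothetical, chosen purely for illustration:

# treat a text or coded column as a factor so that lm() will create dummy variables for it automatically
df$region <- as.factor(df$region)

# rescale a variable into units that suit the problem, e.g. millimetres to metres
df$size_m <- df$size_mm / 1000

# a quick first look at distributions, outliers and potential leverage points
summary(df)
boxplot(df$y)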

Step 3. Split The Data Into Two Sets – Training And Testing

In the second half of the previous post we talked about model evaluation and model comparison.  We also looked at model evaluation and model comparison in more detail in another post that was dedicated entirely to that topic.

One of the conclusions that came out of those two discussions was that a good way to evaluate and compare different linear regression models with each other, and with other non-linear-regression models, is to look at how each model performs on a new data set.  There are a few ways to do this, and we picked the leave-one-out method as the best approach, mostly because it is easy to understand and implement and is consistent with another popular statistical method for model evaluation, the AIC.

However, an even simpler approach is to just split the data into two subsets from the very start: “training” and “testing”.  Note that this is a different approach from leave-one-out or k-fold cross validation: here you do the training once on one subset of the data and the testing once on the other, disjoint subset, as opposed to doing the training and testing n times on overlapping subsets as in leave-one-out.

There are pros and cons to each approach and these are discussed in our post on cross validation; for now suffice it to say that once you know all these pros and cons, you may in some situations make an informed decision to simply split your data set once into “training” and “testing” subsets.  And in that case, this would be the third step in our step-by-step approach to doing linear regression properly.
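As a minimal sketch of such a split in R, assuming the data frame df from the earlier snippets (the 70/30 proportion is arbitrary):

# randomly assign roughly 70% of rows to training and the remaining 30% to testing
set.seed(123)   # for reproducibility
train_idx <- sample(nrow(df), size = floor(0.7 * nrow(df)))
train <- df[train_idx, ]
test  <- df[-train_idx, ]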

Recap So Far

OK so by this stage we have done the following.

We have analysed the problem and come to the conclusion that linear regression is an appropriate method of analysis and modelling here. We have subsequently wrangled our raw data set to make it amenable to fitting and testing that model: we have gotten rid of junk rows with NAs, coded dummy and factor variables where necessary, taken power, log or cross-product transformations where desired, scaled variables where appropriate, and reviewed outliers and leverage points.   Having “massaged” the dataset into a format suitable for running a linear regression fitting procedure, we have then split the data set into two subsets, “training” and “testing”, and we have done this in a smart way that will ensure the split enables us to fit and test our model objectively.

And only now, after all of this due diligence has been performed are we in a position to actually start fitting linear regression. So without further ado…

Step 4.  Fit The Linear Regression Model

This part here is not just a matter of doing the 2-3 lines of code:

# linear regression of y on x1, x2, ..., xN as per the data in the data frame df
df <- data.frame(mydata)
mymodel <- lm(y ~ x1 + x2 + x3 + ... + xN, data = df)

but instead a place to experiment with different model specifications to find the one that best fits the training data and best predicts the testing data.  This is where you would spend time thinking about which variables from your data to include in the model as explanatory variables, which to leave out, which to include as proxies, and which non-linear transformations and cross products of explanatory variables to take into account.  This is where the real crunch lies, the part that separates a good approach to linear regression from a sloppy one.

We have looked at the maths behind model specification in one of our previous posts.  To summarise, the following are some of the pitfalls to keep in mind:

Excluding explanatory variables that should have been included

Omitting an explanatory variable x_1 that really does have an effect on the response y means that the remaining explanatory variables that are correlated with x_1 will act as proxies for x_1 in the relationship, and thus the coefficient estimates obtained for x_2, \dots, x_n from running the regression will in fact be biased: they will contain the true effect of x_2, \dots, x_n along with the partial effect exerted by x_1 .

Including explanatory variables that should have been excluded

Including unnecessary explanatory variables will not bias the coefficient estimators, but the more explanatory variables there are, the higher the chance of correlation between all of them and thus the higher the degree of multicollinearity.  When multicollinearity is present in the model, the coefficient estimates become less efficient, i.e. variances of coefficient estimates (referred to as standard errors) become large.

In the most extreme case, when there is an exact linear relationship between two explanatory variables, the linear algebra for computing regression coefficients fails (the matrix X'X becomes non-invertible) and the process of fitting the linear regression model breaks down altogether.

Generally speaking, having estimates that are inefficient but unbiased is better than having biased estimates, so the rule of thumb is: if in doubt, include the variable.

Proxy variables

Using proxy variables can be a good thing when you don’t have the data for the actual root cause variable, but it can also lead to a dangerous confusion between causation and correlation.  It is especially important to keep this in mind when that distinction between causation and correlation is crucial to solving the business problem, for example, when testing the effects of a new drug or when analysing a new government policy, and it is less important when simply looking at prediction of future values.

Nonlinear terms

As a special case of the need to include all explanatory variables that truly belong in the model, there is the need to look at non-linear terms of the explanatory variables that are already in the model.  For any given explanatory variable x_i in the model, do we also need to look at things like {x_i}^2, {x_i}^3, \frac{1}{x_i}, {x_i}^{1/2}, \log x_i , etc., as well as various cross products x_i x_j between distinct explanatory variables x_i , x_j ?

At the same time remember that going overboard and including too many of these combinations of powers, cross products, logs and fractions is also not a good idea as a lot of them will be highly correlated making multicollinearity significant.  And while multicollinearity, or more precisely, the resulting inefficiency in coefficient estimates, is not as bad as bias, it is still undesirable.
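In R such terms can be written straight into the model formula. A minimal sketch, assuming a data frame df with regressors x1 and x2 (and x2 strictly positive so that the log is defined):

# quadratic term for x1, log of x2 and the x1-by-x2 cross product
mymodel2 <- lm(y ~ x1 + I(x1^2) + log(x2) + x1:x2, data = df)
summary(mymodel2)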

Solution:  Specifying The Linear Regression Model With The Right Explanatory Variables

There is no set algorithm that we can follow to guarantee that we have included all the right regressors and excluded all the undesired ones. As mentioned in the previous posts, this is an area where data science is in fact a bit of an art rather than a pure science.  There is, however, a rule-of-thumb sequence of steps that I personally use and that I shared in the previous post; it goes as follows:

1. Try common sense

You must know or at least expect something about the data and the relationships that you are working with or trying to establish. Use that as a starting point. What are the variables that you expect to have an influence on the response? Based on common sense, domain knowledge and your understanding of the problem, would you expect the response to be related to an explanatory variable linearly, inversely, via the logarithm, or via the square or some other power? Which variables are likely to be strongly correlated and/or can perhaps act as proxies for each other?  Spending time going over these questions is a worthwhile investment that will help you prioritise the different combinations of explanatory variables to consider.

2. Draw up plots

Plots will confirm (or refute) your conjectures from (1). Draw up plots of how the response variable y reacts to various individual candidate explanatory variables x_i , or to their non-linear transformations or cross products.  This will point out the most obvious choices.  Also draw up plots of explanatory variables against other explanatory variables to detect where correlations, and thus the degree of multicollinearity, will be high.  All of this plotting should go together with the theoretical and common-sense thinking described in the previous point.
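A couple of R one-liners go a long way here; a minimal sketch, again assuming the data frame df from earlier:

# response against one candidate explanatory variable
plot(df$x1, df$y)

# all pairwise scatterplots, useful for spotting correlated regressors
pairs(df)

# correlation matrix of the numeric columns as a complement to the plots
cor(df[sapply(df, is.numeric)])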

3. Use Specialised Statistical Algorithms

As mentioned in the previous post, there are a number of algorithms that help automate the specification of a regression model.

There is no clear consensus on which algorithm or which combination of algorithms is the best, but I personally prefer a combination of stepwise regression and Ramsey’s RESET test. Together, stepwise regression and the Ramsey RESET test do a fairly comprehensive job of checking all candidate explanatory variables and their non-linear combinations, guiding us towards the best set of explanatory variables.  Stepwise regression will comb through the individual explanatory variables x_i , while the RESET test will explore a wide range of non-linear functions f(x_i) of these.  Both algorithms were described in the previous post, but in brief they work as follows:

Stepwise regression

This process will essentially iterate through various sets of explanatory variables, fitting the same type of model based on each set, one by one and eventually allowing you to pick the set of explanatory variables that produces the best fitted model.

In R, stepwise regression can be done in either the backward or the forward direction:

# start from the full model and eliminate explanatory variables step by step
full.model <- lm(y ~ x1 + x2 + x3 + x4, data = mydata)
reduced.model <- step(full.model, direction = "backward")

or

# start from the intercept-only model and add explanatory variables step by step
min.model <- lm(y ~ 1, data = mydata)
fwd.model <- step(min.model, direction = "forward", scope = ~ x1 + x2 + x3 + x4)

Other programming languages like Python or Octave will have similar procedures built in.

As the algorithm iterates through combinations of explanatory variables it may, depending on the implementation, automatically select the best combination, or it may simply output a model evaluation statistic (for example AIC) associated with each combination, leaving it up to the user to interpret the output and select the best combination. In the latter case, the user needs to be able to interpret model evaluation metrics in a meaningful way – this topic has been covered in detail in our other post here.

Ramsey RESET Test

Alternatively, there is the Ramsey RESET test, which checks whether there are any non-linear (i.e. power) combinations of explanatory variables that may be significant but have been left out of the model.  It does so by testing whether non-linear combinations of the fitted values \hat{y} help explain the response variable y .
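In R the test is available as resettest() in the lmtest package. A minimal sketch, assuming the fitted model mymodel from earlier:

# test whether squares and cubes of the fitted values add explanatory power
library(lmtest)
resettest(mymodel, power = 2:3, type = "fitted")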

Step 5.  Check Back To Keep An Eye On Multicollinearity

As you think about your data relationships, stare at scatterplots and run stepwise regression, RESET and other variable selection procedures, you will find your model getting more complex with more and more explanatory variables and their nonlinear transformations and combinations thrown into the mix.  This may be making your model look better in terms of fitting the training data (for example R^2 will always increase whenever you introduce a new explanatory variable, no matter how insignificant), but it is important to remember the other side of the coin – the more explanatory variables you have, the higher the degree of multicollinearity.  And as we have already discussed, while multicollinearity is not the end of the world, it is certainly a bad thing and should be avoided where possible.

We discussed multicollinearity in detail in another post – we looked at the maths behind it, ways to detect and measure it and ways to minimise it while keeping the integrity of the model intact.  Here, I will just summarise by saying that there are two methods (in fact more than two, but we will focus on two) for detecting multicollinearity:

  • eyeballing the output of regression algorithms – these usually show the coefficient estimates along with standard errors, so you can get a sense of how large each standard error is with respect to the corresponding coefficient estimate and how much the standard errors increase from model to model,
  • applying the variance inflation factor (VIF) metric to each of the coefficient estimates (see the sketch after these lists)

and that there are a number of ways of eliminating or at least minimising multicollinearity, including:

  • Removing explanatory variables, especially ones that are strongly correlated with other explanatory variables in the model.  The remaining variables, being strongly correlated with the excluded variable, can still act as a reasonable proxy for it in its absence.
  • Combining correlated explanatory variables into one “index” where that makes sense.
  • Changing other parameters that decrease coefficient estimates’ variances, so that the gains in efficiency will offset the losses in efficiency that came about due to multicollinearity:
    • increasing n , the sample size, which shrinks the (X'X)^{-1} term in the coefficient variance \sigma^2(X'X)^{-1} (usually easier said than done :)).
    • introducing additional explanatory variables, which may sometimes decrease multicollinearity.  This will happen if the new explanatory variable is significant and really should be in the model, while at the same time not being highly correlated with any explanatory variables already present in the model.  In such a case, the additional explanatory variable decreases the variance of the error term \epsilon considerably, without contributing much to multicollinearity.
  • Using regularization methods where extra explanatory variables are penalised, for example:
    • ridge regression
    • LASSO
  • Reorganising the set of explanatory variables altogether by using principal component analysis (PCA).
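As a rough sketch of the detection and regularisation steps in R, assuming the fitted model mymodel and data frame df from earlier, and the car and glmnet packages:

# variance inflation factors; values well above 5-10 flag problematic regressors
library(car)
vif(mymodel)

# ridge (alpha = 0) and LASSO (alpha = 1) regression via glmnet
library(glmnet)
X <- model.matrix(y ~ x1 + x2 + x3 + x4, data = df)[, -1]   # regressor matrix without the intercept column
ridge.fit <- cv.glmnet(X, df$y, alpha = 0)
lasso.fit <- cv.glmnet(X, df$y, alpha = 1)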

Recap So Far (Pt 2)

So we have run our regression fitting procedure; it has done something and has produced some sort of an output. In fact, we have probably run a whole lot of linear regression fitting procedures, trying to fit various models based on many combinations of explanatory variables, their non-linear transformations and their cross products, all with a view to finding the best model.

Usually the output from running a linear regression fitting procedure is a table with coefficient estimates, their standard errors and p-values, followed by overall model evaluation stats like R^2 or AIC. However, before we go ahead and start quoting all these numbers, making statements like “for every unit change in x_n there is a \beta_n change in y ” or “the coefficient of x_m is significant at the 0.01 level”, we need to validate that a number of basic statistical assumptions hold. Firstly, we need to check whether the Gauss-Markov conditions are satisfied and, secondly, we need to check whether the suspected distribution of the error terms \epsilon or the sample size n are “well behaved”.  If not, then all the confidence interval or p-value information that may have been automatically output by R or Stata or some other statistical package does not actually mean anything.

Step 6.  Validate The Gauss-Markov Conditions

Recall that the Gauss-Markov Theorem is as follows:

Suppose a real world phenomenon follows the model:

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \epsilon

and suppose the following conditions hold:

  • Linearity:  E(\epsilon) = 0
  • Homoscedasticity:  for any two observations i \text{ and } j , the standard deviation of the error is the same, i.e. \sigma_i = \sigma_j = \sigma , for some fixed \sigma
  • No Autocorrelation:  for any two distinct observations i \text{ and } j , i.e. i \neq j , the two errors \epsilon_i and \epsilon_j are independent
  • Non-stochastic Regressors:  for every observation, all explanatory variables x_i are non-stochastic.

Then the coefficient estimates \hat{\beta} obtained through the OLS method described in the previous posts:

  • are in fact unbiased estimates of the actual real world coefficients, i.e. E(\hat{\beta}) = \beta
  • have variance (\sigma_{\hat{\beta}})^2 = \sigma^2(X'X)^{-1} and are in fact the most efficient of all possible linear unbiased coefficient estimates

It is important to remember that in order for us to be sure that this BLUE property holds, the four Gauss-Markov conditions above must hold; otherwise, the coefficient estimates could be biased, or they could be unbiased but nonetheless not as efficient as some other unbiased estimates that could have been obtained by some other method.  Thus, it is only when the four Gauss-Markov conditions are satisfied that we can confidently make statements like “For every unit increase in x_i , there is an estimated \hat{\beta}_i increase in y ”, because we now know that \hat{\beta} does in the long run average out to \beta (unbiased) and that it approximates it pretty accurately, i.e. with a minimal degree of variance (efficient).

Note that while all of the Gauss-Markov conditions are related to the error terms \epsilon , we do not actually know the values of the error terms \epsilon as they are a part of our theoretical model and are never actually observed in practice.  We can only observe and analyse the residuals e = y - \hat{y} , where y is the actual observed value as per real life and \hat{y} is the fitted value as per our model, i.e. both y and \hat{y} are known.  We therefore use these residuals e as proxies for the error terms \epsilon .

Validating the Gauss-Markov conditions is usually done by means of residual plots or specialised statistical tests on the residuals; in both cases, the residuals are used as proxies for the theoretical error terms.

1. Validating Linearity

Testing

Plot the residuals (rather than the response) against each predictor. A non-random pattern suggests that a simple linear model is not appropriate: perhaps instead of being linearly related to some of the explanatory variables, your response variable is actually related to their squares, cubes or logs.
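For example, in R (assuming the fitted model mymodel and data frame df from earlier):

# residuals against one of the predictors; look for curvature or any systematic pattern
plot(df$x1, resid(mymodel))
abline(h = 0, lty = 2)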

Remedy

If your model is not linear then this is a fundamental flaw: we need to stop and re-specify the model (include non-linear transformations of explanatory variables, cross products, etc.) or abandon linear regression altogether.

2. Validating Homoscedasticity

Testing

Plot the residuals against the fitted values \hat{y} . If the spread of the residuals grows or shrinks as the fitted values change (a funnel or fan shape), that suggests heteroscedasticity. A formal alternative is the Breusch-Pagan test.
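A minimal sketch of both checks in R, assuming the fitted model mymodel and the lmtest package:

# residuals vs fitted values; a funnel shape suggests heteroscedasticity
plot(fitted(mymodel), resid(mymodel))
abline(h = 0, lty = 2)

# Breusch-Pagan test; a small p-value points towards heteroscedasticity
library(lmtest)
bptest(mymodel)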

Remedy

If it turns out that we have heteroscedasticity, then:

  • coefficient estimators are still linear and unbiased but no longer efficient, i.e. there could be other unbiased estimators with lower variance
  • the estimator s^2 of the error variance \sigma^2 is biased, and thus the standard errors are biased.

As a result of these two consequences of heteroscedasticity, we can’t make statements about confidence intervals or test hypotheses meaningfully.

Opinions differ on how bad an impact heteroscedasticity has on your model’s validity and interpretation. Some say that since your estimates are unbiased and merely inefficient, the problem is not huge and there is no need to over-stress about it.

There is something I’d like to note here though. Recall that we didn’t actually plot the actual error terms – we don’t know them. We plotted residuals, which we use as approximate representations of the error terms. Therefore, if our residuals show a non-random pattern, it could mean that the error terms themselves had this pattern, i.e. our model indeed suffers from heteroscedasticity, and in this case yes, maybe this isn’t such a big deal: your coefficient estimates are still unbiased and consistent, albeit inefficient. But it could also mean that the odd patterns in the residuals are a symptom of a deeper problem with the fitted model, i.e. your model is mis-specified, for example some crucial variables are missing or some relationships are non-linear. In this case the trouble is much more serious: the problem is not ill-behaved error terms, the problem is a mis-specified model, and we really need to go back and re-specify it.
In order to remedy heteroscedasticity, consider taking the following measures:

  • Redesign your model. As mentioned, apparent heteroscedasticity in the residual plot may not be heteroscedasticity of the error terms at all, but a symptom of the much more serious problem of a misspecified model; if there is such a risk then we need to go to the root cause and redesign the model, rather than apply a band-aid solution to the error terms.
  • Robust regression with heteroscedasticity-consistent (HC) standard errors (see the sketch below).
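A minimal sketch of the HC route in R, assuming the fitted model mymodel and the sandwich and lmtest packages:

library(sandwich)
library(lmtest)
# recompute the coefficient table using heteroscedasticity-consistent (HC) standard errors
coeftest(mymodel, vcov. = vcovHC(mymodel, type = "HC1"))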

3. Validating Absence Of Autocorrelation

Testing

  • Autocorrelation usually occurs in models where there is an element of time or space dependency, so it is important to take a hard look at whether there are time or space relationships and to plot residuals against time or distance.  A non-random pattern suggests a lack of independence.
  • Breusch-Godfrey test: https://www.youtube.com/watch?v=JN6Sblxz7v0
  • Durbin-Watson test (see the sketch below)
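Both tests are available in the lmtest package; a minimal sketch, assuming the fitted model mymodel:

library(lmtest)
# Durbin-Watson test for first-order autocorrelation
dwtest(mymodel)
# Breusch-Godfrey test, here checking up to lag 2
bgtest(mymodel, order = 2)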

Remedy

If it turns out that we have autocorrelation, then the consequences are the same as were in the case of heteroscedasticity:

  • coefficient estimators are still linear and unbiased but no longer efficient, i.e. there could be other unbiased estimators with lower variance,
  • the estimator s^2 of the error variance \sigma^2 is biased, and thus the standard errors are biased.

As a result of these two consequences of autocorrelation, we cannot meaningfully make statements about confidence intervals or p-values, or test hypotheses.

In order to remedy autocorrelation we can:

  • consider other approaches that may be more suitable to working with time or space dependent data, for example ARIMA time series,
  • try robust regression with heteroscedasticity- and autocorrelation-consistent (HAC) standard errors (see the sketch below).
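The HAC route mirrors the HC sketch above; again assuming the sandwich and lmtest packages and the fitted model mymodel:

library(sandwich)
library(lmtest)
# Newey-West heteroscedasticity- and autocorrelation-consistent (HAC) standard errors
coeftest(mymodel, vcov. = NeweyWest(mymodel))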

4. Validating Non-Stochasticity of Regressors

Testing

To detect endogeneity due to stochastic explanatory variables, for example explanatory variables that are observed through measurement and therefore suffer from measurement error, we have two options:

  • Common sense – look over the explanatory variables in the dataset, are any of them likely to have been measured or recorded imprecisely?
  • Hausman test of regressor endogeneity.

Remedy

If your model does appear to suffer from endogeneity due to non-deterministic explanatory variables, then the remedy is to use

  • instrumental variables and two-stage least squares (2SLS) regression.

Instrumental variables and two-stage least squares are a field in themselves and I will write a separate post on them at some point; for now I will just have to refer you to the other material available out there on the web on this topic.

Step 7.  Validate Normality or t-distribution

When running regression, statistical packages like R or Stata will automatically include, as part of the output for each coefficient estimate \hat{\beta} , its standard error and the corresponding p-value.

These p-values are based on the assumption that the coefficient estimates \hat{\beta} follow a t-distribution (with degrees of freedom equal to the sample size minus the number of estimated coefficients), centered around the actual parameter \beta and with variance as per the standard errors.  However, this assumption is only justified if the error terms \epsilon in our proposed model are normally distributed or if the sample size n is large enough.  We have looked at the maths behind this in our previous post.

Thus, in order to be able to meaningfully talk about p-values or confidence intervals for coefficient estimates \hat{\beta} , we need to test for normality of the error terms \epsilon .  And, as mentioned earlier, since we do not know the values of the actual error terms \epsilon , we can only analyse the residuals e and treat these as proxies for \epsilon .

Testing

Draw a histogram and a normal Q-Q plot of the residuals: points that deviate systematically from the Q-Q line suggest that the error terms are not normally distributed. A formal option is the Shapiro-Wilk test.
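A minimal sketch in R, assuming the fitted model mymodel (note that shapiro.test() only accepts samples of up to about 5000 observations):

# normal Q-Q plot of the residuals; the points should lie close to the line
qqnorm(resid(mymodel))
qqline(resid(mymodel))

# Shapiro-Wilk test; a small p-value suggests non-normal residuals
shapiro.test(resid(mymodel))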

Remedy

If it appears that the error terms are not normally distributed, then we can still meaningfully talk about p-values and confidence intervals, provided that the number of observations n is sufficiently large.  However, if the error terms appear not to be normal and n is small, then we are stuck: we cannot use p-values or confidence intervals meaningfully.

Step 8.  Model Evaluation and Comparison 

The two most common metrics for the goodness of fit of linear regression models are R^2 and adjusted R^2 , and they usually come as part of the standard output from linear regression fitting procedures in languages like R or Stata.  However, while these two metrics are given to the user automatically and are easy to interpret, they do not actually provide an objective way to compare a linear regression model with other non-linear-regression models, or indeed sometimes even with other linear regression models.

I have already talked about model evaluation and comparison in the previous post about model specification for linear regression, as well as in another post that was dedicated entirely to model evaluation.  I will just mention here that the two ways of evaluating and comparing models that emerged as winners from that long discussion are AIC (or BIC if you are so inclined philosophically) and cross-validation.
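For completeness, comparing two candidate models on AIC in R is a one-liner (assuming fitted models mymodel and mymodel2, as in the earlier sketches):

# the model with the lower AIC offers the better trade-off between fit and complexity
AIC(mymodel, mymodel2)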
