Linear regression is an attractive model because its representation is so simple. The representation is a linear equation that combines a specific set of input values (x), the solution to which is the predicted output for that set of input values (y). As such, both the input values (x) and the output value (y) are numeric. The linear equation assigns one scale factor, called a coefficient, to each input value or column; coefficients are commonly represented by the Greek letter beta (β). One additional coefficient is also added, giving the line an extra degree of freedom (e.g. moving up and down on a two-dimensional plot); it is often called the intercept or the bias coefficient. For example, in a simple regression problem (a single x and a single y), the form of the model would be:

                                         y = B0 + B1 × x (1.1)
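
To make this form concrete, here is a minimal sketch in Python of how a prediction is computed from equation (1.1). The coefficient values B0 and B1 are chosen purely for illustration; in practice they would be learned from data.

    # Simple linear regression prediction: y = B0 + B1 * x
    # Illustrative coefficient values, not learned from data.
    B0 = 0.5   # intercept (bias coefficient)
    B1 = 2.0   # scale factor for the single input x

    def predict(x):
        """Return the predicted output y for a single input value x."""
        return B0 + B1 * x

    print(predict(3.0))  # 0.5 + 2.0 * 3.0 = 6.5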

In higher dimensions, when we have more than one input (x), the line is called a plane or a hyperplane. The representation is therefore the form of the equation together with the specific values used for the coefficients (e.g. B0 and B1 in the example above). It is common to talk about the complexity of a regression model like linear regression; this refers to the number of coefficients used in the model.
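
As a sketch of the higher-dimensional case, the same idea extends to several inputs by summing one coefficient per input column plus the intercept. The coefficient values below are again illustrative only.

    # Multiple linear regression prediction: y = B0 + B1*x1 + B2*x2 + ... + Bn*xn
    # Illustrative coefficients; in practice these are learned from data.
    B0 = 0.5                          # intercept (bias coefficient)
    coefficients = [2.0, -1.0, 0.25]  # one scale factor per input column

    def predict(inputs):
        """Return the predicted output for a list of input values."""
        return B0 + sum(b * x for b, x in zip(coefficients, inputs))

    print(predict([1.0, 2.0, 4.0]))  # 0.5 + 2.0 - 2.0 + 1.0 = 1.5

Counting the entries of `coefficients` plus the intercept gives the number of coefficients, which is the measure of model complexity described above.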