After the introduction to Linear Regression in part 2, let's jump into Logistic Regression along with some basic concepts.
Maximum likelihood is used to find the best-fitting line for logistic regression. The idea is the same as in linear regression: we keep rotating the candidate line until we find the one with the maximum likelihood. First, we project each data point onto the candidate line, whose y-axis is in log(odds) units. Then we transform the log(odds) back into probabilities and calculate the log-likelihood. We keep doing this until we have the best-fitting line, i.e. the one with the maximum likelihood.
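The likelihood step above can be sketched in a few lines of Python; the toy data and the two candidate lines below are made up for illustration, not taken from the post:

```python
import numpy as np

# Toy data, made up for illustration: weight (predictor) and obesity label (1 = obese).
weight = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
obese = np.array([0, 0, 1, 1, 1])

def log_likelihood(intercept, slope):
    """Project each point onto the candidate line (in log(odds) units),
    convert back to a probability with the sigmoid, and sum the log-likelihood."""
    log_odds = intercept + slope * weight
    p = 1.0 / (1.0 + np.exp(-log_odds))  # log(odds) -> probability
    return np.sum(obese * np.log(p) + (1 - obese) * np.log(1 - p))

# "Rotating the line": the second candidate has a higher log-likelihood,
# so the fitting procedure would prefer it.
print(log_likelihood(-3.0, 1.0))
print(log_likelihood(-3.5, 1.8))
```

A real fit repeats this comparison systematically (e.g. via iteratively reweighted least squares) until the log-likelihood stops improving.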
The coefficients below are what you get when you do logistic regression with a continuous predictor.
We have this function: log(odds of obesity) = -3.476 + 1.825 × weight
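To use this line for prediction, plug in a weight and convert the resulting log(odds) back into a probability with the sigmoid; the example weight of 3.0 below is made up for illustration:

```python
import math

def prob_obese(weight):
    """Predicted probability of obesity from the fitted line
    log(odds of obesity) = -3.476 + 1.825 * weight."""
    log_odds = -3.476 + 1.825 * weight
    return 1.0 / (1.0 + math.exp(-log_odds))  # sigmoid: log(odds) -> probability

print(round(prob_obese(3.0), 3))  # a weight of 3.0 gives roughly an 88% probability
```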
- “Estimate” for (Intercept) is the y-axis intercept, i.e. the value when weight = 0: when weight = 0, the log(odds of obesity) = -3.476. “Estimate” for weight is the slope.
- “Std. Error”: the standard error of each estimated coefficient.
- “Z-value”: the estimated coefficient divided by its standard error. It’s the number of standard deviations the estimate is away from 0 on a standard normal curve (a Wald test).
- “p-value”: if it is < 0.05, we reject the null hypothesis that the coefficient is 0, meaning the coefficient is statistically significant.
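The z-value and p-value columns can be reproduced directly from an estimate and its standard error; the standard error of 1.08 below is a made-up placeholder, not a value from this post:

```python
import math

estimate = -3.476  # intercept estimate from the fitted line above
std_err = 1.08     # hypothetical standard error, for illustration only

z = estimate / std_err                # Wald z-statistic: estimate / std. error
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the standard normal

print(z, p)  # here p < 0.05, so the intercept would be statistically significant
```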
The coefficients below are what you get when you do logistic regression with discrete data.
We have this function: log(odds) = -1.5 × B1 + 2.35 × B2, where B1 and B2 are 0/1 indicator (dummy) variables marking which group a data point belongs to.
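With 0/1 indicator variables, each coefficient is simply the log(odds) for its group, and the sigmoid turns it into that group’s probability. A minimal sketch, assuming B1 and B2 are mutually exclusive group indicators (an interpretation, not stated explicitly in the source):

```python
import math

def log_odds(b1, b2):
    """log(odds) = -1.5 * B1 + 2.35 * B2, with B1/B2 as 0/1 group indicators."""
    return -1.5 * b1 + 2.35 * b2

def prob(b1, b2):
    return 1.0 / (1.0 + math.exp(-log_odds(b1, b2)))  # sigmoid

# A point in group 1 (B1=1, B2=0) vs a point in group 2 (B1=0, B2=1):
print(prob(1, 0))  # low probability: its group's log(odds) is -1.5
print(prob(0, 1))  # high probability: its group's log(odds) is 2.35
```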