
I am using the Boston dataset as my input, trying to build a model to predict MEDV (median value of owner-occupied homes, in $1000s) from RM (average number of rooms per dwelling), and I am seeing a difference in the theta values between gradient descent and the linear model in R.

I have cobbled together the following code from the Digitheads blog, not verbatim, as you can see.

My code is as follows:

# the Boston dataset ships with the MASS package, not datasets
library(MASS)
data("Boston")

x <- Boston$rm 
y <- Boston$medv 

# fit a linear model 
res <- lm(y ~ x) 
print(res) 

Call: 
lm(formula = y ~ x) 

Coefficients: 
(Intercept)   x 
    -34.671  9.102 

# plot the data and the model 
plot(x,y, col=rgb(0.2,0.4,0.6,0.4), main='Linear regression') 
abline(res, col='blue') 


# squared error cost function 
cost <- function(X, y, theta) { 
    sum((X %*% theta - y)^2)/(2*length(y)) 
} 

# learning rate and iteration limit 
alpha <- 0.01 
num_iters <- 1000 

# keep history 
cost_history <- double(num_iters) 
theta_history <- vector("list", num_iters) # pre-allocate a list of length num_iters

# initialize coefficients 
theta <- matrix(c(0,0), nrow=2) 

# add a column of 1's for the intercept coefficient 
X <- cbind(1, matrix(x)) 

# gradient descent 
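# each iteration steps theta against the gradient of cost() above:
# grad J(theta) = (1/m) * t(X) %*% (X %*% theta - y), computed as delta below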
for (i in 1:num_iters) { 
    error <- (X %*% theta - y) 
    delta <- t(X) %*% error/length(y) 
    theta <- theta - alpha * delta 
    cost_history[i] <- cost(X, y, theta) 
    theta_history[[i]] <- theta 
} 

print(theta) 

          [,1]
[1,] -3.431269
[2,]  4.191125

As per the Digitheads blog, his theta values from lm (the linear model) and from gradient descent match, whereas mine do not. Shouldn't these figures match? Also, as you can see from the plot of the various values of theta, my final y-intercept does not correspond to the print(theta) value a few lines up.

Can anyone suggest where I have gone wrong?

[Plot: Linear Regression Gradient Descent]
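
For reference, a minimal sketch of how a plot like this can be drawn (it assumes the x, y, res, num_iters, and theta_history objects created by the code above):

# redraw the data, then overlay the path gradient descent took:
# one faint red line per sampled intermediate theta, plus the lm fit in blue
plot(x, y, col=rgb(0.2,0.4,0.6,0.4), main='Linear regression by gradient descent')
for (i in seq(1, num_iters, by=100)) {
    abline(coef=as.vector(theta_history[[i]]), col=rgb(0.8,0,0,0.3))
}
abline(res, col='blue')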


I think it just takes a while to converge. If I re-run your model but increase the number of iterations to 50k or 100k, I get the same results as the OLS estimates. – gfgm


@GabrielFGeislerMesevage yes, by increasing the number of iterations I get the same GD values as OLS. Please post your comment as an answer and I will happily accept it as the correct one. Many thanks. – TheGoat


谢谢。添加评论作为答案。 – gfgm

Answer


Gradient descent takes a while to converge. Increasing the number of iterations will let the model converge to the OLS values. For example:

# learning rate and iteration limit 
alpha <- 0.01 
num_iters <- 100000 # Here I increase the number of iterations in your code to 100k. 
# The gd algorithm now takes a minute or so to run on my admittedly 
# middle-of-the-line laptop. 

# keep history 
cost_history <- double(num_iters) 
theta_history <- vector("list", num_iters) # pre-allocate a list of length num_iters

# initialize coefficients 
theta <- matrix(c(0,0), nrow=2) 

# add a column of 1's for the intercept coefficient 
X <- cbind(1, matrix(x)) 

# gradient descent (now takes a little longer!) 
for (i in 1:num_iters) { 
    error <- (X %*% theta - y) 
    delta <- (t(X) %*% error)/length(y) 
    theta <- theta - alpha * delta 
    cost_history[i] <- cost(X, y, theta) 
    theta_history[[i]] <- theta 
} 

print(theta) 
    [,1] 
[1,] -34.670410 
[2,] 9.102076
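
As a quick sanity check (assuming the res fit and the x, y vectors from the question are still in scope), the two sets of coefficients can be compared side by side, and the cost history shows why the shorter run had not yet converged:

# compare gradient descent against the closed-form OLS coefficients
cbind(gd=as.vector(theta), ols=coef(lm(y ~ x)))

# the cost is still falling well past iteration 1,000, which is why
# 1,000 iterations were not enough for theta to settle
plot(cost_history, type='l', log='x', xlab='iteration', ylab='cost')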