I'm trying to implement ridge regression in Python, using stochastic gradient descent as the solver. My SGD code is as follows:
# Requires: import numpy as np, import pandas as pd, from random import shuffle
def fit(self, X, Y):
    # Convert to DataFrame in case X is a NumPy matrix
    X = pd.DataFrame(X)
    # Prepend a column of 1s to the data for the intercept
    X.insert(0, 'intercept', np.ones(X.shape[0]))
    # Dimensions of the training set
    m, d = X.shape
    # Initialize weights to small random values
    beta = self.initializeRandomWeights(d)
    epochs = 0
    while epochs < self.nb_epochs:
        print("## Epoch: " + str(epochs))
        # Visit the training examples in a random order each epoch
        # (range() is not shuffleable in Python 3, so materialize a list)
        indices = list(range(m))
        shuffle(indices)
        for i in indices:
            beta_prev = beta
            xi = X.iloc[i]
            # Residual of the ith training example: beta . xi - y_i
            errori = sum(beta * xi) - Y[i]
            # Ridge gradient for one example: residual * xi + lambda * beta
            gradient_vector = xi * errori + self.l * beta_prev
            beta = beta_prev - self.alpha * gradient_vector
        epochs += 1
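For reference, here is a self-contained sketch of the same update rule outside the class, on NumPy arrays instead of a DataFrame. The names `lam`, `alpha`, and `nb_epochs` are stand-ins for the class attributes `self.l`, `self.alpha`, and `self.nb_epochs`, and the synthetic data is my own; this is a sketch of the technique, not the original class:

```python
import numpy as np

def ridge_sgd_fit(X, y, alpha=0.01, lam=0.01, nb_epochs=200, seed=0):
    """SGD for ridge regression: one (xi, yi) gradient step at a time."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    m, d = X.shape
    beta = rng.normal(scale=0.01, size=d)      # small random init
    for _ in range(nb_epochs):
        for i in rng.permutation(m):           # shuffled pass over the data
            errori = X[i] @ beta - y[i]        # residual of example i
            beta = beta - alpha * (errori * X[i] + lam * beta)
    return beta

# Standardized features: converges with a moderate learning rate
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                  # already zero-mean, unit-scale
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.1, size=300)
beta = ridge_sgd_fit(X, y)
print(beta)  # roughly [1, 2, -3], shrunk slightly toward 0 by the penalty
```

On features that are already near unit scale, a step size of 0.01 is stable and the weights settle close to the generating coefficients.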
I'm testing this on data that is not standardized, and my implementation always ends with all of the weights at infinity, even when I initialize the weight vector to small values. Only when I set the learning rate alpha to a very small value (~1e-8) does the algorithm finish with valid values in the weight vector.
My understanding is that standardizing/scaling the input features only helps shorten the convergence time. But the algorithm shouldn't fail to converge altogether just because the features aren't standardized. Is my understanding correct?
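The divergence can be reproduced with a toy experiment of my own (the data and the helper `sgd_ridge` below are hypothetical, not from the code above): with squared loss, each SGD step multiplies the error in a weight roughly by (1 - alpha * xi^2), so a feature on the order of 1e3 makes xi^2 ~ 1e6 and forces alpha below ~1e-6 for stability, which is consistent with needing alpha ~ 1e-8:

```python
import numpy as np

def sgd_ridge(X, y, alpha, lam=0.01, epochs=50, seed=0):
    """Plain SGD for ridge regression; stops early if the weights overflow."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    beta = rng.normal(scale=0.01, size=d)
    for _ in range(epochs):
        for i in rng.permutation(m):
            err = X[i] @ beta - y[i]
            beta = beta - alpha * (err * X[i] + lam * beta)
            if not np.all(np.isfinite(beta)):
                return beta  # diverged
    return beta

rng = np.random.default_rng(42)
m = 200
# One feature on the order of 1e3 -- deliberately left unscaled
X = np.column_stack([np.ones(m), rng.uniform(0, 1000, m)])
y = 3.0 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=m)

big = sgd_ridge(X, y, alpha=1e-2)    # step too large for this feature scale
small = sgd_ridge(X, y, alpha=1e-8)  # small enough to stay stable

print("alpha=1e-2 stays finite:", np.all(np.isfinite(big)))
print("alpha=1e-8 stays finite:", np.all(np.isfinite(small)))
```

So the step size that keeps plain SGD stable depends on the feature scale; standardizing the features widens the range of usable learning rates rather than merely speeding up convergence.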