Building an MLP for binary classification with TensorFlow

I am having some trouble trying to set up a multilayer perceptron for binary classification using TensorFlow.

I have a very large dataset (about 1.5 * 10^6 examples), each with a binary (0/1) label and 100 features. What I need to do is set up a simple MLP and then try varying the learning rate and the initialization scheme and record the results (it is an assignment). I am getting strange results, though: my MLP seems to get stuck early at a low but not great cost and never gets past it, and with fairly low learning rates the cost goes up almost immediately. I don't know whether the problem lies in how I set up the MLP (I made a few attempts; I'm posting the code for the last one) or whether I am missing something in my TensorFlow implementation.

CODE

import tensorflow as tf 
import numpy as np 
import scipy.io 

# Import and transform dataset 
print("Importing dataset.") 
dataset = scipy.io.mmread('tfidf_tsvd.mtx') 

with open('labels.txt') as f: 
    all_labels = f.readlines() 

all_labels = np.asarray(all_labels) 
all_labels = all_labels.reshape((1498271,1)) 

# Split dataset into training (66%) and test (33%) set 
training_set = dataset[0:1000000] 
training_labels = all_labels[0:1000000] 
test_set  = dataset[1000000:1498272] 
test_labels  = all_labels[1000000:1498272] 

print("Dataset ready.") 

# Parameters 
learning_rate = 0.01 #argv 
mini_batch_size = 100 
training_epochs = 10000 
display_step = 500 

# Network Parameters 
n_hidden_1 = 64 # 1st hidden layer of neurons 
n_hidden_2 = 32 # 2nd hidden layer of neurons 
n_hidden_3 = 16 # 3rd hidden layer of neurons 
n_input  = 100 # number of features after LSA 

# Tensorflow Graph input 
x = tf.placeholder(tf.float64, shape=[None, n_input], name="x-data") 
y = tf.placeholder(tf.float64, shape=[None, 1], name="y-labels") 

print("Creating model.") 

# Create model 
def multilayer_perceptron(x, weights): 
    # First hidden layer with SIGMOID activation 
    layer_1 = tf.matmul(x, weights['h1']) 
    layer_1 = tf.nn.sigmoid(layer_1) 
    # Second hidden layer with SIGMOID activation 
    layer_2 = tf.matmul(layer_1, weights['h2']) 
    layer_2 = tf.nn.sigmoid(layer_2) 
    # Third hidden layer with SIGMOID activation 
    layer_3 = tf.matmul(layer_2, weights['h3']) 
    layer_3 = tf.nn.sigmoid(layer_3) 
    # Output layer with SIGMOID activation 
    out_layer = tf.matmul(layer_2, weights['out']) 
    return out_layer 

# Layer weights, should change them to see results 
weights = { 
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], dtype=np.float64)),  
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], dtype=np.float64)), 
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3],dtype=np.float64)), 
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64)) 
} 

# Construct model 
pred = multilayer_perceptron(x, weights) 

# Define loss and optimizer 
cost = tf.nn.l2_loss(pred-y,name="squared_error_cost") 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 

# Initializing the variables 
init = tf.initialize_all_variables() 

print("Model ready.") 

# Launch the graph 
with tf.Session() as sess: 
    sess.run(init) 

    print("Starting Training.") 

    # Training cycle 
    for epoch in range(training_epochs): 
        #avg_cost = 0. 
        # minibatch loading 
        minibatch_x = training_set[mini_batch_size*epoch:mini_batch_size*(epoch+1)] 
        minibatch_y = training_labels[mini_batch_size*epoch:mini_batch_size*(epoch+1)] 
        # Run optimization op (backprop) and cost op 
        _, c = sess.run([optimizer, cost], feed_dict={x: minibatch_x, y: minibatch_y}) 

        # Compute average loss 
        avg_cost = c/(minibatch_x.shape[0]) 

        # Display logs per epoch 
        if epoch % display_step == 0: 
            print("Epoch:", '%05d' % epoch, "Training error=", "{:.9f}".format(avg_cost)) 

    print("Optimization Finished!") 

    # Test model 
    # Calculate accuracy 
    test_error = tf.nn.l2_loss(pred-y,name="squared_error_test_cost")/test_set.shape[0] 
    print("Test Error:", test_error.eval({x: test_set, y: test_labels})) 

OUTPUT

python nn.py 
Importing dataset. 
Dataset ready. 
Creating model. 
Model ready. 
Starting Training. 
Epoch: 00000 Training error= 0.331874878 
Epoch: 00500 Training error= 0.121587482 
Epoch: 01000 Training error= 0.112870921 
Epoch: 01500 Training error= 0.110293652 
Epoch: 02000 Training error= 0.122655269 
Epoch: 02500 Training error= 0.124971940 
Epoch: 03000 Training error= 0.125407845 
Epoch: 03500 Training error= 0.131942481 
Epoch: 04000 Training error= 0.121696954 
Epoch: 04500 Training error= 0.116669835 
Epoch: 05000 Training error= 0.129558477 
Epoch: 05500 Training error= 0.122952110 
Epoch: 06000 Training error= 0.124655344 
Epoch: 06500 Training error= 0.119827300 
Epoch: 07000 Training error= 0.125183779 
Epoch: 07500 Training error= 0.156429254 
Epoch: 08000 Training error= 0.085632880 
Epoch: 08500 Training error= 0.133913128 
Epoch: 09000 Training error= 0.114762624 
Epoch: 09500 Training error= 0.115107805 
Optimization Finished! 
Test Error: 0.116647016708 

This is MMN's suggestion:

weights = { 
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0, dtype=np.float64)),  # NB: stddev=0 here (presumably a typo for 0.01) leaves h1's initial weights at zero 
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01, dtype=np.float64)), 
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], stddev=0.01, dtype=np.float64)), 
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64)) 
} 

And this is the output:

Epoch: 00000 Training error= 0.107566668 
Epoch: 00500 Training error= 0.289380907 
Epoch: 01000 Training error= 0.339091784 
Epoch: 01500 Training error= 0.358559815 
Epoch: 02000 Training error= 0.122639698 
Epoch: 02500 Training error= 0.125160135 
Epoch: 03000 Training error= 0.126219718 
Epoch: 03500 Training error= 0.132500418 
Epoch: 04000 Training error= 0.121795254 
Epoch: 04500 Training error= 0.116499476 
Epoch: 05000 Training error= 0.124532673 
Epoch: 05500 Training error= 0.124484790 
Epoch: 06000 Training error= 0.118491177 
Epoch: 06500 Training error= 0.119977633 
Epoch: 07000 Training error= 0.127532511 
Epoch: 07500 Training error= 0.159053519 
Epoch: 08000 Training error= 0.083876224 
Epoch: 08500 Training error= 0.131488483 
Epoch: 09000 Training error= 0.123161189 
Epoch: 09500 Training error= 0.125011362 
Optimization Finished! 
Test Error: 0.129284643093 

Third hidden layer connected, thanks to MMN

There was a mistake in my code: I had two hidden layers instead of three. I corrected it like this:

'out': tf.Variable(tf.random_normal([n_hidden_3, 1], dtype=np.float64)) 

out_layer = tf.matmul(layer_3, weights['out']) 

I went back to the old stddev value, though, since it seems to cause smaller fluctuations in the cost function.

The output is still troubling:

Epoch: 00000 Training error= 0.477673073 
Epoch: 00500 Training error= 0.121848744 
Epoch: 01000 Training error= 0.112854530 
Epoch: 01500 Training error= 0.110597624 
Epoch: 02000 Training error= 0.122603499 
Epoch: 02500 Training error= 0.125051472 
Epoch: 03000 Training error= 0.125400717 
Epoch: 03500 Training error= 0.131999354 
Epoch: 04000 Training error= 0.121850889 
Epoch: 04500 Training error= 0.116551533 
Epoch: 05000 Training error= 0.129749704 
Epoch: 05500 Training error= 0.124600464 
Epoch: 06000 Training error= 0.121600218 
Epoch: 06500 Training error= 0.121249676 
Epoch: 07000 Training error= 0.132656938 
Epoch: 07500 Training error= 0.161801757 
Epoch: 08000 Training error= 0.084197352 
Epoch: 08500 Training error= 0.132197409 
Epoch: 09000 Training error= 0.123249055 
Epoch: 09500 Training error= 0.126602369 
Optimization Finished! 
Test Error: 0.129230736355 

Two more changes, thanks to Steven

Steven suggested replacing the sigmoid activation function with ReLU, so I tried that. Meanwhile, I noticed that I had not set an activation function for the output node, so I did that too (it should be easy to see what I changed).
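
The edited model presumably looked something like this (a sketch of the change described above; the exact edit was not posted):

# hidden layers switched from sigmoid to ReLU 
layer_1 = tf.nn.relu(tf.matmul(x, weights['h1'])) 
layer_2 = tf.nn.relu(tf.matmul(layer_1, weights['h2'])) 
layer_3 = tf.nn.relu(tf.matmul(layer_2, weights['h3'])) 
# the output node now gets an activation as well 
out_layer = tf.nn.relu(tf.matmul(layer_3, weights['out'])) 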

Starting Training. 
Epoch: 00000 Training error= 293.245977809 
Epoch: 00500 Training error= 0.290000000 
Epoch: 01000 Training error= 0.340000000 
Epoch: 01500 Training error= 0.360000000 
Epoch: 02000 Training error= 0.285000000 
Epoch: 02500 Training error= 0.250000000 
Epoch: 03000 Training error= 0.245000000 
Epoch: 03500 Training error= 0.260000000 
Epoch: 04000 Training error= 0.290000000 
Epoch: 04500 Training error= 0.315000000 
Epoch: 05000 Training error= 0.285000000 
Epoch: 05500 Training error= 0.265000000 
Epoch: 06000 Training error= 0.340000000 
Epoch: 06500 Training error= 0.180000000 
Epoch: 07000 Training error= 0.370000000 
Epoch: 07500 Training error= 0.175000000 
Epoch: 08000 Training error= 0.105000000 
Epoch: 08500 Training error= 0.295000000 
Epoch: 09000 Training error= 0.280000000 
Epoch: 09500 Training error= 0.285000000 
Optimization Finished! 
Test Error: 0.220196439287 

And here it is with the sigmoid activation function on every node, output node included:

Epoch: 00000 Training error= 0.110878121 
Epoch: 00500 Training error= 0.119393080 
Epoch: 01000 Training error= 0.109229532 
Epoch: 01500 Training error= 0.100436962 
Epoch: 02000 Training error= 0.113160662 
Epoch: 02500 Training error= 0.114200962 
Epoch: 03000 Training error= 0.109777990 
Epoch: 03500 Training error= 0.108218725 
Epoch: 04000 Training error= 0.103001394 
Epoch: 04500 Training error= 0.084145737 
Epoch: 05000 Training error= 0.119173495 
Epoch: 05500 Training error= 0.095796251 
Epoch: 06000 Training error= 0.093336573 
Epoch: 06500 Training error= 0.085062860 
Epoch: 07000 Training error= 0.104251661 
Epoch: 07500 Training error= 0.105910949 
Epoch: 08000 Training error= 0.090347288 
Epoch: 08500 Training error= 0.124480612 
Epoch: 09000 Training error= 0.109250224 
Epoch: 09500 Training error= 0.100245836 
Optimization Finished! 
Test Error: 0.110234139674 

I find these numbers very strange. In the first case it gets stuck at a higher cost than with the sigmoid, even though the sigmoid is the one that should saturate very early. In the second case it starts at a training error that is almost the final one... so it basically converges within one mini-batch. I am starting to think that I am not computing the cost correctly in this line: avg_cost = c / (minibatch_x.shape[0])
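
For reference, tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so that line prints half the mean squared error over the mini-batch rather than the mean squared error itself. A minimal NumPy check of what is being logged (a sketch with made-up numbers, not values from the run):

import numpy as np 

pred = np.array([[0.2], [0.9], [0.4]])  # hypothetical network outputs 
y    = np.array([[0.0], [1.0], [0.0]])  # hypothetical labels 
c = np.sum((pred - y) ** 2) / 2         # what tf.nn.l2_loss(pred - y) returns 
avg_cost = c / pred.shape[0]            # half the per-example mean squared error 
print(avg_cost)                         # 0.035 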

Have you tried changing your line 'cost = tf.nn.l2_loss(pred-y, name="squared_error_cost")' to 'cost = tf.nn.square(tf.sub(pred, y))'? – Kashyap

Could you print the accuracy (the percentage of correctly classified samples) during training? –
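
(For reference, one way to print that in this graph, as a sketch: it assumes 0/1 float labels and thresholds the network output at 0.5, which only makes sense once the output is a probability-like value.)

predicted = tf.cast(tf.greater(pred, 0.5), tf.float64) 
correct = tf.cast(tf.equal(predicted, y), tf.float64) 
accuracy = tf.reduce_mean(correct) 
# inside the session: print(accuracy.eval({x: minibatch_x, y: minibatch_y})) 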

@Kashyap: when printing the cost I get a "non-empty format string passed to object.__format__" error, and I can't seem to get around it. – Darkobra
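
(That error presumably happens because Kashyap's expression is an element-wise tensor rather than a scalar, so the fetched value is an array and "{:.9f}".format() fails on it. Also, the op lives at tf.square, not tf.nn.square. A reduced version, as a sketch:)

cost = tf.reduce_mean(tf.square(tf.sub(pred, y)))  # scalar mean squared error 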

ANSWERS

So it could be one of a couple of things:

1. You may be saturating the sigmoid units (as MMN mentioned). I would suggest trying ReLU units instead.

Replace:

tf.nn.sigmoid(layer_n) 

with:

tf.nn.relu(layer_n) 

2. Your model may not have enough expressive power to actually learn your data, i.e. it may need to be deeper.

3. You could also try a different optimizer, like Adam.

Replace:

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 

with:

optimizer = tf.train.AdamOptimizer().minimize(cost) 

Some other points:

1. You should add bias terms to your weights, like this:

biases = { 
    'b1': tf.Variable(tf.random_normal([n_hidden_1], dtype=np.float64)),  
    'b2': tf.Variable(tf.random_normal([n_hidden_2], dtype=np.float64)), 
    'b3': tf.Variable(tf.random_normal([n_hidden_3], dtype=np.float64)), 
    'bout': tf.Variable(tf.random_normal([1], dtype=np.float64)) 
} 

def multilayer_perceptron(x, weights): 
    # First hidden layer with SIGMOID activation 
    layer_1 = tf.matmul(x, weights['h1']) + biases['b1'] 
    layer_1 = tf.nn.sigmoid(layer_1) 
    # Second hidden layer with SIGMOID activation 
    layer_2 = tf.matmul(layer_1, weights['h2']) + biases['b2'] 
    layer_2 = tf.nn.sigmoid(layer_2) 
    # Third hidden layer with SIGMOID activation 
    layer_3 = tf.matmul(layer_2, weights['h3']) + biases['b3'] 
    layer_3 = tf.nn.sigmoid(layer_3) 
    # Linear output layer (feeds from layer_3, fixing the layer_2 slip in the question) 
    out_layer = tf.matmul(layer_3, weights['out']) + biases['bout'] 
    return out_layer 
    
2. You can also decay the learning rate over time, like this:

learning_rate = tf.train.exponential_decay(INITIAL_LEARNING_RATE, 
                                           global_step, 
                                           decay_steps, 
                                           LEARNING_RATE_DECAY_FACTOR, 
                                           staircase=True) 
    

You just need to define the decay steps, i.e. when to decay, and LEARNING_RATE_DECAY_FACTOR, i.e. by how much to decay.
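
A fuller sketch of how those pieces fit together (the concrete numbers here are placeholders, not values from the thread; note that global_step must be passed to minimize() so the decay actually advances):

global_step = tf.Variable(0, trainable=False)      # counts optimizer steps 
learning_rate = tf.train.exponential_decay(0.01,   # INITIAL_LEARNING_RATE (placeholder) 
                                           global_step, 
                                           10000,  # decay_steps: decay every 10k steps (placeholder) 
                                           0.96,   # LEARNING_RATE_DECAY_FACTOR (placeholder) 
                                           staircase=True) 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost, global_step=global_step) 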

I have edited the question with your suggestions. Some notes: 1. ReLU gives very strange values, you can read about it in the edited question. 2. I did make the model deeper: because of a mistake of mine it had 2 hidden layers before, and now it has 3. 3. I can't really use the Adam optimizer, as that would defeat the purpose of my assignment, which is to play with the learning rate and some initialization parameters. Do you think I am computing the cost correctly after each mini_batch? – Darkobra

There are different cost functions, so it really depends on your task. I can't really answer that, because without knowing the task I can't say whether l2 loss is the right one, or cross-entropy, or something else. You are using l2 loss. – Steven

Another simple thing that is "obvious" but sometimes goes unnoticed: make sure your labels line up with the correct training inputs. – Steven

Your weights are initialized with a stddev of 1, so the output of layer 1 will have a stddev of around 10. This may be pushing the sigmoid to the point where most gradients are 0.
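
(A quick numeric check of that claim, as a sketch assuming roughly unit-scale inputs:)

import numpy as np 

n_input = 100 
W = np.random.randn(n_input, 64)    # stddev 1, as in the question 
x = np.random.randn(1000, n_input)  # unit-scale inputs 
print(np.std(np.dot(x, W)))         # ~ sqrt(100) = 10 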

Could you try initializing the hidden weights with a stddev of .01?

It looks like this: 00000 Tr err = 0.107566, 00500 Tr err = 0.289380, 01000 Tr err = 0.339091, 01500 Tr err = 0.358559, 02000 Tr err = 0.122639, 02500 Tr err = 0.125160, 03000 Tr err = 0.126219, 03500 Tr err = 0.132500, 04000 Tr err = 0.121795, 04500 Tr err = 0.116499, 05000 Tr err = 0.124532, 05500 Tr err = 0.124484, 06000 Tr err = 0.118491, 06500 Tr err = 0.119977, 07000 Tr err = 0.127532, 07500 Tr err = 0.159053, 08000 Tr err = 0.083876, 08500 Tr err = 0.131488, 09000 Tr err = 0.123161, 09500 Tr err = 0.125011, Test error: 0.129284643 – Darkobra

Hmm, I can't format it properly in a comment, but I can tell you that it did not solve my problem. – Darkobra

Hmm, maybe this is the best you are going to get with a two-layer network then? Did you mean not to use h3? – MMN

In addition to the answers above, I would suggest you try the cost function tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None).

Since this is binary classification, sigmoid_cross_entropy_with_logits is the cost function you should try.
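
A minimal sketch of the swap, assuming the pre-1.0 positional signature sigmoid_cross_entropy_with_logits(logits, targets); the output layer must stay linear, because this op applies the sigmoid itself:

# pred must be the raw (linear) output of the network, not a sigmoid output 
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y)) 
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) 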

I would also suggest plotting a line graph of training and test accuracy against the number of epochs, i.e. checking whether the model is overfitting.

If it is not overfitting, try making your neural network more complex, by increasing the number of neurons and the number of layers. You will reach a point beyond which training accuracy keeps increasing while validation accuracy does not; that point gives the best model.
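
(A sketch of such a plot with matplotlib; train_accs and test_accs are hypothetical per-epoch accuracy lists you would collect during training:)

import matplotlib.pyplot as plt 

train_accs = [0.60, 0.68, 0.74, 0.78, 0.80]  # hypothetical values 
test_accs  = [0.59, 0.66, 0.70, 0.71, 0.71]  # hypothetical values 
plt.plot(train_accs, label="train accuracy") 
plt.plot(test_accs, label="test accuracy") 
plt.xlabel("epoch") 
plt.ylabel("accuracy") 
plt.legend() 
plt.show() 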

Hey Pramod, thanks for your reply. I was reading up on the cost function you mentioned, but the description suggests it is best suited to cases where the labels are not mutually exclusive, whereas in my model they are. I am now tuning my network with the help of TensorBoard, and I will definitely try to make my network more complex. – Darkobra

As per the question, "I have a very large dataset (about 1.5 * 10^6 examples), each with a binary (0/1) label." This is binary-class classification; each instance is either true (1) or false (0). What do you mean by mutually exclusive? I can't get it. –

I think you are talking about this part of the docs: "Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive." Going by your description, I think your labels are not mutually exclusive and independent. Take a look at this: http://stats.stackexchange.com/questions/107768/ (on the difference between multi-label and multi-class classification) –