
tensorflow - CTC loss decreases, but decoder output is empty

I am using TensorFlow's ctc_loss and ctc_greedy_decoder. When I train, the model's CTC cost goes down, but when I decode, the output is always empty. Can this happen? My code is below.

I am wondering whether I am preprocessing the data correctly. I am predicting phone sequences from frames of fbank features. There are 48 phones (48 classes), and each frame has 69 features. I set num_classes to 49, so the logits have shape (max_time_steps, num_samples, 49). As for my sparse tensor, its values range from 0 to 47 (48 is reserved for the blank). I never add blanks to the data, and I don't think I should, right? (Or should I be doing something like that?)
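For reference, this is roughly how targets for tf.nn.ctc_loss are usually packed into the sparse placeholder; a minimal sketch, assuming labels is a Python list of per-utterance phone-ID lists in the range 0 to 47 (the helper name to_sparse_tuple is mine, not from the question):

import numpy as np

def to_sparse_tuple(labels):
    # Pack e.g. [[3, 7, 1], [5, 2]] into the (indices, values, shape)
    # triplet that tf.sparse_placeholder expects. The blank (48) never
    # appears here; ctc_loss handles it internally.
    indices, values = [], []
    for batch_idx, seq in enumerate(labels):
        for time_idx, label in enumerate(seq):
            indices.append([batch_idx, time_idx])
            values.append(label)
    shape = [len(labels), max(len(seq) for seq in labels)]
    return (np.asarray(indices, dtype=np.int64),
            np.asarray(values, dtype=np.int32),
            np.asarray(shape, dtype=np.int64))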

During training, the cost goes down after every iteration and every epoch, but the edit distance never decreases. In fact, it stays at 1, because the decoder almost always predicts an empty sequence. Is there something I am doing wrong?
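For what it's worth, an edit distance pinned at 1 is exactly what tf.edit_distance reports for an empty hypothesis: with normalize=True (the default) the distance is divided by the truth length, and turning an empty sequence into an N-label truth takes N insertions, so N/N = 1. A small self-contained check (names are mine):

import tensorflow as tf

# One sample: empty hypothesis vs. a 3-label truth sequence.
hyp = tf.SparseTensor(indices=tf.zeros([0, 2], tf.int64),
                      values=tf.zeros([0], tf.int32),
                      dense_shape=[1, 1])
truth = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2]],
                        values=tf.constant([3, 7, 1], tf.int32),
                        dense_shape=[1, 3])

with tf.Session() as sess:
    print(sess.run(tf.edit_distance(hyp, truth)))  # [1.0]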

import pickle

import tensorflow as tf

# num_features, num_hidden, num_layers, num_classes, batch_size, epochs
# and initial_learning_rate, plus get_data and get_next_batch, are
# defined elsewhere in the script.

graph = tf.Graph() 
with graph.as_default(): 

    inputs = tf.placeholder(tf.float32, [None, None, num_features]) 
    targets = tf.sparse_placeholder(tf.int32) 
    seq_len = tf.placeholder(tf.int32, [None]) 
    seq_len_t = tf.placeholder(tf.int32, [None])  # unused; test-time lengths are fed through seq_len below 
    # Build one LSTMCell per layer; [cell] * num_layers would reuse the 
    # same cell object (and its weights) for every layer. 
    stack = tf.contrib.rnn.MultiRNNCell( 
        [tf.contrib.rnn.LSTMCell(num_hidden) for _ in range(num_layers)]) 
    outputs, _ = tf.nn.dynamic_rnn(stack, inputs, seq_len, dtype=tf.float32) 

    input_shape = tf.shape(inputs) 
    outputs = tf.reshape(outputs, [-1, num_hidden]) 
    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], 
                                        stddev=0.1)) 

    b = tf.Variable(tf.constant(0., shape=[num_classes])) 


    logits = tf.matmul(outputs, W) + b 

    logits = tf.reshape(logits, [input_shape[0], -1, num_classes]) 

    # ctc_loss and ctc_greedy_decoder expect time-major logits: (max_time, batch_size, num_classes). 
    logits = tf.transpose(logits, (1, 0, 2)) 

    loss = tf.nn.ctc_loss(targets, logits, seq_len) 
    cost = tf.reduce_mean(loss) 

    decoded, log_probabilities = tf.nn.ctc_greedy_decoder(logits, seq_len, merge_repeated=True) 
    optimizer = tf.train.MomentumOptimizer(initial_learning_rate, 0.1).minimize(cost) 
    err = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0],tf.int32), targets)) 
    saver = tf.train.Saver()  

with tf.Session(graph=graph) as session: 

    X, Y, ids, seq_length, label_to_int, int_to_label = get_data('train') 

    session.run(tf.global_variables_initializer()) 

    print(seq_length) 

    # Ceiling division; len(X)//batch_size + 1 overshoots when len(X) is an exact multiple of batch_size. 
    num_batches = (len(X) + batch_size - 1)//batch_size 



    for epoch in range(epochs): 
        print('epoch ' + str(epoch)) 
        for batch in range(num_batches): 
            input_X, target_input, seq_length_X = get_next_batch(batch, X, Y, seq_length, batch_size) 
            feed = {inputs: input_X, 
                    targets: target_input, 
                    seq_len: seq_length_X} 

            _, print_cost, print_er = session.run([optimizer, cost, err], feed_dict=feed) 
            print('epoch ' + str(epoch) + ' batch ' + str(batch) + ' cost: ' + str(print_cost) + ' er: ' + str(print_er)) 

    save_path = saver.save(session, '/tmp/model.ckpt') 
    print('model saved') 

    X_t, ids_t, seq_length_t = get_data('test') 

    feed_t = {inputs: X_t, seq_len: seq_length_t} 
    print(X.shape) 
    print(X_t.shape) 
    print(type(seq_length_t[0])) 


    de, lo = session.run([decoded[0], log_probabilities], feed_dict=feed_t) 
    with open('predict.pickle', 'wb') as f: 
        pickle.dump((de, lo), f) 
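Note that decoded[0] evaluates to a SparseTensorValue, so it helps to unpack it per utterance before pickling or inspecting it; an utterance the decoder left empty simply has no rows in indices. A sketch using the question's variables (sparse_to_sequences is my name; int_to_label comes from get_data('train') above):

import collections

def sparse_to_sequences(st, num_samples):
    # One label list per utterance; utterances with no decoded
    # entries stay as [].
    seqs = collections.defaultdict(list)
    for (batch_idx, _), label in zip(st.indices, st.values):
        seqs[batch_idx].append(label)
    return [seqs[i] for i in range(num_samples)]

predictions = sparse_to_sequences(de, len(X_t))
for utt_id, seq in zip(ids_t[:5], predictions[:5]):
    print(utt_id, [int_to_label[l] for l in seq])  # [] means an empty decode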

Is the network fully trained (has the training error plateaued)? Empty predictions are typically encountered at the beginning of training; e.g. search for "the curious blank label in CTC". And no, you do not have to add blanks to the targets; the blank is for (CTC-)internal use only. – Harry
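Harry's point is easy to reproduce: when the blank class has the largest logit at every frame, which is what an under-trained CTC model tends to produce, ctc_greedy_decoder strips everything and returns an empty sequence. A toy sketch with 49 classes as in the question:

import numpy as np
import tensorflow as tf

max_time, batch_size, num_classes = 10, 1, 49  # class 48 is the blank

# Logits that always prefer the blank, as an under-trained model would.
logits = np.full((max_time, batch_size, num_classes), -1.0, dtype=np.float32)
logits[:, :, 48] = 5.0

decoded, _ = tf.nn.ctc_greedy_decoder(tf.constant(logits),
                                      sequence_length=[max_time])
with tf.Session() as sess:
    print(sess.run(decoded[0]).values)  # [] -- the decoded sequence is empty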

Answers


I ran into the same problem and solved it by raising the initial learning rate.

Also, printing the LER (label error rate) on a validation set is necessary for checking the progress of training.
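With the graph from the question this only takes a few extra lines; a minimal sketch, assuming X_val, Y_val and seq_length_val come from a held-out split and reusing the question's get_next_batch (both are assumptions on my side):

# Run once per epoch; err is the edit-distance op from the graph above.
val_errs = []
for batch in range(len(X_val) // batch_size):
    input_X, target_input, seq_length_X = get_next_batch(
        batch, X_val, Y_val, seq_length_val, batch_size)
    # No optimizer in the run list, so no weights are updated here.
    val_errs.append(session.run(err, feed_dict={inputs: input_X,
                                                targets: target_input,
                                                seq_len: seq_length_X}))
print('epoch ' + str(epoch) + ' validation LER: ' + str(sum(val_errs) / len(val_errs)))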