2017-03-09

I am training a CNTK model taken straight from their language understanding tutorial, but CNTK training slows down after every epoch.

Sequential([
    Embedding(emb_dim),                                 # word embeddings of size emb_dim
    OneWordWindow(),                                    # user-defined layer (not a CNTK built-in)
    BatchNormalization(),
    BiRecurrence(LSTM(hidden_dim), LSTM(hidden_dim)),   # bidirectional LSTM; helper sketched below
    BatchNormalization(),
    Dense(num_labels)                                   # per-token label scores
])
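BiRecurrence is not a built-in layer; the language understanding tutorial defines it as a small helper that runs one recurrence forward and one backward over the sequence and splices the outputs. A sketch along those lines (exact API names varied across the CNTK 2.0 betas):

import cntk as C

def BiRecurrence(fwd, bwd):
    # run one recurrence forward and one backward over the sequence,
    # then splice (concatenate) the two outputs at every step
    F = C.layers.Recurrence(fwd)
    G = C.layers.Recurrence(bwd, go_backwards=True)
    x = C.placeholder()
    return C.splice(F(x), G(x))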

Training appears to slow down after every epoch (see the log below). Is this caused by the learning rate schedule, or am I missing something here?

The learning rate schedule, used with Adam, is:

# 0.003 per sample for 4 epochs, 0.0015 for the next 24, then 0.0003 thereafter;
# the per-sample rates are converted to per-minibatch rates below
lr_per_sample = [0.003]*4 + [0.0015]*24 + [0.0003]
lr_per_minibatch = [x * minibatch_size for x in lr_per_sample]
lr_schedule = learning_rate_schedule(lr_per_minibatch, UnitType.minibatch, epoch_size)
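For context, the tutorial pairs this schedule with CNTK's adam_sgd learner roughly as follows (a sketch based on the tutorial; criterion, minibatch_size, and epoch_size come from the surrounding code, and the module path changed from cntk.learner to cntk.learners around the 2.0 release):

from cntk.learner import adam_sgd, momentum_as_time_constant_schedule

# momentum expressed as a time constant, as in the tutorial
momentum_schedule = momentum_as_time_constant_schedule(700)

learner = adam_sgd(criterion.parameters,
                   lr=lr_schedule, momentum=momentum_schedule,
                   low_memory=True,
                   gradient_clipping_threshold_per_sample=15,
                   gradient_clipping_with_truncation=True)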



Finished Epoch[1 of 1000]: [Training] loss = 0.149485 * 18059, metric = 3.46% * 18059 10.189s (1772.3 samples per second);
Finished Epoch[2 of 1000]: [Training] loss = 0.071990 * 17974, metric = 1.47% * 17974 51.836s (346.7 samples per second);
Finished Epoch[3 of 1000]: [Training] loss = 0.106882 * 17992, metric = 2.08% * 17992 60.175s (299.0 samples per second);
Finished Epoch[4 of 1000]: [Training] loss = 0.074046 * 17987, metric = 1.51% * 17987 68.655s (262.0 samples per second);
Finished Epoch[5 of 1000]: [Training] loss = 0.052539 * 17995, metric = 1.28% * 17995 77.627s (231.8 samples per second);
Finished Epoch[6 of 1000]: [Training] loss = 0.057482 * 18011, metric = 1.55% * 18011 86.191s (209.0 samples per second);

Answer


A bug was found in ProgressPrinter that affects the samples-per-second printout. The actual speed is not affected, only the reported speed. The bug has been fixed in master, so you can get the fix now, or you can wait for the next official release, which is scheduled for March 14, 2017.
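If you want to confirm that throughput really is constant, you can time each epoch independently of ProgressPrinter with plain Python (a minimal sketch; num_epochs and train_epoch are hypothetical placeholders for your actual training loop):

import time

for epoch in range(num_epochs):          # num_epochs: placeholder
    start = time.perf_counter()
    samples = train_epoch()              # hypothetical: runs one epoch, returns sample count
    elapsed = time.perf_counter() - start
    print("Epoch %d: %.1f samples/s" % (epoch + 1, samples / elapsed))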


Thanks. The epochs did not actually seem to be slowing down, but I was not sure what to make of it. – budha