2017-10-05

I am trying to write synchronous training code in distributed TensorFlow using SyncReplicasOptimizer and MonitoredTrainingSession. Why does the chief get stuck during training while the workers never start training when SyncReplicasOptimizer and MonitoredTrainingSession are used?

The problem I am facing is that after a few steps the chief stops training, and none of the workers ever start training. Has anyone run into this?

Here is the code I wrote. The data is read from TFRecord files, and I followed exactly the approach described on the TensorFlow website.

def build(self):
    # Build the model graph and the training op.
    self.modelObj = Model(self.imagesize, self.targetSize)
    self.modelObj.model()
    self.global_step = tf.contrib.framework.get_or_create_global_step()
    self.opt = tf.train.AdamOptimizer(self.learningrate)
    if self.syncTraining:
        # Wrap the optimizer so gradients from all workers are aggregated
        # before each update (synchronous training).
        self.trainer = tf.train.SyncReplicasOptimizer(
            self.opt,
            replicas_to_aggregate=self.num_workers,
            total_num_replicas=self.num_workers)
    else:
        self.trainer = self.opt
    self.trainstep = self.trainer.minimize(self.modelObj.loss,
                                           global_step=self.global_step)
    self.saver = tf.train.Saver(max_to_keep=1)
    self.summary_op = tf.summary.merge_all()
    self.init_op = tf.global_variables_initializer()
    if self.syncTraining:
        # The hook manages the sync queues and the chief's token queue.
        self.sync_replicas_hook = self.trainer.make_session_run_hook(
            is_chief=(self.task_index == 0))


def train(self):
    if self.syncTraining:
        # MonitoredTrainingSession handles initialization, checkpoint
        # recovery, and running the sync-replicas hook.
        with tf.train.MonitoredTrainingSession(
                master=self.server.target,
                is_chief=(self.task_index == 0),
                checkpoint_dir=self.logdir,
                hooks=[self.sync_replicas_hook]) as self.session:
            step = 0
            try:
                while not self.session.should_stop():
                    # Pull the next batch from the TFRecord input pipeline.
                    trainx, trainy_ = self.session.run([self.trainx, self.trainy_])
                    feed = {self.modelObj.x: trainx,
                            self.modelObj.y_: trainy_,
                            self.modelObj.batch: self.batch_size,
                            self.modelObj.keep_prob: 0.7}
                    # Run one training step and fetch the current loss.
                    _, trainloss = self.session.run(
                        [self.trainstep, self.modelObj.loss], feed_dict=feed)
                    print("step: %d, training loss %f" % (step, trainloss))
                    step += 1
            except tf.errors.OutOfRangeError:
                print('training finished, number of epochs reached')
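
For reference, self.server and self.task_index in the code above come from a standard cluster setup along these lines; the host addresses and the job_name/task_index variables below are placeholders, not my exact code:

import tensorflow as tf

# Illustrative cluster definition; replace the host:port strings with real ones.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]})

# job_name is "ps" or "worker"; task_index identifies this task within its job.
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()  # parameter servers only serve variables
else:
    # build() and train() from above would run here, with
    # self.server = server and self.task_index = task_index.
    pass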

Answers

Answer 1:

Found the solution.

Delay the chief worker by adding

time.sleep(5) 

Also try doing the same for the parameter servers at startup, and run the parameter servers on CPU instead of GPU.
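
Roughly, the delay goes before the chief builds its session; this placement is my reading of the fix, so adapt it to your setup:

import time

# Give the parameter servers and the other workers a head start before the
# chief builds its MonitoredTrainingSession.
if task_index == 0:  # chief
    time.sleep(5)

with tf.train.MonitoredTrainingSession(master=server.target,
                                       is_chief=(task_index == 0),
                                       hooks=[sync_replicas_hook]) as sess:
    pass  # training loop as in the question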

Answer 2:

Yes, the ps should not be placed on the GPU. I had this problem too. I solved it by explicitly specifying ps_device="/job:ps/cpu:0" in tf.train.replica_device_setter. The whole code is as follows:

with tf.device(tf.train.replica_device_setter(
        ps_device="/job:ps/cpu:0",
        worker_device="/job:worker/task:%d" % (worker_index),
        cluster=cluster_spec)):
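
For context, a rough sketch of how this fits around graph construction; the cluster_spec definition here is just an example, not part of my actual code:

cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"]})

with tf.device(tf.train.replica_device_setter(
        ps_device="/job:ps/cpu:0",  # pin variables to the ps job, on CPU
        worker_device="/job:worker/task:%d" % (worker_index),
        cluster=cluster_spec)):
    # Variables created here are placed on the ps (CPU);
    # ops run on this worker's device.
    global_step = tf.contrib.framework.get_or_create_global_step()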

Thanks a lot @prateek agrawal