I have a very common use case: freeze the bottom layers of Inception and train only the top two layers, after which I lower the learning rate and fine-tune the whole Inception model. How do I resume training from an Inception v3 checkpoint with a different set of trainable variables?

Here is the code I run for the first part:

train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()

    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.001, 0.9, momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(
        total_loss, optimizer, variables_to_train=get_variables_to_train())

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=4500,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)
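
The helpers get_dataset(), load_batch(), get_variables_to_train() and get_init_fn() are defined elsewhere in my notebook; the last two roughly follow the standard TF-Slim fine-tuning pattern. A sketch of what they look like (the scope names and the pretrained checkpoint path here are illustrative, not my exact code):

def get_variables_to_train():
    # Phase 1: train only the new top layers; everything below stays frozen.
    scopes = ['InceptionV3/Logits', 'InceptionV3/AuxLogits']
    variables_to_train = []
    for scope in scopes:
        variables_to_train += tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
    return variables_to_train

def get_init_fn():
    # Restore the pretrained ImageNet weights for every layer except the replaced top layers.
    variables_to_restore = slim.get_variables_to_restore(
        exclude=['InceptionV3/Logits', 'InceptionV3/AuxLogits'])
    return slim.assign_from_checkpoint_fn(
        'inception_v3.ckpt',  # path to the pretrained checkpoint (illustrative)
        variables_to_restore)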

That runs fine. Then I run the second part of my code:

train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()

    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.0001, 0.9, momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=10000,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)

Note that in the second part I don't pass anything to the variables_to_train argument of create_train_op. Then this error shows up:

NotFoundError (see above for traceback): Key InceptionV3/Conv2d_4a_3x3/BatchNorm/beta/RMSProp not found in checkpoint 
    [[Node: save_1/RestoreV2_49 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_49/tensor_names, save_1/RestoreV2_49/shape_and_slices)]] 
    [[Node: save_1/Assign_774/_1550 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_2911_save_1/Assign_774", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]] 

I suspect it is looking for the RMSProp variables of the InceptionV3/Conv2d_4a_3x3 layer, which don't exist because I didn't train that layer in the previous run, so they were never written to the checkpoint. I'm not sure how to achieve what I want, because there is no example in the documentation showing how to do this.
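
To check that suspicion, the phase-1 checkpoint can be listed directly; a quick sketch (I believe tf.contrib.framework.list_variables accepts the checkpoint directory):

# List what the phase-1 checkpoint actually contains for the layer named in the error.
for name, shape in tf.contrib.framework.list_variables(train_dir):
    if 'Conv2d_4a_3x3' in name:
        print(name, shape)
# Presumably this shows 'InceptionV3/Conv2d_4a_3x3/BatchNorm/beta' and friends,
# but no '.../RMSProp' slot variables, since that layer was frozen in the first run.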

Answer