
Caffe: extremely high loss when learning a simple linear function

I am trying to train a neural network to learn the function y = x1 + x2 + x3. The goal is to play around with Caffe in order to learn and understand it better. The required data is generated synthetically in Python and written out as an LMDB database.

Data generation code:

import numpy as np
import lmdb
import caffe

Ntrain = 100
Ntest = 20
K = 3   # channels
H = 1   # height
W = 1   # width

# Synthetic inputs: three random integers per sample
Xtrain = np.random.randint(0, 1000, size=(Ntrain, K, H, W))
Xtest = np.random.randint(0, 1000, size=(Ntest, K, H, W))

# Targets: the sum of the three inputs
ytrain = Xtrain[:,0,0,0] + Xtrain[:,1,0,0] + Xtrain[:,2,0,0]
ytest = Xtest[:,0,0,0] + Xtest[:,1,0,0] + Xtest[:,2,0,0]

env = lmdb.open('expt/expt_train')

for i in range(Ntrain):
    datum = caffe.proto.caffe_pb2.Datum()
    datum.channels = Xtrain.shape[1]
    datum.height = Xtrain.shape[2]
    datum.width = Xtrain.shape[3]
    datum.data = Xtrain[i].tobytes()
    datum.label = int(ytrain[i])
    str_id = '{:08}'.format(i)

    with env.begin(write=True) as txn:
        txn.put(str_id.encode('ascii'), datum.SerializeToString())


env = lmdb.open('expt/expt_test')

for i in range(Ntest):
    datum = caffe.proto.caffe_pb2.Datum()
    datum.channels = Xtest.shape[1]
    datum.height = Xtest.shape[2]
    datum.width = Xtest.shape[3]
    datum.data = Xtest[i].tobytes()
    datum.label = int(ytest[i])
    str_id = '{:08}'.format(i)

    with env.begin(write=True) as txn:
        txn.put(str_id.encode('ascii'), datum.SerializeToString())
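
As a quick sanity check on what was written (my sketch, assuming pycaffe's caffe.io helpers are available), a record can be read back and decoded; note that caffe.io.datum_to_array interprets datum.data as one uint8 per element:

# Read the first record back and decode it
env = lmdb.open('expt/expt_train', readonly=True)
with env.begin() as txn:
    raw = txn.get(b'00000000')   # key produced by '{:08}'.format(0)

datum = caffe.proto.caffe_pb2.Datum()
datum.ParseFromString(raw)
# datum_to_array parses datum.data as raw uint8 bytes and reshapes to
# (channels, height, width); it fails if the byte count does not match.
arr = caffe.io.datum_to_array(datum)
print(arr.shape, datum.label)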

The solver.prototxt file:

net: "expt/expt.prototxt" 

display: 1 
max_iter: 200 
test_iter: 20 
test_interval: 100 

base_lr: 0.000001 
momentum: 0.9 
# weight_decay: 0.0005 

lr_policy: "inv" 
# gamma: 0.5 
# stepsize: 10 
# power: 0.75 

snapshot_prefix: "expt/expt" 
snapshot_diff: true 

solver_mode: CPU 
solver_type: SGD 

debug_info: true 
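
One way to run this solver (not spelled out in the post) is through pycaffe's solver interface; the solver file path below is an assumption:

import caffe

caffe.set_mode_cpu()
# 'expt/solver.prototxt' is a hypothetical name for the solver file above
solver = caffe.SGDSolver('expt/solver.prototxt')
solver.solve()   # runs training until max_iter (200)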

The Caffe model:

name: "expt" 


layer { 
    name: "Expt_Data_Train" 
    type: "Data" 
    top: "data" 
    top: "label"  

    include { 
     phase: TRAIN 
    } 

    data_param { 
     source: "expt/expt_train" 
     backend: LMDB 
     batch_size: 1 
    } 
} 


layer { 
    name: "Expt_Data_Validate" 
    type: "Data" 
    top: "data" 
    top: "label"  

    include { 
     phase: TEST 
    } 

    data_param { 
     source: "expt/expt_test" 
     backend: LMDB 
     batch_size: 1 
    } 
} 


layer { 
    name: "IP" 
    type: "InnerProduct" 
    bottom: "data" 
    top: "ip" 

    inner_product_param { 
     num_output: 1 

     weight_filler { 
      type: "constant"   # value defaults to 0 
     } 

     bias_filler { 
      type: "constant"   # value defaults to 0 
     } 
    } 
} 


layer { 
    name: "Loss" 
    type: "EuclideanLoss" 
    bottom: "ip" 
    bottom: "label" 
    top: "loss" 
} 
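
For reference, Caffe's EuclideanLoss layer computes

loss = 1/(2N) * Σ_n ||ŷ_n − y_n||²

averaged over the batch (here N = 1), so a reported loss of 233,655 corresponds to a per-sample prediction error of roughly √(2 · 233,655) ≈ 684, i.e. comparable to the scale of the labels rather than three orders of magnitude above them.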

Running this, I get a loss of 233,655 on the test data. This is shocking, since the loss is three orders of magnitude larger than the numbers in the training and test sets, and the function to be learned is a simple linear one. I cannot seem to figure out the error in my code. Any suggestions/input would be greatly appreciated.

Answer


The loss is so large in this case because Caffe accepts the data (i.e. datum.data) only in uint8 format and the label (datum.label) in int32 format. However, for the label, numpy.int64 also seems to work. I think datum.data works only with uint8 because Caffe was developed primarily for computer vision tasks, where the inputs are images whose RGB values lie in the range [0, 255]; uint8 captures this information with the least amount of memory. I modified the data generation code as follows:

Xtrain = np.uint8(np.random.randint(0, 256, size=(Ntrain, K, H, W)))
Xtest = np.uint8(np.random.randint(0, 256, size=(Ntest, K, H, W)))

# Cast to a wider integer type before summing, so the targets
# do not overflow uint8 arithmetic.
ytrain = Xtrain[:,0,0,0].astype(int) + Xtrain[:,1,0,0].astype(int) + Xtrain[:,2,0,0].astype(int)
ytest = Xtest[:,0,0,0].astype(int) + Xtest[:,1,0,0].astype(int) + Xtest[:,2,0,0].astype(int)
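
To see why the dtype matters (a minimal sketch, not part of the original answer): tobytes() on an int64 array emits 8 bytes per element, whereas Caffe parses datum.data as one uint8 per element, so the int64 arrays from the question were not decoded as intended:

import numpy as np

x64 = np.array([3], dtype=np.int64)
x8 = np.array([3], dtype=np.uint8)
print(len(x64.tobytes()))   # 8 bytes for one element
print(len(x8.tobytes()))    # 1 byte, matching Caffe's expectation for Datum.data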

After playing with the network parameters (learning rate, number of iterations, etc.), I get errors on the order of 10^-6, which I think is pretty good!
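
The learned parameters can also be inspected with pycaffe (a sketch; the snapshot file name below is an assumption based on snapshot_prefix and max_iter):

import caffe

caffe.set_mode_cpu()
# 'expt/expt_iter_200.caffemodel' is a hypothetical snapshot name
net = caffe.Net('expt/expt.prototxt', 'expt/expt_iter_200.caffemodel', caffe.TEST)
w = net.params['IP'][0].data   # weight blob, shape (1, 3)
b = net.params['IP'][1].data   # bias blob, shape (1,)
print(w, b)   # for y = x1 + x2 + x3, expect weights near 1 and bias near 0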