2015-10-15

I want to use vector labels with Caffe, not integers. I checked some answers, and HDF5 seems to be the better way. But then I got stuck with errors like the one below. How do I feed Caffe multi-label data in HDF5 format?

accuracy_layer.cpp:34] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (50 vs. 200) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W , with integer values in {0, 1, ..., C-1}.

The HDF5 file is created with:

f = h5py.File('train.h5', 'w') 
f.create_dataset('data', (1200, 128), dtype='f8') 
f.create_dataset('label', (1200, 4), dtype='f4') 

And my network is generated by:

def net(hdf5, batch_size): 
    n = caffe.NetSpec() 
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2) 
    n.ip1 = L.InnerProduct(n.data, num_output=50, weight_filler=dict(type='xavier')) 
    n.relu1 = L.ReLU(n.ip1, in_place=True) 
    n.ip2 = L.InnerProduct(n.relu1, num_output=50, weight_filler=dict(type='xavier')) 
    n.relu2 = L.ReLU(n.ip2, in_place=True) 
    n.ip3 = L.InnerProduct(n.relu2, num_output=4, weight_filler=dict(type='xavier')) 
    n.accuracy = L.Accuracy(n.ip3, n.label) 
    n.loss = L.SoftmaxWithLoss(n.ip3, n.label) 
    return n.to_proto() 

with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f: 
    f.write(str(net('/home/romulus/code/project/train.h5list', 50))) 

with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f: 
    f.write(str(net('/home/romulus/code/project/test.h5list', 20))) 

It seems I should increase the number of labels and store them as integers rather than arrays, but if I do that, Caffe complains that the number of data items and the number of labels are not equal.

So, what is the correct format for feeding multi-label data?

Also, I really wonder why nobody has simply documented how the HDF5 data format maps onto Caffe blobs.

Shouldn't 'data' be of dtype 'f4' as well? – Shai

Changing it to f4 does not change the error. –

Possibly a valuable resource: http://stackoverflow.com/questions/33112941/multiple-category-classification-in-caffe –

Answer

To answer the question in the title:

The HDF5 file should have two datasets at its root, named "data" and "label" respectively, each with shape (number of samples, dimension). I only use one-dimensional data, so I am not sure what the ordering of channel, width, and height is; maybe it does not matter. The dtype should be float or double.

Sample code for creating the training set with h5py:

 
import h5py, os 
import numpy as np 

f = h5py.File('train.h5', 'w') 
# 1200 data, each is a 128-dim vector 
f.create_dataset('data', (1200, 128), dtype='f8') 
# Data's labels, each is a 4-dim vector 
f.create_dataset('label', (1200, 4), dtype='f4') 

# Fill in something with fixed pattern 
# Regularize values to between 0 and 1, or SigmoidCrossEntropyLoss will not work 
for i in range(1200): 
    a = np.empty(128) 
    if i % 4 == 0: 
        for j in range(128): 
            a[j] = j / 128.0 
        l = [1, 0, 0, 0] 
    elif i % 4 == 1: 
        for j in range(128): 
            a[j] = (128 - j) / 128.0 
        l = [1, 0, 1, 0] 
    elif i % 4 == 2: 
        for j in range(128): 
            a[j] = (j % 6) / 128.0 
        l = [0, 1, 1, 0] 
    elif i % 4 == 3: 
        for j in range(128): 
            a[j] = (j % 4) * 4 / 128.0 
        l = [1, 0, 1, 1] 
    f['data'][i] = a 
    f['label'][i] = l 

f.close() 

Also, the accuracy layer is not needed; simply removing it is fine. The next problem is the loss layer: since SoftmaxWithLoss produces only one output (the index of the dimension with the maximum value), it cannot be used for a multi-label problem. Thanks to Adian and Shai, I found that SigmoidCrossEntropyLoss works well in this case.
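For reference, here is a minimal NumPy sketch of what I understand SigmoidCrossEntropyLoss to compute: an element-wise sigmoid followed by binary cross-entropy, summed over the labels and averaged over the batch. The function below is my own illustration, not Caffe code.

```python
import numpy as np

def sigmoid_cross_entropy_loss(logits, labels):
    """Binary cross-entropy on element-wise sigmoids.

    logits: (N, C) raw scores (e.g. the ip3 output)
    labels: (N, C) 0/1 vector labels
    Returns the loss summed over the C labels, averaged over the N samples.
    """
    p = 1.0 / (1.0 + np.exp(-logits))  # element-wise sigmoid
    eps = 1e-12                        # avoid log(0)
    ce = -(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    return ce.sum() / logits.shape[0]

# A confident, correct prediction gives a near-zero loss:
logits = np.array([[10.0, -10.0, 10.0, -10.0]])
labels = np.array([[1.0, 0.0, 1.0, 0.0]])
print(sigmoid_cross_entropy_loss(logits, labels))  # close to 0
```

Because each label dimension gets its own independent sigmoid, each of the 4 outputs can be ON or OFF separately, which is exactly what a vector label needs.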

Below is the complete code, from data creation through training the network to getting the test results:

main.py (modified from the Caffe LeNet example)

 
import os, sys 

PROJECT_HOME = '.../project/' 
CAFFE_HOME = '.../caffe/' 
os.chdir(PROJECT_HOME) 

sys.path.insert(0, CAFFE_HOME + 'caffe/python') 
import caffe, h5py 

from pylab import * 
from caffe import layers as L 

def net(hdf5, batch_size): 
    n = caffe.NetSpec() 
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2) 
    n.ip1 = L.InnerProduct(n.data, num_output=50, weight_filler=dict(type='xavier')) 
    n.relu1 = L.ReLU(n.ip1, in_place=True) 
    n.ip2 = L.InnerProduct(n.relu1, num_output=50, weight_filler=dict(type='xavier')) 
    n.relu2 = L.ReLU(n.ip2, in_place=True) 
    n.ip3 = L.InnerProduct(n.relu2, num_output=4, weight_filler=dict(type='xavier')) 
    n.loss = L.SigmoidCrossEntropyLoss(n.ip3, n.label) 
    return n.to_proto() 

with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f: 
    f.write(str(net(PROJECT_HOME + 'train.h5list', 50))) 
with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f: 
    f.write(str(net(PROJECT_HOME + 'test.h5list', 20))) 

caffe.set_device(0) 
caffe.set_mode_gpu() 
solver = caffe.SGDSolver(PROJECT_HOME + 'auto_solver.prototxt') 

solver.net.forward() 
solver.test_nets[0].forward() 
solver.step(1) 

niter = 200 
test_interval = 10 
train_loss = zeros(niter) 
test_acc = zeros(int(np.ceil(niter * 1.0 / test_interval))) 
print len(test_acc) 
output = zeros((niter, 8, 4)) 

# The main solver loop 
for it in range(niter): 
    solver.step(1)  # SGD by Caffe 
    train_loss[it] = solver.net.blobs['loss'].data 
    solver.test_nets[0].forward(start='data') 
    output[it] = solver.test_nets[0].blobs['ip3'].data[:8] 

    if it % test_interval == 0: 
        print 'Iteration', it, 'testing...' 
        correct = 0 
        data = solver.test_nets[0].blobs['ip3'].data 
        label = solver.test_nets[0].blobs['label'].data 
        for test_it in range(100): 
            solver.test_nets[0].forward() 
            # Positive values map to label 1, while negative values map to label 0 
            for i in range(len(data)): 
                for j in range(len(data[i])): 
                    if data[i][j] > 0 and label[i][j] == 1: 
                        correct += 1 
                    elif data[i][j] <= 0 and label[i][j] == 0: 
                        correct += 1 
        test_acc[int(it / test_interval)] = correct * 1.0 / (len(data) * len(data[0]) * 100) 

# Train and test done, outputting convergence graph 
fig, ax1 = subplots() 
ax2 = ax1.twinx() 
ax1.plot(arange(niter), train_loss) 
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r') 
ax1.set_xlabel('iteration') 
ax1.set_ylabel('train loss') 
ax2.set_ylabel('test accuracy') 
fig.savefig('converge.png') 

# Check the result of the last batch 
print solver.test_nets[0].blobs['ip3'].data 
print solver.test_nets[0].blobs['label'].data 

The h5list files simply contain the paths of the H5 files, one per line:

train.h5list

/home/foo/bar/project/train.h5 

test.h5list

/home/foo/bar/project/test.h5 
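Since these list files are plain text, they can be generated with a few lines of Python. This is just a sketch; the paths are the example paths above and will differ on your machine, and HDF5Data will read every file listed, so you can list several H5 files per set.

```python
# Map each list file to the HDF5 files it should reference (example paths)
h5_files = {
    'train.h5list': ['/home/foo/bar/project/train.h5'],
    'test.h5list': ['/home/foo/bar/project/test.h5'],
}

# Write one HDF5 file path per line
for list_name, paths in h5_files.items():
    with open(list_name, 'w') as f:
        f.write('\n'.join(paths) + '\n')
```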

And the solver:

auto_solver.prototxt

train_net: "auto_train.prototxt" 
test_net: "auto_test.prototxt" 
test_iter: 10 
test_interval: 20 
base_lr: 0.01 
momentum: 0.9 
weight_decay: 0.0005 
lr_policy: "inv" 
gamma: 0.0001 
power: 0.75 
display: 100 
max_iter: 10000 
snapshot: 5000 
snapshot_prefix: "sed" 
solver_mode: GPU 

Convergence graph: [converge.png, the train-loss/test-accuracy plot saved by the script above]

Result of the last batch:

 
[[ 35.91593933 -37.46276474 -6.2579031 -6.30313492] 
[ 42.69248581 -43.00864792 13.19664764 -3.35134125] 
[ -1.36403108 1.38531208 2.77786589 -0.34310576] 
[ 2.91686511 -2.88944006 4.34043217 0.32656598] 
... 
[ 35.91593933 -37.46276474 -6.2579031 -6.30313492] 
[ 42.69248581 -43.00864792 13.19664764 -3.35134125] 
[ -1.36403108 1.38531208 2.77786589 -0.34310576] 
[ 2.91686511 -2.88944006 4.34043217 0.32656598]] 

[[ 1. 0. 0. 0.] 
[ 1. 0. 1. 0.] 
[ 0. 1. 1. 0.] 
[ 1. 0. 1. 1.] 
... 
[ 1. 0. 0. 0.] 
[ 1. 0. 1. 0.] 
[ 0. 1. 1. 0.] 
[ 1. 0. 1. 1.]] 
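The printed predictions can be checked against the labels with the same thresholding rule the training loop uses (a positive logit maps to label 1, a non-positive one to label 0). A small NumPy sketch over the first four rows shown above:

```python
import numpy as np

# First four prediction rows (ip3 output) and labels from the batch printed above
logits = np.array([
    [35.91593933, -37.46276474, -6.2579031, -6.30313492],
    [42.69248581, -43.00864792, 13.19664764, -3.35134125],
    [-1.36403108, 1.38531208, 2.77786589, -0.34310576],
    [2.91686511, -2.88944006, 4.34043217, 0.32656598],
])
labels = np.array([
    [1., 0., 0., 0.],
    [1., 0., 1., 0.],
    [0., 1., 1., 0.],
    [1., 0., 1., 1.],
])

# Positive logit -> predicted label 1, otherwise 0
predictions = (logits > 0).astype(float)
accuracy = (predictions == labels).mean()
print(accuracy)  # 1.0 -- every element of these four rows is predicted correctly
```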

I feel this code still has many things that could be improved. Any suggestions are appreciated.

Could you explain how the labels are defined? Is it a binary system? –

Yes, I only tried binary labels: ON is 1 and OFF is 0. –

Which Caffe version is this? I get an error 'ImportError: cannot import name layers'. – tidy

Answer

Your accuracy layer makes no sense.

The way the accuracy layer works: it expects two inputs,
(i) a vector of predicted probabilities, and
(ii) the corresponding scalar integer ground-truth label.
The accuracy layer then checks whether the probability of the predicted label is indeed the maximal one (or within the top_k).
Therefore, if you are classifying into C different classes, your inputs will be an N-by-C array (where N is the batch size) of predicted probabilities of the N samples belonging to each of the C classes, plus N integer labels.

The way it is defined in your net, you feed the accuracy layer N-by-4 predictions and N-by-4 labels; this makes no sense to Caffe.
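As an illustration of the expected shapes, the single-label top-1 check the accuracy layer performs can be sketched in NumPy (my own sketch, not Caffe code):

```python
import numpy as np

# N-by-C predicted scores for N = 3 samples, C = 4 classes
probs = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.2, 0.5, 0.1],
])
# N scalar integer labels in {0, ..., C-1}, NOT N-by-C vectors
labels = np.array([1, 0, 2])

# Top-1 accuracy: is the arg-max of each row the ground-truth class?
accuracy = (probs.argmax(axis=1) == labels).mean()
print(accuracy)  # 1.0
```

With N-by-4 vector labels there is no single ground-truth class index per sample, so this check cannot be applied.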

It seems I misunderstood the accuracy layer. But if I remove it, the loss layer returns the same error. Maybe I need another loss layer for vector labels? I cannot find a list of the available loss layers. –

I tried EuclideanLoss (without the accuracy layer), but it returns lots of nan. –

@RomulusUrakagiTs'ai Is it 'NaN' from the start? It might be that your loss is so high that your gradients "explode" and throw your training off. Try reducing the 'loss_weight' of the loss layer *significantly*. – Shai
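For reference, loss_weight is set directly on the loss layer. A sketch in prototxt of Shai's suggestion, assuming the layer names from the net above; the value 0.01 is an arbitrary example, not a recommendation:

```protobuf
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "ip3"
  bottom: "label"
  top: "loss"
  # Scale this loss's contribution to the gradients; the default is 1
  loss_weight: 0.01
}
```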