
I get a fatal error when running k-means clustering with Mahout on Hadoop. After checking the error log, I thought it might be caused by the Hadoop native libraries, but after compiling Hadoop again myself and running the job, it still produces an hs_err_pid*.log file whose contents are shown below:

# 
# A fatal error has been detected by the Java Runtime Environment: 
# 
# SIGFPE (0x8) at pc=0x00002aae75c8168f, pid=30832, tid=1076017472 
# 
# JRE version: 6.0_29-b11 
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 compressed oops) 
# Problematic frame: 
# C [ld-linux-x86-64.so.2+0x868f] double+0xcf 
# 
# If you would like to submit a bug report, please visit: 
# http://java.sun.com/webapps/bugreport/crash.jsp 
# The crash happened outside the Java Virtual Machine in native code. 
# See problematic frame for where to report the bug. 
# 

--------------- T H R E A D --------------- 

Current thread (0x0000000040115000): JavaThread "main" [_thread_in_native, id=30863, stack(0x000000004012b000,0x000000004022c000)] 

siginfo:si_signo=SIGFPE: si_errno=0, si_code=1 (FPE_INTDIV), si_addr=0x00002aae75c8168f 

Registers: 
RAX=0x000000000f4d007f, RBX=0x0000000000000000, RCX=0x0000000040227d00, RDX=0x0000000000000000 
RSP=0x0000000040227ba0, RBP=0x0000000040227d40, RSI=0x000000000f4d007f, RDI=0x00002aaab90008f1 
R8 =0x00002aaab8e694c0, R9 =0x0000000000000000, R10=0x0000000000000000, R11=0xffffffffffffffff 
R12=0x0000000040227d00, R13=0x00002aaab8e69210, R14=0x0000000000000000, R15=0x00002aaab90002c0 
RIP=0x00002aae75c8168f, EFLAGS=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000000 
    TRAPNO=0x0000000000000000 

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) 
j java.lang.ClassLoader$NativeLibrary.load(Ljava/lang/String;)V+0 
j java.lang.ClassLoader.loadLibrary0(Ljava/lang/Class;Ljava/io/File;)Z+300 
j java.lang.ClassLoader.loadLibrary(Ljava/lang/Class;Ljava/lang/String;Z)V+347 
j java.lang.Runtime.loadLibrary0(Ljava/lang/Class;Ljava/lang/String;)V+54 
j java.lang.System.loadLibrary(Ljava/lang/String;)V+7 
j org.apache.hadoop.util.NativeCodeLoader.<clinit>()V+25 
v ~StubRoutines::call_stub 
j org.apache.hadoop.io.compress.zlib.ZlibFactory.<clinit>()V+13 
v ~StubRoutines::call_stub 
j org.apache.hadoop.io.compress.DefaultCodec.getCompressorType()Ljava/lang/Class;+4 
j org.apache.hadoop.io.compress.CodecPool.getCompressor(Lorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/io/compress/Compressor;+4 
j org.apache.hadoop.io.compress.CodecPool.getCompressor(Lorg/apache/hadoop/io/compress/CompressionCodec;)Lorg/apache/hadoop/io/compress/Compressor;+2 
j org.apache.hadoop.io.SequenceFile$Writer.init(Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/FSDataOutputStream;Ljava/lang/Class;Ljava/lang/Class;ZLorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/io/SequenceFile$Metadata;)V+121 
j org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/Path;Ljava/lang/Class;Ljava/lang/Class;ISJLorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/util/Progressable;Lorg/apache/hadoop/io/SequenceFile$Metadata;)V+30 
j org.apache.hadoop.io.SequenceFile.createWriter(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/Path;Ljava/lang/Class;Ljava/lang/Class;ISJLorg/apache/hadoop/io/SequenceFile$CompressionType;Lorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/util/Progressable;Lorg/apache/hadoop/io/SequenceFile$Metadata;)Lorg/apache/hadoop/io/SequenceFile$Writer;+100 
j org.apache.hadoop.io.SequenceFile.createWriter(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/Path;Ljava/lang/Class;Ljava/lang/Class;Lorg/apache/hadoop/io/SequenceFile$CompressionType;)Lorg/apache/hadoop/io/SequenceFile$Writer;+43 
j org.apache.hadoop.io.SequenceFile.createWriter(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/Path;Ljava/lang/Class;Ljava/lang/Class;)Lorg/apache/hadoop/io/SequenceFile$Writer;+10 
j org.apache.mahout.clustering.kmeans.RandomSeedGenerator.buildRandom(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/fs/Path;ILorg/apache/mahout/common/distance/DistanceMeasure;)Lorg/apache/hadoop/fs/Path;+101 
j org.apache.mahout.clustering.kmeans.KMeansDriver.run([Ljava/lang/String;)I+264 
j org.apache.hadoop.util.ToolRunner.run(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/util/Tool;[Ljava/lang/String;)I+38 
j org.apache.mahout.clustering.kmeans.KMeansDriver.main([Ljava/lang/String;)V+15 
v ~StubRoutines::call_stub 
j sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 
j sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+87 
j sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 
j java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+161 
j org.apache.hadoop.util.RunJar.main([Ljava/lang/String;)V+538 
v ~StubRoutines::call_stub 



Can anyone help me or give me some advice?

    xianwu 

It can't find the zlib library; is it installed? –


I would agree with @ThomasJungblut, but I think the libraries are there (otherwise you would see a different error message); they may just have been corrupted somehow. Have you tried recompiling the native libraries recently? –

Answers


I can help you make some sense of this problem:

If you walk up your stack trace, the last call that goes through a Hadoop library is the following:

org.apache.hadoop.util.NativeCodeLoader 

Now if you look at the source code here, you will see that the code tries to load the native Hadoop library as follows:

try { 
    System.loadLibrary("hadoop"); 
    LOG.info("Loaded the native-hadoop library"); 
    nativeCodeLoaded = true; 
} catch (Throwable t) { 
    // Ignore failure to load 
    LOG.debug("Failed to load native-hadoop with error: " + t); 
    LOG.debug("java.library.path=" + System.getProperty("java.library.path")); 
} 
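
To confirm that the crash really happens inside this System.loadLibrary call, a minimal standalone sketch along the following lines can trigger the same load outside of any MapReduce job. The class name and the library path are assumptions; point java.library.path at wherever your libhadoop.so actually lives:

// Hypothetical standalone test, not part of Hadoop: it repeats the same
// loadLibrary("hadoop") call that NativeCodeLoader makes in its static block.
// Run with: java -Djava.library.path=<dir containing libhadoop.so> LoadNativeHadoop
public class LoadNativeHadoop { 
    public static void main(String[] args) { 
        System.out.println("java.library.path = " + System.getProperty("java.library.path")); 
        try { 
            System.loadLibrary("hadoop"); 
            System.out.println("Loaded the native-hadoop library"); 
        } catch (Throwable t) { 
            // An ordinary UnsatisfiedLinkError lands here; the SIGFPE in the
            // crash log kills the JVM before any catch block can run.
            System.out.println("Failed to load native-hadoop: " + t); 
        } 
    } 
} 

If this tiny program also dies with an hs_err_pid*.log, the problem is in the native library or the JVM, not in Mahout.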

Now in your crash log you can see exactly the same thing: the Java runtime was trying to load a library when the crash occurred:

java.lang.ClassLoader$NativeLibrary.load(Ljava/lang/String;)V+0 
java.lang.ClassLoader.loadLibrary0(Ljava/lang/Class;Ljava/io/File;)Z+300 
java.lang.ClassLoader.loadLibrary(Ljava/lang/Class;Ljava/lang/String;Z)V+347 
java.lang.Runtime.loadLibrary0(Ljava/lang/Class;Ljava/lang/String;)V+54 
java.lang.System.loadLibrary(Ljava/lang/String;)V+7 
org.apache.hadoop.util.NativeCodeLoader.<clinit>()V+25 

So the problem lies with the Java runtime itself, and there is not much you can do about it. You can certainly log this error, move on, and try running your code again. Problems like this may show up from time to time; you just continue to the next step.
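
If you want to see what Hadoop itself concluded about the native library, a hedged sketch is to query NativeCodeLoader directly (isNativeCodeLoaded() is a real Hadoop API; note that merely touching the class runs the static initializer shown above, so if the JVM dies inside loadLibrary this check will never return):

import org.apache.hadoop.util.NativeCodeLoader; 

// Hypothetical check, assuming the Hadoop jars are on the classpath.
public class NativeCheck { 
    public static void main(String[] args) { 
        if (NativeCodeLoader.isNativeCodeLoaded()) { 
            System.out.println("native-hadoop loaded; native compression is available"); 
        } else { 
            System.out.println("native-hadoop not loaded; Hadoop falls back to built-in Java zlib"); 
        } 
    } 
} 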


Thank you for your reply, Avkash. I didn't change any code in Hadoop or Mahout; I only invoked the Mahout jar (mahout-core-0.5.job) on Hadoop (hadoop-0.20.203.0). As you said, the problem may happen from time to time, but I need to resolve it cleanly. I will try reinstalling Java or upgrading the existing Java to a newer version. Please give me your advice. Thanks. – xianwu


Try using Java 7, which is compatible with the most recently available Hadoop and Mahout binaries. – AvkashChauhan


That works: I switched to Java 7 and it runs fine now. Thank you, Avkash. – xianwu


This is really unrelated to Hadoop or Mahout. The JVM itself is crashing. It is either a bug in the JVM or a problem with your JVM installation. Try again, and/or reinstall Java.


Thank you for your answer, Owen. My command is: bin/hadoop jar mahout-core-0.5-job.jar org.apache.mahout.clustering.kmeans.KMeansDriver --input /user/ppstat/jixianwu/clustering_test/pipi_vectors --output /user/ppstat/jixianwu/clustering_test/kmeans-clusters2 --distanceMeasure org.apache.mahout.common.distance.CosineDistanceMeasure --clusters /user/ppstat/jixianwu/clustering_test/initial-clusters --numClusters 10 --convergenceDelta 0.01 --maxIter 20 --tempDir /user/ppstat/jixianwu/clustering_tes/temp/2012-05-02; so I agree with you, since I didn't change any code. I will try a newer JVM. Thx – xianwu
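
As a sanity check after switching JVMs, a tiny hypothetical sketch like the one below (not from this thread) prints which runtime actually executes your code; this helps verify that PATH and JAVA_HOME really point at the new Java 7 install rather than the 6.0_29 build named in the crash log:

// Hypothetical helper: print the identity of the running JVM.
public class JvmInfo { 
    public static void main(String[] args) { 
        System.out.println("java.version = " + System.getProperty("java.version")); // e.g. 1.6.0_29 in the crash log 
        System.out.println("java.vm.name = " + System.getProperty("java.vm.name")); 
        System.out.println("java.home    = " + System.getProperty("java.home")); 
    } 
} 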