
I run out of memory when I try to convert a Spark DataFrame into an H2O frame with Sparkling Water.

For the Spark setup I use

.setMaster("local[1]") 
.set("spark.driver.memory", "4g") 
.set("spark.executor.memory", "4g") 

and I have tried both H2O 2.0.2 and H2O 1.6.4. The conversion itself is just:

val trainsetH2O: H2OFrame = trainsetH 
val testsetH2O: H2OFrame = testsetH 
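
For context, here is a minimal sketch of how the pieces presumably fit together. The H2OContext creation and the trainsetH/testsetH DataFrames are not shown in the question, so their construction here is assumed; the assignment relies on Sparkling Water's implicit DataFrame-to-H2OFrame conversion:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.h2o._

val conf = new SparkConf()
  .setMaster("local[1]")
  .set("spark.driver.memory", "4g")
  .set("spark.executor.memory", "4g")

val spark = SparkSession.builder().config(conf).getOrCreate()
val h2oContext = H2OContext.getOrCreate(spark)   // assumed: Sparkling Water 2.0.x API
import h2oContext.implicits._                    // enables implicit DataFrame -> H2OFrame conversion

val trainsetH = spark.range(0, 1000).toDF("x")   // stand-ins for the question's DataFrames
val testsetH  = spark.range(0, 1000).toDF("x")

val trainsetH2O: H2OFrame = trainsetH            // implicit conversion materializes the data in H2O
val testsetH2O: H2OFrame  = testsetH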

The error message is:

ERROR Executor: Exception in task 49.0 in stage 3.0 (TID 62) 
java.lang.OutOfMemoryError: PermGen space 
    at sun.misc.Unsafe.defineClass(Native Method) 
    at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:63) 
    at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:399) 
    at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:396) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:395) 
    at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:113) 
    at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:331) 
    at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1376) 
    at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:72) 
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:493) 
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468) 
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365) 
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602) 
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622) 
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) 

Where am I going wrong? I get the same error in both cases. The data in trainset and testset is less than 10K, so it is actually quite small.

Answer


The problem is that you have run out of PermGen memory, which is not the same memory space as the one you usually configure with

.set("spark.driver.memory", "4g") .set("spark.executor.memory", "4g")

PermGen is the part of the JVM's memory that holds the classes loaded by the driver and the executors. To increase it for the Spark driver and executors, invoke your spark-submit or spark-shell command with the following parameters:

--conf spark.driver.extraJavaOptions="-XX:MaxPermSize=384m" --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=384m"
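
If the application builds its SparkConf programmatically (as in the question) rather than going through spark-submit, the executor-side option can also be set directly on the conf; a sketch, assuming the same 384m value. Note that spark.driver.extraJavaOptions cannot be set this way in local/client mode, because the driver JVM is already running at that point, so for local[1] the -XX:MaxPermSize flag has to be passed to the launching JVM itself (or via --driver-java-options).

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setMaster("local[1]")
  .set("spark.driver.memory", "4g")
  .set("spark.executor.memory", "4g")
  // raises PermGen on the executor JVMs; has no effect on an already-running driver
  .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=384m")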