
Why do I get the error "Size exceeds Integer.MAX_VALUE" when using Spark + Cassandra?

I have 7 Cassandra nodes (5 nodes with 32 cores and 32G memory, and 4 nodes with 4 cores and 64G memory) and have deployed Spark workers on this cluster; the Spark master is on the 8th node. I use the spark-cassandra-connector with them. My Cassandra cluster now holds nearly a billion records with 30 fields, and the Scala code I wrote includes the following snippet:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.cassandra.CassandraSQLContext

def startOneCache(): DataFrame = {
  val conf = new SparkConf(true)
    .set("spark.cassandra.connection.host", "192.168.0.184")
    .set("spark.cassandra.auth.username", "username")
    .set("spark.cassandra.auth.password", "password")
    .set("spark.driver.maxResultSize", "4G")
    .set("spark.executor.memory", "12G")
    .set("spark.cassandra.input.split.size_in_mb", "64")

  val sc = new SparkContext("spark://192.168.0.131:7077", "statistics", conf)
  val cc = new CassandraSQLContext(sc)
  val rdd: DataFrame = cc.sql(
    "select user_id,col1,col2,col3,col4,col5,col6,col7,col8 from user_center.users"
  ).limit(100000192)
  val rdd_cache: DataFrame = rdd.cache()

  rdd_cache.count()
  rdd_cache
}

On the Spark master I ran the code above; when the statement rdd_cache.count() was executed, I got an ERROR on the worker node 192.168.0.185:

16/03/08 15:38:57 INFO ShuffleBlockFetcherIterator: Started 4 remote fetches in 221 ms 
16/03/08 15:43:49 WARN MemoryStore: Not enough space to cache rdd_6_0 in memory! (computed 4.6 GB so far) 
16/03/08 15:43:49 INFO MemoryStore: Memory use = 61.9 KB (blocks) + 4.6 GB (scratch space shared across 1 tasks(s)) = 4.6 GB. Storage limit = 6.2 GB. 
16/03/08 15:43:49 WARN CacheManager: Persisting partition rdd_6_0 to disk instead. 
16/03/08 16:13:11 ERROR Executor: Managed memory leak detected; size = 4194304 bytes, TID = 24002 
16/03/08 16:13:11 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 24002) 
java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE 

I suspect that the final error Size exceeds Integer.MAX_VALUE was caused by the earlier warning 16/03/08 15:43:49 WARN MemoryStore: Not enough space to cache rdd_6_0 in memory! (computed 4.6 GB so far), but I don't know why. Should I set something larger than .set("spark.executor.memory", "12G")? What should I do to fix this?

Answer


No Spark shuffle block can be greater than 2 GB.

Spark uses ByteBuffer as the abstraction for storing blocks, and its size is limited by Integer.MAX_VALUE (about 2 billion bytes).

A low number of partitions can lead to large shuffle blocks. To resolve this issue, try to increase the number of partitions using rdd.repartition() or rdd.coalesce(), as in the sketch below.
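A minimal sketch of that step, assuming the DataFrame built in the question (the target of 2000 partitions is only an illustrative value; choose it so that each partition stays well below 2 GB):

// Assumption: rdd is the DataFrame returned by cc.sql(...).limit(...) above.
// More partitions means smaller blocks per partition when caching or shuffling.
val repartitioned: DataFrame = rdd.repartition(2000)
val rdd_cache: DataFrame = repartitioned.cache()
rdd_cache.count()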

If that does not help, it means that at least one of your partitions is still too big, and you may need a more sophisticated approach to make it smaller, for example using randomness to equalize the distribution of the RDD data among the individual partitions.
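One possible way to do that (not from the original answer; a hypothetical sketch using the DataFrame API, where repartitioning by a column expression requires Spark 1.6+, and both the salt column name and the bucket count of 2000 are illustrative):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, floor, rand}

// Add a uniformly random salt column and repartition on it, so that no single
// partition holds a disproportionate share of the rows; then drop the salt.
def evenOut(df: DataFrame, buckets: Int = 2000): DataFrame =
  df.withColumn("salt", floor(rand() * buckets))
    .repartition(buckets, col("salt"))
    .drop("salt")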


While this is a correct answer, some explanation would be useful. – zero323


'Rado Buransky', thank you! What should I do to find out how many partitions the current rdd has? In my Spark UI the total number of tasks is '23660'; is that the current number of partitions, and if so, how many partitions should I set to fix this error? – abelard2008


@abelard2008 Try this: https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/performance_optimization/how_many_partitions_does_an_rdd_have.html –
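For reference, a quick way to check the partition count from code (a sketch using the standard RDD API; the task count of a stage that scans the cached data in the Spark UI should match this number):

// Number of partitions backing the cached DataFrame.
println(rdd_cache.rdd.partitions.length)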
