
Setting up and configuring JanusGraph for a Spark cluster and Cassandra

I am running JanusGraph (0.1.0) with Spark (1.6.1) on a single machine. My configuration is set up as described here. When I access the graph in the Gremlin console using SparkGraphComputer, it is always empty. I cannot find any errors in the log files; it is simply an empty graph.

Is anyone using JanusGraph with Spark who could share their configuration and properties?

Using JanusGraph directly, I get the expected output:

gremlin> graph=JanusGraphFactory.open('conf/test.properties') 
==>standardjanusgraph[cassandrathrift:[127.0.0.1]] 
gremlin> g=graph.traversal() 
==>graphtraversalsource[standardjanusgraph[cassandrathrift:[127.0.0.1]], standard] 
gremlin> g.V().count() 
14:26:10 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes 
==>1000001 
gremlin> 

Using HadoopGraph with Spark as the GraphComputer, the graph is empty:

gremlin> graph=GraphFactory.open('conf/test.properties') 
==>hadoopgraph[cassandrainputformat->gryooutputformat] 
gremlin> g=graph.traversal().withComputer(SparkGraphComputer) 
==>graphtraversalsource[hadoopgraph[cassandrainputformat->gryooutputformat], sparkgraphcomputer] 
gremlin> g.V().count() 
      ==>0==============================================> (14 + 1)/15] 

My conf/test.properties:

# 
# Hadoop Graph Configuration 
# 
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph 
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.cassandra.CassandraInputFormat 
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat 
gremlin.hadoop.memoryOutputFormat=org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat 
gremlin.hadoop.memoryOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat 

gremlin.hadoop.deriveMemory=false 
gremlin.hadoop.jarsInDistributedCache=true 
gremlin.hadoop.inputLocation=none 
gremlin.hadoop.outputLocation=output 

# 
# Titan Cassandra InputFormat configuration 
# 
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift 
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1 
janusgraphmr.ioformat.conf.storage.keyspace=janusgraph 
storage.backend=cassandrathrift 
storage.hostname=127.0.0.1 
storage.keyspace=janusgraph 

# 
# Apache Cassandra InputFormat configuration 
# 
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner 
cassandra.input.keyspace=janusgraph 
cassandra.input.predicate=0c00020b0001000000000b000200000000020003000800047fffffff0000 
cassandra.input.columnfamily=edgestore 
cassandra.range.batch.size=2147483647 

# 
# SparkGraphComputer Configuration 
# 
spark.master=spark://127.0.0.1:7077 
spark.serializer=org.apache.spark.serializer.KryoSerializer 
spark.executor.memory=100g 

gremlin.spark.persistContext=true 
gremlin.hadoop.defaultGraphComputer=org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer 

HDFS seems to be configured correctly here:

gremlin> hdfs 
==>storage[DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_178390072_1, ugi=cassandra (auth:SIMPLE)]]] 

Answer


Try fixing these properties:

janusgraphmr.ioformat.conf.storage.keyspace=janusgraph 
storage.keyspace=janusgraph 

Replace them with:

janusgraphmr.ioformat.conf.storage.cassandra.keyspace=janusgraph 
storage.cassandra.keyspace=janusgraph 

The name of the default keyspace is janusgraph, so despite the errors in the property names you would not have noticed the problem unless you had loaded your data into a keyspace with a different name.
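
As a side note (not from the original answer): if you are unsure which keyspace your data actually ended up in, one way to check is to list the keyspaces and inspect the janusgraph keyspace with cqlsh, for example:

$ cqlsh 127.0.0.1
cqlsh> DESCRIBE KEYSPACES;
cqlsh> DESCRIBE KEYSPACE janusgraph;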

The latter properties are described in the Configuration Reference. Also, keep an eye on this open issue about improving the documentation for Hadoop-Graph usage.
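
For reference, here is a sketch of how the Cassandra-related section of conf/test.properties would look with the corrected property names applied (keyspace assumed to be the default janusgraph, as in the question):

#
# Titan Cassandra InputFormat configuration
#
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.cassandra.keyspace=janusgraph
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
storage.cassandra.keyspace=janusgraph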