I am trying to configure spark-jobserver for Mesos cluster deploy mode. In the jobserver configuration I have set spark.master = "mesos://mesos-master:5050", but I am running into java.lang.UnsatisfiedLinkError: no mesos in java.library.path.
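
For reference, a minimal sketch of how that setting can look in the job server's HOCON configuration (only spark.master comes from my setup; the surrounding keys are illustrative defaults, not my actual config):

spark {
  master = "mesos://mesos-master:5050"   # Mesos master URL
  jobserver {
    port = 8090                          # illustrative default
  }
}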

When I try to create a context on the job server, it fails with the following exception:

[2017-04-19 14:09:42,346] ERROR .jobserver.JobManagerActor [] [akka://JobServer/user/jobManager-42-881e-b37be6e443dd] - Failed to create context test-context, shutting down actor 
java.lang.UnsatisfiedLinkError: no mesos in java.library.path 
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) 
    at java.lang.Runtime.loadLibrary0(Runtime.java:870) 
    at java.lang.System.loadLibrary(System.java:1122) 
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:54) 
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79) 
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2485) 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:492) 
    at spark.jobserver.context.DefaultSparkContextFactory$$anon$1.<init>(SparkContextFactory.scala:119) 
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:119) 
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:114) 
    at spark.jobserver.context.SparkContextFactory$class.makeContext(SparkContextFactory.scala:63) 
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:114) 
    at spark.jobserver.JobManagerActor$$anonfun$wrappedReceive$1.applyOrElse(JobManagerActor.scala:135) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at spark.jobserver.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorStack.scala:33) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at spark.jobserver.common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:25) 
    at spark.jobserver.common.akka.Slf4jLogging$class.spark$jobserver$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:34) 
    at spark.jobserver.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogging.scala:24) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at spark.jobserver.common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:23) 
    at akka.actor.Actor$class.aroundReceive(Actor.scala:484) 
    at spark.jobserver.common.akka.InstrumentedActor.aroundReceive(InstrumentedActor.scala:8) 
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526) 
    at akka.actor.ActorCell.invoke(ActorCell.scala:495) 
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257) 
    at akka.dispatch.Mailbox.run(Mailbox.scala:224) 
    at akka.dispatch.Mailbox.exec(Mailbox.scala:234) 
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 

I have also set the environment variable MESOS_NATIVE_JAVA_LIBRARY to point to the correct libmesos.so location. In addition, I can successfully submit jobs with spark-submit from the command line:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master mesos://mesos-master:5050 /pathto/spark/examples.jar 100 

This means that my Mesos cluster setup is working.
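
For reference, the environment variable was exported roughly like this before starting the job server; the libmesos.so path below is an assumed example, not my actual location:

export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so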

Am I missing any configuration that needs to be done specifically for spark-jobserver?

Answer


It turned out that I had set the MESOS_NATIVE_JAVA_LIBRARY env variable only for my own user, while I was running the job server with sudo privileges, so the variable was not visible to the job server process.
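
A minimal sketch of two ways to make the variable visible when starting the job server under sudo (server_start.sh is the stock spark-jobserver start script; the library path is an assumption):

# Option 1: preserve the invoking user's environment
sudo -E ./server_start.sh

# Option 2: pass the variable explicitly on the sudo command line
sudo MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so ./server_start.sh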
