
Change the bind IP used on port 7077 - Apache Spark

Can Spark be configured so that port 7077 binds to 0.0.0.0 instead of the address 127.0.1.1, in the same way it already binds 0.0.0.0 for port 8080?

netstat -pln 
(Not all processes could be identified, non-owned process info 
will not be shown, you would have to be root to see it all.) 
Active Internet connections (only servers) 
Proto Recv-Q Send-Q Local Address   Foreign Address   State  PID/Program name 
tcp  0  0 127.0.1.1:7077   0.0.0.0:*    LISTEN  2864/java 
tcp  0  0 0.0.0.0:8080   0.0.0.0:*    LISTEN  2864/java 
tcp  0  0 127.0.1.1:6066   0.0.0.0:*    LISTEN  2864/java 
tcp  0  0 0.0.0.0:22    0.0.0.0:*    LISTEN  - 
udp  0  0 0.0.0.0:68    0.0.0.0:*       - 
udp  0  0 192.168.192.22:123  0.0.0.0:*       - 
udp  0  0 127.0.0.1:123   0.0.0.0:*       - 
udp  0  0 0.0.0.0:123    0.0.0.0:*       - 
udp  0  0 0.0.0.0:21415   0.0.0.0:*       - 
Active UNIX domain sockets (only servers) 
Proto RefCnt Flags  Type  State   I-Node PID/Program name Path 
unix 2  [ ACC ]  STREAM  LISTENING  7195  -     /var/run/dbus/system_bus_socket 
unix 2  [ ACC ]  SEQPACKET LISTENING  405  -     /run/udev/control 

The reason I ask is that I cannot get the workers to connect to the master node, and I think the problem is that the master's IP cannot be reached.

Error when trying to connect a worker (slave) to the master:

15/04/02 21:58:18 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://[email protected]:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: raspberrypi/192.168.192.22:7077 
15/04/02 21:58:18 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [org.apache.spark.deploy.DeployMessages$RegisterWorker] from Actor[akka://sparkWorker/user/Worker#1677101765] to Actor[akka://sparkWorker/deadLetters] was not delivered. [10] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'. 
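
Before changing any configuration, it can help to confirm the refusal from the worker's side. A minimal check, assuming netcat is installed on the worker:

# Run on the worker. While the master is bound to 127.0.1.1 (a loopback 
# alias that other hosts cannot reach), this should print "Connection refused". 
nc -zv 192.168.192.22 7077 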

Answer


In spark-env.sh you can set SPARK_MASTER_IP=<ip>.

A hostname also works fine (via SPARK_STANDALONE_MASTER=<hostname>); just make sure the workers connect to exactly the same hostname the master is bound to (i.e. the spark:// address shown in the Spark master UI).
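
For example, a minimal conf/spark-env.sh on the master could look like the sketch below; the address 192.168.192.22 is taken from the netstat output above, and the master needs a restart (sbin/stop-master.sh, then sbin/start-master.sh) for the change to take effect:

# conf/spark-env.sh on the master node 
# Bind the master to the LAN-facing address instead of the 127.0.1.1 
# alias that the hostname resolves to. 
export SPARK_MASTER_IP=192.168.192.22 

Workers then register against exactly that address: spark://192.168.192.22:7077.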


Do you mean adding it in this format (excluding the quotes): "SPARK_MASTER_IP "? – 2015-04-02 21:41:55


I've tried passing the master address manually with "./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.192.22:7077", but it won't connect, since the master is not accepting connections on that IP, as you can see from the problem above. – 2015-04-02 21:44:56


Thank you very much. On the master node I added "export SPARK_STANDALONE_MASTER=raspberrypi export SPARK_MASTER_IP=192.168.192.22" to /conf/start-env.sh. Spark then registered the master in the UI as "URL: spark://192.168.192.22:7077", and the worker connected to the master using the command ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.192.22:7077. So it looks good. Thanks again, this really moved me forward. Not sure whether I need both parameters in start-env.sh, but I'll leave them as they are. Thanks – 2015-04-02 22:32:33
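
As a final check, the same netstat command from the question can confirm the new binding. Roughly, assuming the setup described above:

# Run on the master after the restart; port 7077 should now show the 
# LAN address rather than 127.0.1.1, e.g.: 
# tcp  0  0 192.168.192.22:7077  0.0.0.0:*  LISTEN  <pid>/java 
netstat -pln | grep 7077 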