2016-07-25

docker 1.12 swarm mode: how to connect to another container on the overlay network, and how to use the load balancing?

I am using docker-machine on macOS, and created a swarm-mode cluster like this:

➜ docker-machine create --driver virtualbox docker1 
➜ docker-machine create --driver virtualbox docker2 
➜ docker-machine create --driver virtualbox docker3 

➜ config docker-machine ls 
NAME  ACTIVE DRIVER  STATE  URL       SWARM DOCKER  ERRORS 
docker1 -  virtualbox Running tcp://192.168.99.100:2376   v1.12.0-rc4 
docker2 -  virtualbox Running tcp://192.168.99.101:2376   v1.12.0-rc4 
docker3 -  virtualbox Running tcp://192.168.99.102:2376   v1.12.0-rc4 


➜ config docker-machine ssh docker1 
docker@docker1:~$ docker swarm init 
No --secret provided. Generated random secret: 
    b0wcyub7lbp8574mk1oknvavq 

Swarm initialized: current node (8txt830ivgrxxngddtx7k4xe4) is now a manager. 

To add a worker to this swarm, run the following command: 
    docker swarm join --secret b0wcyub7lbp8574mk1oknvavq \ 
    --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 \ 
    10.0.2.15:2377 


# on host docker2 and host docker3, I run the command to join the cluster: 

docker@docker2:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377 
This node joined a Swarm as a worker. 

docker@docker3:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377 
This node joined a Swarm as a worker. 

# on docker1: 
docker@docker1:~$ docker node ls 
ID       HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS 
8txt830ivgrxxngddtx7k4xe4 * docker1 Accepted Ready Active  Leader 
9fliuzb9zl5jcqzqucy9wfl4y docker2 Accepted Ready Active 
c4x8rbnferjvr33ff8gh4c6cr docker3 Accepted Ready Active 

Then I created the network mynet with the overlay driver on docker1. First question: but I can't see the network on the other docker hosts:

docker@docker1:~$ docker network create --driver overlay mynet 
a1v8i656el5d3r45k985cn44e 
docker@docker1:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
5ec55ffde8e4  bridge    bridge    local 
83967a11e3dd  docker_gwbridge  bridge    local 
7f856c9040b3  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
a1v8i656el5d  mynet    overlay    swarm 
829a614aa278  none    null    local 

docker@docker2:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
da07b3913bd4  bridge    bridge    local 
7a2e627634b9  docker_gwbridge  bridge    local 
e8971c2b5b21  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
c37de5447a14  none    null    local 

docker@docker3:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
06eb8f0bad11  bridge    bridge    local 
fb5e3bcae41c  docker_gwbridge  bridge    local 
e167d97cd07f  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
6540ece8e146  none    null    local 

Then I created an nginx service on docker1, which echoes its hostname on the default index page:

docker@docker1:~$ docker service create --name nginx --network mynet --replicas 1 -p 80:80 dhub.yunpro.cn/shenshouer/nginx:hostname 
9d7xxa8ukzo7209r30f0rmcut 
docker@docker1:~$ docker service tasks nginx 
ID       NAME  SERVICE IMAGE          LAST STATE    DESIRED STATE NODE 
0dvgh9xfwz7301jmsh8yc5zpe nginx.1 nginx dhub.yunpro.cn/shenshouer/nginx:hostname Running 12 seconds ago Running  docker3 

Second question: I cannot access the service via docker1's IP. I only get a response when hitting docker3's IP.

➜ tools curl 192.168.99.100 
curl: (52) Empty reply from server 
➜ tools curl 192.168.99.102 
fda9fb58f9d4 

So I think there is no load balancing. How do I use the built-in load balancing?

Then I created another service from the busybox image on the same network, to test ping:

docker@docker1:~$ docker service create --name busybox --network mynet --replicas 1 busybox sleep 3000 
akxvabx66ebjlak77zj6x1w4h 
docker@docker1:~$ docker service tasks busybox 
ID       NAME  SERVICE IMAGE LAST STATE    DESIRED STATE NODE 
9yc3svckv98xtmv1d0tvoxbeu busybox.1 busybox busybox Running 11 seconds ago Running  docker1 

# on host docker3. I got the container name and the container IP to ping test: 

docker@docker3:~$ docker ps 
CONTAINER ID  IMAGE          COMMAND     CREATED    STATUS    PORTS    NAMES 
fda9fb58f9d4  dhub.yunpro.cn/shenshouer/nginx:hostname "sh -c /entrypoint.sh" 7 minutes ago  Up 7 minutes  80/tcp, 443/tcp  nginx.1.0dvgh9xfwz7301jmsh8yc5zpe 

docker@docker3:~$ docker inspect fda9fb58f9d4 
... 

      "Networks": { 
       "ingress": { 
        "IPAMConfig": { 
         "IPv4Address": "10.255.0.7" 
        }, 
        "Links": null, 
        "Aliases": [ 
         "fda9fb58f9d4" 
        ], 
        "NetworkID": "bpoqtk71o6qor8t2gyfs07yfc", 
        "EndpointID": "98c98a9cc0fcc71511f0345f6ce19cc9889e2958d9345e200b3634ac0a30edbb", 
        "Gateway": "", 
        "IPAddress": "10.255.0.7", 
        "IPPrefixLen": 16, 
        "IPv6Gateway": "", 
        "GlobalIPv6Address": "", 
        "GlobalIPv6PrefixLen": 0, 
        "MacAddress": "02:42:0a:ff:00:07" 
       }, 
       "mynet": { 
        "IPAMConfig": { 
         "IPv4Address": "10.0.0.3" 
        }, 
        "Links": null, 
        "Aliases": [ 
         "fda9fb58f9d4" 
        ], 
        "NetworkID": "a1v8i656el5d3r45k985cn44e", 
        "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce", 
        "Gateway": "", 
        "IPAddress": "10.0.0.3", 
        "IPPrefixLen": 24, 
        "IPv6Gateway": "", 
        "GlobalIPv6Address": "", 
        "GlobalIPv6PrefixLen": 0, 
        "MacAddress": "02:42:0a:00:00:03" 
       } 
      } 
     } 
    } 
] 


# on host docker1 : 
docker@docker1:~$ docker ps 
CONTAINER ID  IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES 
b94716e9252e  busybox:latest  "sleep 3000"  2 minutes ago  Up 2 minutes       busybox.1.9yc3svckv98xtmv1d0tvoxbeu 
docker@docker1:~$ docker exec -it b94716e9252e ping nginx.1.0dvgh9xfwz7301jmsh8yc5zpe 
ping: bad address 'nginx.1.0dvgh9xfwz7301jmsh8yc5zpe' 
docker@docker1:~$ docker exec -it b94716e9252e ping 10.0.0.3 
PING 10.0.0.3 (10.0.0.3): 56 data bytes 
90 packets transmitted, 0 packets received, 100% packet loss 

Third question: how can containers on the same network communicate with each other?

And the network mynet looks like this:

docker@docker1:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
5ec55ffde8e4  bridge    bridge    local 
83967a11e3dd  docker_gwbridge  bridge    local 
7f856c9040b3  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
a1v8i656el5d  mynet    overlay    swarm 
829a614aa278  none    null    local 
docker@docker1:~$ docker network inspect mynet 
[ 
    { 
     "Name": "mynet", 
     "Id": "a1v8i656el5d3r45k985cn44e", 
     "Scope": "swarm", 
     "Driver": "overlay", 
     "EnableIPv6": false, 
     "IPAM": { 
      "Driver": "default", 
      "Options": null, 
      "Config": [ 
       { 
        "Subnet": "10.0.0.0/24", 
        "Gateway": "10.0.0.1" 
       } 
      ] 
     }, 
     "Internal": false, 
     "Containers": { 
      "b94716e9252e6616f0f4c81e0c7ef674d7d5f4fafe931953fced9ef059faeb5f": { 
       "Name": "busybox.1.9yc3svckv98xtmv1d0tvoxbeu", 
       "EndpointID": "794be0e92b34547e44e9a5e697ab41ddd908a5db31d0d31d7833c746395534f5", 
       "MacAddress": "02:42:0a:00:00:05", 
       "IPv4Address": "10.0.0.5/24", 
       "IPv6Address": "" 
      } 
     }, 
     "Options": { 
      "com.docker.network.driver.overlay.vxlanid_list": "257" 
     }, 
     "Labels": {} 
    } 
] 


docker@docker2:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
da07b3913bd4  bridge    bridge    local 
7a2e627634b9  docker_gwbridge  bridge    local 
e8971c2b5b21  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
c37de5447a14  none    null    local 

docker@docker3:~$ docker network ls 
NETWORK ID   NAME    DRIVER    SCOPE 
06eb8f0bad11  bridge    bridge    local 
fb5e3bcae41c  docker_gwbridge  bridge    local 
e167d97cd07f  host    host    local 
bpoqtk71o6qo  ingress    overlay    swarm 
a1v8i656el5d  mynet    overlay    swarm 
6540ece8e146  none    null    local 

docker@docker3:~$ docker network inspect mynet 
[ 
    { 
     "Name": "mynet", 
     "Id": "a1v8i656el5d3r45k985cn44e", 
     "Scope": "swarm", 
     "Driver": "overlay", 
     "EnableIPv6": false, 
     "IPAM": { 
      "Driver": "default", 
      "Options": null, 
      "Config": [ 
       { 
        "Subnet": "10.0.0.0/24", 
        "Gateway": "10.0.0.1" 
       } 
      ] 
     }, 
     "Internal": false, 
     "Containers": { 
      "fda9fb58f9d46317ef1df60e597bd14214ec3fac43e32f4b18a39bb92925aa7e": { 
       "Name": "nginx.1.0dvgh9xfwz7301jmsh8yc5zpe", 
       "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce", 
       "MacAddress": "02:42:0a:00:00:03", 
       "IPv4Address": "10.0.0.3/24", 
       "IPv6Address": "" 
      } 
     }, 
     "Options": { 
      "com.docker.network.driver.overlay.vxlanid_list": "257" 
     }, 
     "Labels": {} 
    } 
] 

So the fourth question: is there a built-in KV store?

Answer


Question 1: The networks on the other hosts are created on demand: when the swarm schedules a task on a host, the network is created on that host.
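One way to see this (a sketch, assuming the nginx service from the question is still running) is to scale the service so a task lands on every node; mynet should then appear on each host:

```shell
# Scale the existing service so tasks are scheduled across all three nodes
docker service scale nginx=3

# 'mynet' should now be listed on the workers too, created on demand
docker-machine ssh docker2 docker network ls
docker-machine ssh docker3 docker network ls
```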

Question 2: Load balancing works out of the box, so there may be something wrong with your docker swarm cluster. You need to check the iptables and ipvs rules.
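For example (a sketch; ipvsadm may not be installed in the boot2docker image), inside one of the VMs you could check that the ip_vs kernel module is loaded and that the mangle-table rules for the ingress network exist:

```shell
# Check whether the IPVS kernel module is loaded; try to load it if not
# (a missing ip_vs module breaks the swarm routing-mesh load balancing)
lsmod | grep ip_vs || sudo modprobe ip_vs

# Inspect the mangle-table rules that mark traffic for the ingress network
sudo iptables -t mangle -L -n

# If ipvsadm is available, list the virtual services IPVS knows about
sudo ipvsadm -L -n
```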

Question 3: Containers on the same overlay network (mynet in your case) can talk to each other, and docker has a built-in DNS server that resolves container names to IP addresses.
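So rather than the task name, you can ping the service name (a sketch, reusing the busybox container ID from the question):

```shell
# The embedded DNS server resolves the service name 'nginx' on mynet,
# so this should succeed from inside the busybox task on docker1:
docker exec -it b94716e9252e ping -c 3 nginx
```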

Question 4: Yes.


Thank you for your reply. I found the problem: the ipvs kernel module was not loaded in the boot2docker OS, so load balancing was broken. After I changed the OS to debian there was no problem. – sope
