Kubernetes version (use kubectl version): 
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"} 
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"} 

Environment: Kubernetes DNS works only from the node where the kube-dns pod is running; when the kube-dns pods are scaled up, DNS does not work from anywhere.

AWS, using VPC, all master and 2 nodes under same subnet 
RHEL 7.2 
Kernel (e.g. uname -a): Linux master.example.com 3.10.0-514.6.2.el7.x86_64 #1 SMP Fri Feb 17 19:21:31 EST 2017 x86_64 x86_64 x86_64 GNU/Linux 
Install tools: Kubernetes installed as per the Red Hat guideline, using a flannel network 
flannel-config.json: 

    {
      "Network": "10.20.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan",
        "VNI": 1
      }
    }
Kubernetes Cluster Network : 10.254.0.0/16 
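
(For reference: a config like this is normally loaded into etcd under the key flanneld watches, matching the FLANNEL_ETCD_KEY settings shown below. A minimal sketch, assuming the etcd v2 etcdctl and flannel-config.json in the current directory; the exact commands are illustrative, not from the original post:)

    # Store the flannel network config under the key flanneld reads
    etcdctl --endpoints http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379 \
      set /coreos.com/network/config "$(cat flannel-config.json)"

    # Verify it was written
    etcdctl --endpoints http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379 \
      get /coreos.com/network/config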

Other: What happened: We have a Kubernetes cluster installed with the following setup:

Master: ip-10-52-2-56.ap-northeast-2.compute.internal 
Node1: ip-10-52-2-59.ap-northeast-2.compute.internal 
Node2: ip-10-52-2-54.ap-northeast-2.compute.internal 

Master config details:

[root@master ~]# egrep -v '^#|^$' /etc/etcd/etcd.conf 
ETCD_NAME=default 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 
ETCD_LISTEN_PEER_URLS="http://localhost:2380" 
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" 
[root@master ~]# egrep -v '^#|^$' /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true" 
KUBE_LOG_LEVEL="--v=0" 
KUBE_ALLOW_PRIV="--allow-privileged=false" 
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080" 
[root@master ~]# egrep -v '^#|^$' /etc/kubernetes/apiserver 
KUBE_API_ADDRESS="--address=0.0.0.0" 
KUBE_ETCD_SERVERS="--etcd_servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379" 
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" 
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota" 
KUBE_API_ARGS="--service_account_key_file=/serviceaccount.key" 
[root@master ~]# egrep -v '^#|^$' /etc/sysconfig/flanneld 
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379" 
FLANNEL_ETCD_KEY="/coreos.com/network" 
FLANNEL_OPTIONS="eth0" 

Node1/Node2 config details are the same, as follows: 
[root@node1 ec2-user]# egrep -v '^$|^#' /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true" 
KUBE_LOG_LEVEL="--v=0" 
KUBE_ALLOW_PRIV="--allow-privileged=false" 
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080" 
[root@node1 ec2-user]# egrep -v '^#|^$' /etc/kubernetes/kubelet 
KUBELET_ADDRESS="--address=0.0.0.0" 
KUBELET_HOSTNAME="--hostname-override=ip-10-52-2-59.ap-northeast-2.compute.internal" 
KUBELET_API_SERVER="--api-servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080" 
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest" 
KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local" 
[root@node1 ec2-user]# grep KUBE_PROXY_ARGS /etc/kubernetes/proxy 
KUBE_PROXY_ARGS="" 
[root@node1 ec2-user]# egrep -v '^#|^$' /etc/sysconfig/flanneld 
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379" 
FLANNEL_ETCD_KEY="/coreos.com/network" 
FLANNEL_OPTIONS="eth0" 
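
(As a sanity check, not part of the original post: each node's flannel lease and the bridge address Docker ended up with can be compared directly. A sketch assuming the stock flannel file locations on RHEL 7:)

    cat /run/flannel/subnet.env   # the 10.20.x.0/24 lease flanneld acquired on this node
    ip addr show docker0          # should fall inside that lease, not in 172.17.0.0/16

    # On the master: list all node leases registered in etcd
    etcdctl ls /coreos.com/network/subnets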

Kube-DNS is running with the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    version: v20
  name: kube-dns-v20
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v20
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v20
    spec:
      containers:
      - name: kubedns
        image: "gcr.io/google_containers/kubedns-amd64:1.9"
        args:
        - "--domain=cluster.local"
        - "--kube-master-url=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
        - "--dns-port=10053"
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 500Mi
      - name: dnsmasq
        image: "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
        args:
        - "--cache-size=1000"
        - "--no-resolv"
        - "--server=127.0.0.1#10053"
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: "gcr.io/google_containers/exechealthz-amd64:1.2"
        args:
        - "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null"
        - "-port=8080"
        - "-quiet"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
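
(The lookup tests described below were run from busybox pods, roughly along these lines; the commands are an illustrative sketch, not copied from the post:)

    kubectl --namespace=kube-system get pods -l k8s-app=kube-dns -o wide

    # Throwaway busybox pod, then query the cluster DNS service IP directly:
    kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600
    kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local 10.254.0.2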

What happened: Kubernetes DNS works only from the node where the kube-dns pod is running; when the kube-dns pods are scaled, DNS does not work from anywhere (any node).

In the image below, a single DNS pod is running on node1; the busybox pod on node1 gets a response, but nslookup from the busybox pod on node2 gets no response.

[image1]

Now, in the image below, two DNS pods are running, one on node1 and one on node2, and you can see that NO response comes back from the busybox pods on either node.

[image2]

Some other observations below:

The DNS pods most of the time get an IP in the 172.17.x.x range; if I scale beyond 4 pods, the DNS pods on node2 get an IP in the 10.20.x.x range.

The interesting part: node2 pods start with 10.20.x.x IPs, but node1 pods start with 172.17.x.x IPs.
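
(The per-node pod IPs above can be listed with something like the following; illustrative, not from the original post:)

    kubectl get pods --all-namespaces -o wide   # shows each pod's IP and the node it runs on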

Some iptables-save output from both nodes:

[root@node1 ec2-user]# iptables-save | grep DNAT 
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090 
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181 
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092 
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443 
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000 
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001 
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125 
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071 
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017 
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000 

[root@node1 ec2-user]# iptables-save | grep 53 
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU 
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4 

--------- 

[root@node2 ec2-user]# iptables-save | grep DNAT 
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090 
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181 
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092 
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443 
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000 
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001 
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125 
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071 
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017 
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000 

[root@node2 ec2-user]# iptables-save | grep 53 
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53 
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4 
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU 
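
(Note that both nodes DNAT the kube-dns cluster IP 10.254.0.2 to the same endpoint pod IP, 172.17.0.10. The full service-to-endpoint path can be traced like this; the commands are illustrative:)

    # cluster IP -> KUBE-SVC chain -> KUBE-SEP chain -> pod IP
    iptables-save -t nat | grep -E 'KUBE-SVC-TCOU7JCQXEZGVUNU|KUBE-SEP-D4NTKJJ3YXXGJARZ'

    # Cross-check against the endpoints the API server publishes:
    kubectl --namespace=kube-system get endpoints kube-dns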

Restarted the services below on both nodes: 

    for SERVICES in flanneld docker kube-proxy.service kubelet.service; do 
    systemctl stop $SERVICES 
    systemctl start $SERVICES 
    done 
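
(After a restart in this order, flanneld should be up before Docker so that docker0 is re-created inside the flannel subnet. A quick check, assuming the systemd unit names used above:)

    for SVC in flanneld docker kube-proxy kubelet; do systemctl is-active $SVC; done
    ip link show flannel.1   # the vxlan device flanneld creates; must exist on every node
    ip addr show docker0     # should now carry an address from the 10.20.0.0/16 range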

Node1: ifconfig 

    [root@node1 ec2-user]# ifconfig 
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0 
      inet6 fe80::42:2dff:fe01:c0b0 prefixlen 64 scopeid 0x20<link> 
      ether 02:42:2d:01:c0:b0 txqueuelen 0 (Ethernet) 
      RX packets 1718522 bytes 154898857 (147.7 MiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 1704874 bytes 2186333188 (2.0 GiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001 
      inet 10.52.2.59 netmask 255.255.255.224 broadcast 10.52.2.63 
      inet6 fe80::91:9aff:fe7e:20a7 prefixlen 64 scopeid 0x20<link> 
      ether 02:91:9a:7e:20:a7 txqueuelen 1000 (Ethernet) 
      RX packets 2604083 bytes 2208387383 (2.0 GiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 1974861 bytes 593497458 (566.0 MiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 
      inet 127.0.0.1 netmask 255.0.0.0 
      inet6 ::1 prefixlen 128 scopeid 0x10<host> 
      loop txqueuelen 1 (Local Loopback) 
      RX packets 80 bytes 7140 (6.9 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 80 bytes 7140 (6.9 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth01225a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::1034:a8ff:fe79:aba3 prefixlen 64 scopeid 0x20<link> 
      ether 12:34:a8:79:ab:a3 txqueuelen 0 (Ethernet) 
      RX packets 1017 bytes 100422 (98.0 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 1869 bytes 145519 (142.1 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth3079eb6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::90c2:62ff:fe84:fb53 prefixlen 64 scopeid 0x20<link> 
      ether 92:c2:62:84:fb:53 txqueuelen 0 (Ethernet) 
      RX packets 4891 bytes 714845 (698.0 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 5127 bytes 829516 (810.0 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth3be8c1f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::c8a5:64ff:fe15:be95 prefixlen 64 scopeid 0x20<link> 
      ether ca:a5:64:15:be:95 txqueuelen 0 (Ethernet) 
      RX packets 210 bytes 27750 (27.0 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 307 bytes 35118 (34.2 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth559a1ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::100b:23ff:fe60:3752 prefixlen 64 scopeid 0x20<link> 
      ether 12:0b:23:60:37:52 txqueuelen 0 (Ethernet) 
      RX packets 14926 bytes 1931413 (1.8 MiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 14375 bytes 19695295 (18.7 MiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth5c05729: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::cca1:4ff:fe5d:14cd prefixlen 64 scopeid 0x20<link> 
      ether ce:a1:04:5d:14:cd txqueuelen 0 (Ethernet) 
      RX packets 455 bytes 797963 (779.2 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 681 bytes 83904 (81.9 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth85ba9a9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::74ca:90ff:feae:6f4d prefixlen 64 scopeid 0x20<link> 
      ether 76:ca:90:ae:6f:4d txqueuelen 0 (Ethernet) 
      RX packets 19 bytes 1404 (1.3 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 66 bytes 4568 (4.4 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    vetha069d16: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::accd:eeff:fe21:6eda prefixlen 64 scopeid 0x20<link> 
      ether ae:cd:ee:21:6e:da txqueuelen 0 (Ethernet) 
      RX packets 3566 bytes 7353788 (7.0 MiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 2560 bytes 278400 (271.8 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    vetha58e4af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::6cd2:16ff:fee2:aa59 prefixlen 64 scopeid 0x20<link> 
      ether 6e:d2:16:e2:aa:59 txqueuelen 0 (Ethernet) 
      RX packets 779 bytes 62585 (61.1 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 1014 bytes 109417 (106.8 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    vethb7bbef5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::5ce6:6fff:fe31:c3e prefixlen 64 scopeid 0x20<link> 
      ether 5e:e6:6f:31:0c:3e txqueuelen 0 (Ethernet) 
      RX packets 589 bytes 55654 (54.3 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 573 bytes 74014 (72.2 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    vethbda3e0a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::9c0a:f2ff:fea5:23a2 prefixlen 64 scopeid 0x20<link> 
      ether 9e:0a:f2:a5:23:a2 txqueuelen 0 (Ethernet) 
      RX packets 490 bytes 47064 (45.9 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 645 bytes 77464 (75.6 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    vethfc65cc3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
      inet6 fe80::b854:dcff:feb4:f4ba prefixlen 64 scopeid 0x20<link> 
      ether ba:54:dc:b4:f4:ba txqueuelen 0 (Ethernet) 
      RX packets 503 bytes 508251 (496.3 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 565 bytes 73145 (71.4 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 


Node2: ifconfig 

    [root@node2 ec2-user]# ifconfig 
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951 
      inet 10.20.48.1 netmask 255.255.255.0 broadcast 0.0.0.0 
      inet6 fe80::42:87ff:fe39:2ef0 prefixlen 64 scopeid 0x20<link> 
      ether 02:42:87:39:2e:f0 txqueuelen 0 (Ethernet) 
      RX packets 269123 bytes 22165441 (21.1 MiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 419870 bytes 149980299 (143.0 MiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001 
      inet 10.52.2.54 netmask 255.255.255.224 broadcast 10.52.2.63 
      inet6 fe80::9a:d8ff:fed3:4cf5 prefixlen 64 scopeid 0x20<link> 
      ether 02:9a:d8:d3:4c:f5 txqueuelen 1000 (Ethernet) 
      RX packets 1517512 bytes 938147149 (894.6 MiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 1425156 bytes 1265738472 (1.1 GiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951 
      inet 10.20.48.0 netmask 255.255.0.0 broadcast 0.0.0.0 
      ether 06:69:bf:c6:8a:12 txqueuelen 0 (Ethernet) 
      RX packets 0 bytes 0 (0.0 B) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 0 bytes 0 (0.0 B) 
      TX errors 0 dropped 1 overruns 0 carrier 0 collisions 0 

    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 
      inet 127.0.0.1 netmask 255.0.0.0 
      inet6 ::1 prefixlen 128 scopeid 0x10<host> 
      loop txqueuelen 1 (Local Loopback) 
      RX packets 106 bytes 8792 (8.5 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 106 bytes 8792 (8.5 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

    veth9f05785: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 8951 
      inet6 fe80::d81e:d3ff:fe5e:bade prefixlen 64 scopeid 0x20<link> 
      ether da:1e:d3:5e:ba:de txqueuelen 0 (Ethernet) 
      RX packets 31 bytes 2458 (2.4 KiB) 
      RX errors 0 dropped 0 overruns 0 frame 0 
      TX packets 37 bytes 4454 (4.3 KiB) 
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 

The ifconfig output of both nodes is shown above (note the interfaces on node-1).

Answers


Check the flanneld process; the flannel.1 interface missing from node-1 is a bit confusing. Check /var/log/messages, and also compare the flannel config file on both nodes: /etc/sysconfig/flanneld
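
(A sketch of the checks suggested here; the commands are illustrative and assume the stock systemd unit name and that the machines are reachable as node1/node2:)

    systemctl status flanneld             # is the daemon actually running on node-1?
    journalctl -u flanneld --no-pager     # or: grep -i flannel /var/log/messages
    ip link show flannel.1                # the interface flanneld should have created
    diff <(ssh node1 cat /etc/sysconfig/flanneld) <(ssh node2 cat /etc/sysconfig/flanneld)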


Problem resolved... looks like it was an issue with the network interface. @pawankkamboj pointed me in the right direction. Thanks, Alexander –


It looks like flannel is not running properly on node2. You should check the logs and the configuration, as Pawan already pointed out.

Also, you seem to be running a rather old version of Kubernetes. The current release is 1.5, and I would recommend using it.

Bare-metal setup guides found around the web tend to go stale rather quickly, even the official Kubernetes guides.

I would suggest not using any of those guides anymore and instead using a (semi-)automated deployment solution such as kargo (Ansible based) or kops (AWS only, Go based). If you don't want to use these automated solutions, you can try kubeadm, which is currently in alpha but may already be good enough for your purposes.
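
(For reference, the kubeadm flow at the time boiled down to roughly the following; flags and arguments are illustrative and version-dependent:)

    # On the master:
    kubeadm init

    # Install a pod network add-on (flannel, weave, calico, ...), then on each node:
    kubeadm join --token <token> <master-ip>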