2016-11-23 92 views

I am trying to set up the Kubernetes DNS add-on based on this Ansible repo: https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons. The kube-dns pod and service come up, stay up for a while, and then suddenly die.

After running the playbook I could find neither the DNS pod nor the service. After some reading (https://github.com/kubernetes/contrib/issues/886#issuecomment-216741889) it seemed I needed to run rc.yml and svc.yml manually, which is what I did.
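For reference, the manual step looks roughly like this (a sketch; the exact manifest file names `skydns-rc.yml` and `skydns-svc.yml` are assumptions based on the repo layout, not taken from the question):

```shell
# Create the kube-dns replication controller and service by hand,
# then confirm they were scheduled into the kube-system namespace.
kubectl create -f skydns-rc.yml --namespace=kube-system
kubectl create -f skydns-svc.yml --namespace=kube-system
kubectl get pods,svc --namespace=kube-system
```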

Unfortunately, the DNS pod and service still only stay up for a while and then terminate suddenly.

I checked some of the pod's logs before it went down:

etcd logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c etcd 

2016/11/21 13:05:04 etcd: listening for peers on http://localhost:2380 
2016/11/21 13:05:04 etcd: listening for peers on http://localhost:7001 
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:2379 
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:4001 
2016/11/21 13:05:04 etcdserver: datadir is valid for the 2.0.1 format 
2016/11/21 13:05:04 etcdserver: name = default 
2016/11/21 13:05:04 etcdserver: data dir = /var/etcd/data 
2016/11/21 13:05:04 etcdserver: member dir = /var/etcd/data/member 
2016/11/21 13:05:04 etcdserver: heartbeat = 100ms 
2016/11/21 13:05:04 etcdserver: election = 1000ms 
2016/11/21 13:05:04 etcdserver: snapshot count = 10000 
2016/11/21 13:05:04 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001 
2016/11/21 13:05:04 etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001 
2016/11/21 13:05:04 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001 
2016/11/21 13:05:04 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a 
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 0 
2016/11/21 13:05:04 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] 
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 1 
2016/11/21 13:05:04 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 http://localhost:7001] to cluster f68652439e3f8f2a 
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c is starting a new election at term 1 
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became candidate at term 2 
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2 
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became leader at term 2 
2016/11/21 13:05:06 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2 
2016/11/21 13:05:06 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a 

skydns logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c skydns 

2016/11/21 13:07:14 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [10] 
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 

healthz logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c healthz 

2016/11/21 13:05:58 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:05:59 Client ip 12.16.64.1:45631 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:00 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:02 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:04 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:06 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:08 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:08 Client ip 12.16.64.1:45652 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:10 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:12 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:14 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:16 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:18 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:18 Client ip 12.16.64.1:45673 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:20 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:22 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:24 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:26 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:28 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 
2016/11/21 13:06:28 Client ip 12.16.64.1:45693 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null 

kube2sky logs

Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213227  1 kube2sky.go:529] Using https://10.254.0.1:443 for kubernetes master 
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213279  1 kube2sky.go:530] Using kubernetes API <nil> 
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.214181  1 kube2sky.go:598] Waiting for service: default/kubernetes 
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: 2016/11/23 07:09:26 Worker running nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null 
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.508032  1 kube2sky.go:660] Successfully added DNS record for Kubernetes service. 
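The container logs above all look healthy on their own, so it also helps to ask Kubernetes why the pod was killed. A sketch of the usual checks (the pod name is taken from the logs above; output will depend on the cluster):

```shell
# Show restart counts and the last termination reason for each container
kubectl describe pod kube-dns-v8-ujfqn --namespace=kube-system

# Cluster events often record OOM kills, failed liveness probes, etc.
kubectl get events --namespace=kube-system

# A failing liveness probe restarts the pod; rerun the same nslookup
# the healthz container uses, from inside the pod, to see if it passes.
kubectl exec kube-dns-v8-ujfqn --namespace=kube-system -c skydns -- \
  nslookup kubernetes.default.svc.cluster.local 127.0.0.1
```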

What am I doing wrong?

Answer


Which versions of Kubernetes and of the DNS containers are you using? I see the repo is using v11. I had similar problems with v11, and I have now been running kube-dns v19 for a month without trouble.
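One way to confirm which add-on version is actually deployed is to read the image tags off the running pod (a sketch; the pod name is taken from the question, and the jsonpath template is one of several ways to format this):

```shell
# Print the name and image (with tag) of every container in the kube-dns pod
kubectl get pod kube-dns-v8-ujfqn --namespace=kube-system \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'

# Report client and server versions of Kubernetes itself
kubectl version
```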


I am using Kubernetes 1.2.0 on the master and 1.2.4 on the minions – mootez


If you want some extra information, follow my [issue](https://github.com/kubernetes/kubernetes/issues/37352) on GitHub – mootez