
So I have a 3-node Kubernetes cluster running on three Raspberry Pis with HypriotOS. I haven't done anything since bootstrapping and joining the nodes other than installing Weave. However, when I enter kubectl cluster-info I get the two entries below, yet Kubernetes reports 'no endpoints available for service "kube-dns"':

Kubernetes master is running at https://192.168.0.35:6443 
KubeDNS is running at https://192.168.0.35:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy 

When I curl the second URL, I get the following response:

{ 
    "kind": "Status", 
    "apiVersion": "v1", 
    "metadata": {}, 
    "status": "Failure", 
    "message": "no endpoints available for service \"kube-dns\"", 
    "reason": "ServiceUnavailable", 
    "code": 503 
} 
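
For reference, nothing fancier than an authenticated curl straight at that apiserver URL is needed to see this; the client certificate and key below are placeholders for whatever credentials your cluster accepts:

$ curl -k --cert admin.crt --key admin.key \
    https://192.168.0.35:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy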

Here are some more details about the state of my cluster.

$ kubectl version 
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"} 
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"} 




$ kubectl get pods --all-namespaces 
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   etcd-node01                      1/1     Running   0          13d
kube-system   kube-apiserver-node01            1/1     Running   21         13d
kube-system   kube-controller-manager-node01   1/1     Running   5          13d
kube-system   kube-dns-2459497834-v1g4n        3/3     Running   43         13d
kube-system   kube-proxy-1hplm                 1/1     Running   0          5h
kube-system   kube-proxy-6bzvr                 1/1     Running   0          13d
kube-system   kube-proxy-cmp3q                 1/1     Running   0          6d
kube-system   kube-scheduler-node01            1/1     Running   8          13d
kube-system   weave-net-5cq9c                  2/2     Running   0          6d
kube-system   weave-net-ff5sz                  2/2     Running   4          13d
kube-system   weave-net-z3nq3                  2/2     Running   0          5h


$ kubectl get svc --all-namespaces 
NAMESPACE     NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   10.96.0.1    <none>        443/TCP         13d
kube-system   kube-dns     10.96.0.10   <none>        53/UDP,53/TCP   13d


$ kubectl --namespace kube-system describe pod kube-dns-2459497834-v1g4n 
Name:   kube-dns-2459497834-v1g4n 
Namespace:  kube-system 
Node:   node01/192.168.0.35 
Start Time:  Wed, 23 Aug 2017 20:34:56 +0000 
Labels:   k8s-app=kube-dns 
       pod-template-hash=2459497834 
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-2459497834","uid":"37640de4-8841-11e7-ad32-b827eb0a... 
       scheduler.alpha.kubernetes.io/critical-pod= 
Status:   Running 
IP:    10.32.0.2 
Created By:  ReplicaSet/kube-dns-2459497834 
Controlled By: ReplicaSet/kube-dns-2459497834 
Containers: 
    kubedns: 
    Container ID:  docker://9a781f1fea4c947a9115c551e65c232d5fe0aa2045e27e79eae4b057b68e4914 
    Image:    gcr.io/google_containers/k8s-dns-kube-dns-arm:1.14.4 
    Image ID:   docker-pullable://gcr.io/google_containers/[email protected]:ac677e54bef9717220a0ba2275ba706111755b2906de689d71ac44bfe425946d 
    Ports:    10053/UDP, 10053/TCP, 10055/TCP 
    Args: 
     --domain=cluster.local. 
     --dns-port=10053 
     --config-dir=/kube-dns-config 
     --v=2 
    State:    Running 
     Started:   Tue, 29 Aug 2017 19:09:10 +0000 
    Last State:   Terminated 
     Reason:   Error 
     Exit Code:  137 
     Started:   Tue, 29 Aug 2017 17:07:49 +0000 
     Finished:   Tue, 29 Aug 2017 19:09:08 +0000 
    Ready:    True 
    Restart Count:  18 
    Limits: 
     memory: 170Mi 
    Requests: 
     cpu:  100m 
     memory: 70Mi 
    Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5 
    Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3 
    Environment: 
     PROMETHEUS_PORT: 10055 
    Mounts: 
     /kube-dns-config from kube-dns-config (rw) 
     /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rf19g (ro) 
    dnsmasq: 
    Container ID:  docker://f8e17df36310bc3423a74e3f6989204abac9e83d4a8366561e54259418030a50 
    Image:    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm:1.14.4 
    Image ID:   docker-pullable://gcr.io/google_containers/[email protected]:a7469e91b4b20f31036448a61c52e208833c7cb283faeb4ea51b9fa22e18eb69 
    Ports:    53/UDP, 53/TCP 
    Args: 
     -v=2 
     -logtostderr 
     -configDir=/etc/k8s/dns/dnsmasq-nanny 
     -restartDnsmasq=true 
     -- 
     -k 
     --cache-size=1000 
     --log-facility=- 
     --server=/cluster.local/127.0.0.1#10053 
     --server=/in-addr.arpa/127.0.0.1#10053 
     --server=/ip6.arpa/127.0.0.1#10053 
    State:    Running 
     Started:   Tue, 29 Aug 2017 19:09:52 +0000 
    Last State:   Terminated 
     Reason:   Error 
     Exit Code:  137 


$ kubectl --namespace kube-system describe svc kube-dns 
Name:   kube-dns 
Namespace:  kube-system 
Labels:   k8s-app=kube-dns 
      kubernetes.io/cluster-service=true 
      kubernetes.io/name=KubeDNS 
Annotations:  <none> 
Selector:  k8s-app=kube-dns 
Type:   ClusterIP 
IP:   10.96.0.10 
Port:   dns 53/UDP 
Endpoints:  10.32.0.2:53 
Port:   dns-tcp 53/TCP 
Endpoints:  10.32.0.2:53 
Session Affinity: None 
Events:   <none> 

I can't figure out what is going on here, since I haven't done anything other than follow the instructions here. The problem has persisted across multiple versions of Kubernetes and multiple network overlays (including Flannel), so it's starting to make me think it's an issue with the RPis themselves.
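
For completeness, "the instructions" amount to the stock kubeadm bootstrap plus the Weave add-on, roughly along these lines (the join token is a placeholder, and the Weave manifest URL is the one from the Weave documentation of that time, not something specific to my setup):

# on the master (node01)
$ kubeadm init

# on each worker, using the token printed by kubeadm init
$ kubeadm join --token <token> 192.168.0.35:6443

# network add-on
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"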


'kubectl --namespace kube-system describe pod kube-dns-2459497834-v1g4n' and 'kubectl --namespace kube-system describe svc kube-dns', please –


I have added the requested information. As you can see, the pod is running, but it restarts from time to time. Not sure what else there is to look at here. – jzeef

Answer


Update: the assumption below is not the complete explanation for this error message. The proxy API states:

Create Connect Proxy

connect GET requests to proxy of Pod

GET /api/v1/namespaces/{namespace}/pods/{name}/proxy

The question now is what exactly connect GET requests to proxy of Pod means, but I firmly believe it means that the GET request is forwarded to that pod. That would mean the assumption below is correct.
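
One way to poke at this yourself (just a sketch; going through kubectl proxy saves dealing with certificates) is to open the local API proxy and issue the GET against the pod proxy path quoted above:

$ kubectl proxy --port=8001 &
# the GET is forwarded to the pod; whatever the pod answers on the proxied port
# (or an error, if it has no HTTP handler there) comes straight back
$ curl http://localhost:8001/api/v1/namespaces/kube-system/pods/kube-dns-2459497834-v1g4n/proxy/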

I checked other services that are not designed for HTTP traffic, and they all produce this error message, whereas services designed for HTTP traffic work fine (for example /api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy).
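
With the same local proxy still running, the contrast is easy to reproduce (this assumes the dashboard is deployed in kube-system, as in the path above):

# HTTP-serving service: returns the dashboard page
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
# DNS-only service: returns the ServiceUnavailable error from the question
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns/proxy/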


I think this is normal behaviour and nothing to worry about. If you look at the kube-dns service object in your cluster, you can see that it only exposes internal endpoints on port 53, the standard DNS port, so I assume the kube-dns service only responds to proper DNS queries. With curl you are issuing a plain GET request against this service, which results in the error response.
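
If you want to double-check that DNS itself is healthy, the usual test (a sketch; pick a busybox image that has an ARM variant) is to resolve a cluster name from a throwaway pod:

$ kubectl run -i --tty --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default
# a reply coming from 10.96.0.10 (the kube-dns ClusterIP in your svc output)
# means the service is answering real DNS queries just fine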

Judging from the cluster information you posted, all of your pods look fine, and I would bet your service endpoints are exposed correctly too. You can check that with kubectl get ep kube-dns --namespace=kube-system, which should give you something like this:

$ kubectl get ep kube-dns --namespace=kube-system 
NAME       ENDPOINTS                                                          AGE
kube-dns   100.101.26.65:53,100.96.150.198:53,100.101.26.65:53 + 1 more...   20d

In my cluster (k8s 1.7.3) a curl GET to /api/v1/namespaces/kube-system/services/kube-dns/proxy also results in the error message you mentioned, but I have never had any DNS problems, so I hope my assumption about this one is correct.
