Unable to access the Kubernetes Dashboard

2016-08-03

I have created a Kubernetes v1.3.3 cluster on CoreOS based on the contrib repo. My cluster looks healthy, and I would like to use the Dashboard, but I cannot access the UI even with all authentication disabled. Below are the details of the kubernetes-dashboard components, along with some API server configuration and output. What am I missing here?

Dashboard components

[email protected] ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP

[email protected] ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
[email protected] ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name:   kubernetes-dashboard 
Namespace:  kube-system 
Labels:   k8s-app=kubernetes-dashboard 
      kubernetes.io/cluster-service=true 
Selector:  k8s-app=kubernetes-dashboard 
Type:   ClusterIP 
IP:   172.20.164.194 
Port:   <unset> 80/TCP 
Endpoints:  172.16.100.9:9090 
Session Affinity: None 
No events. 

[email protected] ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z

API server configuration

/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws 
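
For reference, the answers below assume basic auth can be switched on for this API server. With Kubernetes 1.3 that would typically mean pointing kube-apiserver at a static CSV file via --basic-auth-file; the file path and credentials below are placeholders, not something taken from this cluster:

# each line of the basic-auth CSV is: password,user,uid
echo 'mysecretpassword,admin,1000' > /etc/kubernetes/basic_auth.csv
# then add --basic-auth-file=/etc/kubernetes/basic_auth.csv to the kube-apiserver
# flags shown above and restart the apiserver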

The API server is reachable from a remote host (laptop):

$ curl http://10.178.153.240:8080/ 
{ 
    "paths": [ 
    "/api", 
    "/api/v1", 
    "/apis", 
    "/apis/apps", 
    "/apis/apps/v1alpha1", 
    "/apis/autoscaling", 
    "/apis/autoscaling/v1", 
    "/apis/batch", 
    "/apis/batch/v1", 
    "/apis/batch/v2alpha1", 
    "/apis/extensions", 
    "/apis/extensions/v1beta1", 
    "/apis/policy", 
    "/apis/policy/v1alpha1", 
    "/apis/rbac.authorization.k8s.io", 
    "/apis/rbac.authorization.k8s.io/v1alpha1", 
    "/healthz", 
    "/healthz/ping", 
    "/logs/", 
    "/metrics", 
    "/swaggerapi/", 
    "/ui/", 
    "/version" 
    ] 
}

The UI is not accessible remotely:

$ curl -L http://10.178.153.240:8080/ui 
Error: 'dial tcp 172.16.100.9:9090: i/o timeout' 
Trying to reach: 'http://172.16.100.9:9090/' 

The UI is accessible from the minion node:

[email protected] ~$ curl -L 172.16.100.9:9090 
<!doctype html> <html ng-app="kubernetesDashboard">... 
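
As an aside, and assuming kubectl on the laptop is already configured against this cluster, another quick sanity check that bypasses the overlay routing entirely is kubectl port-forward (this was not part of the original troubleshooting, it is just a sketch):

kubectl port-forward kubernetes-dashboard-v1.1.0-nog8g 9090:9090 --namespace=kube-system
# in a second shell
curl http://127.0.0.1:9090/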

API server routing table

[email protected] ~ $ ip route show 
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024 
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240 
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024 
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0 
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1 

Minion (where the pod lives) routing table

[email protected] ~ $ ip route show 
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024 
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57 
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024 
172.16.0.0/12 dev flannel.1 
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1 

Flannel logs. It appears that this is the route flannel is misbehaving on. I see these errors in the logs, but restarting the daemon does not seem to resolve it.

...Watch subnets: client: etcd cluster is unavailable or misconfigured 

... L3 miss: 172.16.100.9 

... calling NeighSet: 172.16.100.9 
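
For what it's worth, flannel's view of the overlay can be checked directly in etcd. The key prefix below is flannel's conventional default (/coreos.com/network) and is an assumption, since the daemon's actual --etcd-prefix is not shown here; the etcd endpoint is the one from the apiserver flags above:

# the overlay network config flannel was started with
etcdctl --endpoints http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 get /coreos.com/network/config
# one subnet lease per node
etcdctl --endpoints http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 ls /coreos.com/network/subnets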

Could this be an issue with the service not being defined or created? Can you try pasting the output of kubectl describe svc? –


It definitely is, @SantanuDey. I added the describe call to the OP, and it is hitting the 172.16.100.9 endpoint that serves the page. – smugcloud


If you get a response from the service's :9090, that means it is working. You may need to define an additional service of type NodePort to be able to reach it via a node IP, or from outside the cluster. –

Answers


For anyone who finds their way to this question, I wanted to post the final resolution, because it was not a flannel, Kubernetes, or SkyDNS issue; it was an inadvertent firewall. As soon as I opened up the firewall on the API server, my flannel routes worked perfectly and I could access the Dashboard (assuming basic auth is enabled on the API server).

So in the end, user error :)


What traffic does Flannel need in order to function properly? Thanks! –


I had TCP traffic open in the firewall, but not UDP. Opening that up resolved it. – smugcloud
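
To make that concrete, the fix amounts to allowing flannel's overlay traffic between the nodes. The exact port depends on the flannel backend (the vxlan backend defaults to UDP 8472, the older udp backend to UDP 8285), the security group ID below is a hypothetical placeholder, and the node CIDR is taken from the route tables above:

# allow flannel vxlan traffic (UDP 8472) between cluster nodes
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp --port 8472 \
  --cidr 10.178.153.0/24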


If you add an additional service like the definition below, then I think you should be able to access the dashboard using any node's IP and the nodePort, which in this example is 30100:

kind: Service
apiVersion: v1
metadata:
  name: kube-expose-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30100
    targetPort: 9090
  selector:
    # the dashboard pod is labeled k8s-app=kubernetes-dashboard (see the pod YAML above)
    k8s-app: kubernetes-dashboard
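
Assuming a definition along those lines is saved as dashboard-nodeport.yaml (the filename is just a placeholder), it can be created and then tested against any node IP, for example the minion from this question:

kubectl create -f dashboard-nodeport.yaml
curl http://10.178.153.57:30100/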

Correct. There is no need to create another service for this; the NodePort can be added by simply patching the existing service, e.g. with 'kubectl edit svc kubernetes-dashboard' –
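
A non-interactive way to make the same change, sketched against the kubernetes-dashboard service shown earlier in the question:

kubectl --namespace=kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
# then read back the allocated nodePort
kubectl --namespace=kube-system get svc kubernetes-dashboard -o yaml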


Either you have to expose the service outside the cluster using a service of type NodePort, as mentioned in the previous answer, or, if you have enabled basic auth on your API server, you can reach your service with the following URL:

http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name

See: http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls
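
Plugging the master address from this question into that pattern, and assuming a self-signed serving cert (hence -k) plus whatever credentials are in the basic-auth file (the user:password pair below is a placeholder), the request would look roughly like:

curl -k -u admin:mysecretpassword https://10.178.153.240/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/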


Thanks Antoine. I added a basic auth file and restarted the api-server, and I'm still seeing the 'Error: 'dial tcp 172.16.100.9:9090: i/o timeout' Trying to reach: 'http://172.16.100.9:9090/'' issue. If I try to curl with the base64 user:pw, it results in Unauthorized. I did try NodePort as well and was unable to reach the underlying container externally. Is there a misconfiguration in the proxy? @antoine-cotten – smugcloud


That's because you're trying to reach the service IP, which most likely isn't routable from the network your workstation is in! Try the URL I posted (substituting 'namespace_name' and 'service_name'), and make sure you're using the *master* IP/address –


Yes, that's what I'm doing. Here is the full URL I'm trying to hit: https://10.178.153.240/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/ – smugcloud