2016-08-17

I am trying to mount an NFS volume into my pods, without success. Kubernetes fails with a timeout while mounting the volume.

I have an NFS export running on a server, and when I try to connect to it from another running server, the following works fine:

sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

Another thing worth mentioning: when I remove the volume from the deployment and the pod is running, I can log into it and telnet to 10.0.0.4 on ports 111 and 2049 successfully. So there really does not seem to be any communication problem.
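
For reference, the connectivity check from inside the pod looked roughly like this (a sketch; the pod name is a placeholder, and it assumes telnet is available in the image):

kubectl exec -it <pod-name> -- sh    # <pod-name> is a placeholder
# inside the container: port 111 is rpcbind/portmapper, port 2049 is nfsd
telnet 10.0.0.4 111
telnet 10.0.0.4 2049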

As well as:

showmount -e 10.0.0.4 
Export list for 10.0.0.4: 
/export/drive 10.0.0.0/16 
/export  10.0.0.0/16 

So I can assume there is no network or configuration problem between the server and the client (I am on Amazon, and the machine I tested from is in the same security group as the K8s minions).

PS: The server is a plain Ubuntu box with a 50GB disk.

Kubernetes v1.3.4

So I started by creating my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.4
    path: "/export"
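
Assuming the manifest above is saved as nfs-pv.yaml (a hypothetical file name), it can be created and checked with:

kubectl create -f nfs-pv.yaml
kubectl get pv nfs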

My PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
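
Likewise for the claim, assuming it is saved as nfs-claim.yaml (again a hypothetical file name); the second command confirms it binds to the PV:

kubectl create -f nfs-claim.yaml
kubectl get pvc nfs-claim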

Here is how kubectl describes them:

Name:           nfs
Labels:         <none>
Status:         Bound
Claim:          default/nfs-claim
Reclaim Policy: Retain
Access Modes:   RWX
Capacity:       50Gi
Message:
Source:
    Type:       NFS (an NFS mount that lasts the lifetime of a pod)
    Server:     10.0.0.4
    Path:       /export
    ReadOnly:   false
No events.

Name:           nfs-claim
Namespace:      default
Status:         Bound
Volume:         nfs
Labels:         <none>
Capacity:       0
Access Modes:
No events.

The pod deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: mypod
      labels:
        # Important: these labels need to match the selector above, the api server enforces this constraint
        name: mypod
    spec:
      containers:
      - name: abcd
        image: irrelevant to the question
        ports:
        - containerPort: 80
        env:
        - name: hello
          value: world
        volumeMounts:
        - mountPath: "/mnt"
          name: nfs
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-claim
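
Assuming the deployment is saved as mypod-deployment.yaml (a hypothetical file name), it can be created and its events inspected with:

kubectl create -f mypod-deployment.yaml
kubectl describe pod -l name=mypod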

When I deploy my pod I get the following:

Volumes:
    nfs:
        Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName: nfs-claim
        ReadOnly:  false
    default-token-6pd57:
        Type:       Secret (a volume populated by a Secret)
        SecretName: default-token-6pd57
QoS Tier: BestEffort
    Events: 
     FirstSeen LastSeen Count From       SubobjectPath Type  Reason  Message 
     --------- -------- ----- ----       ------------- -------- ------  ------- 
     13m  13m  1 {default-scheduler }       Normal  Scheduled Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal 
     11m  7s  6 {kubelet ip-10-0-0-157.us-west-2.compute.internal}   Warning  FailedMount Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs] 
     11m  7s  6 {kubelet ip-10-0-0-157.us-west-2.compute.internal}   Warning  FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs] 
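
A useful check at this point is the kubelet log on the node the pod was scheduled to, plus a manual mount attempt there (a sketch, assuming SSH access to the node and a systemd-managed kubelet):

ssh ip-10-0-0-157.us-west-2.compute.internal
# look for mount errors from the NFS volume plugin
journalctl -u kubelet | grep -i nfs
# try the same mount the kubelet would perform
sudo mkdir -p /mnt/test
sudo mount -t nfs 10.0.0.4:/export /mnt/test

A failure here such as "unknown filesystem type 'nfs'" would point to missing NFS client utilities on the node rather than a network problem.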

I have tried everything I know and everything I can think of. What am I missing or doing wrong here?

Answer


I tested Kubernetes versions 1.3.4 and 1.3.5, and NFS mounts did not work for me on either. Later I switched to 1.2.5, and that version gave me more detailed information (kubectl describe pod ...). It turned out that 'nfs-common' was missing from the hyperkube image. After I added nfs-common to all container instances based on the hyperkube image on both the master and worker nodes, the NFS shares started working (mounts succeeded). So that was the cause here. I verified it in practice, and it solved my problem.
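
A minimal sketch of that workaround, assuming the stock hyperkube base image of that era (the exact image name and tag are assumptions):

FROM gcr.io/google_containers/hyperkube-amd64:v1.3.4
# nfs-common provides mount.nfs, which the kubelet needs in order to mount NFS volumes
RUN apt-get update && \
    apt-get install -y nfs-common && \
    rm -rf /var/lib/apt/lists/*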


I can see an issue is already open for this, so hopefully it gets fixed upstream: https://github.com/kubernetes/kubernetes/issues/30310 – dejwsz


Actually, the fix is already applied to the hyperkube image on the 'master' branch (see the Dockerfile definition) – dejwsz