Bug 1567023
Summary: | MountContainer: Glusterfs failed to mount by containerized mount utilities | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Wenqi He <wehe>
Component: | rhgs-server-container | Assignee: | Saravanakumar <sarumuga>
Status: | CLOSED NEXTRELEASE | QA Contact: | Prasanth <pprakash>
Severity: | low | Docs Contact: |
Priority: | low | |
Version: | rhgs-3.0 | CC: | aos-bugs, aos-storage-staff, jsafrane, kramdoss, madam, ndevos, rhs-bugs, sankarshan, wehe
Target Milestone: | --- | Keywords: | ZStream
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-02-07 11:14:19 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Wenqi He
2018-04-13 10:02:50 UTC
Wenqi, could you try again with this change? The yaml file used in comment #0 needs to be adjusted for OpenShift: it should give /var/lib/origin/openshift.local.volumes/ to the mount container instead of /var/lib/kubelet.

I updated the ds to use /var/lib/origin/openshift.local.volumes/, but it still does not work. Pod under the k8s-mount project:

```yaml
# oc get pods mounter-h4grg -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: 2018-06-29T06:24:43Z
  generateName: mounter-
  labels:
    controller-revision-hash: "3374779226"
    name: mounter
    pod-template-generation: "1"
  name: mounter-h4grg
  namespace: k8s-mount
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: mounter
    uid: 1c9ce29b-7b65-11e8-a5fc-fa163e83f6cc
  resourceVersion: "30821"
  selfLink: /api/v1/namespaces/k8s-mount/pods/mounter-h4grg
  uid: 1ca44702-7b65-11e8-a5fc-fa163e83f6cc
spec:
  containers:
  - env:
    - name: MOUNT_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: MOUNT_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MOUNT_POD_UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    - name: MOUNT_CONTAINER_NAME
      value: mounter
    image: jsafrane/mounter-daemonset:latest
    imagePullPolicy: IfNotPresent
    name: mounter
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/origin/openshift.local.volumes/
      mountPropagation: Bidirectional
      name: kubelet
    - mountPath: /sys
      name: sys
    - mountPath: /dev
      name: dev
    - mountPath: /etc/iscsi
      name: iscsi
    - mountPath: /run/lock/iscsi
      name: iscsilock
    - mountPath: /lib/modules
      name: modules
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-7wl4q
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  imagePullSecrets:
  - name: default-dockercfg-g4b2f
  nodeName: qe-wehe-node-1
  nodeSelector:
    node-role.kubernetes.io/compute: "true"
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - hostPath:
      path: /var/lib/origin/openshift.local.volumes/
      type: ""
    name: kubelet
  - hostPath:
      path: /dev
      type: ""
    name: dev
  - hostPath:
      path: /sys
      type: ""
    name: sys
  - hostPath:
      path: /etc/iscsi
      type: ""
    name: iscsi
  - hostPath:
      path: /run/lock/iscsi
      type: ""
    name: iscsilock
  - hostPath:
      path: /lib/modules
      type: ""
    name: modules
  - name: default-token-7wl4q
    secret:
      defaultMode: 420
      secretName: default-token-7wl4q
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:24:43Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:25:08Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:24:43Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://dd6e8c3e19e17147f7f7d9d27204ef4fea68461d88cef106a7d79aecf7a52720
    image: docker.io/jsafrane/mounter-daemonset:latest
    imageID: docker-pullable://docker.io/jsafrane/mounter-daemonset@sha256:2f5644c866deef8ce6ad5006f74780f8659dc020e54c1129af4e2b7343009d6d
    lastState: {}
    name: mounter
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-06-29T06:25:07Z
  hostIP: 172.16.120.80
  phase: Running
  podIP: 172.16.120.80
  qosClass: BestEffort
  startTime: 2018-06-29T06:24:43Z
```

The same error still occurs:

```
Events:
  Type     Reason       Age  From                     Message
  ----     ------       ---  ----                     -------
  Normal   Scheduled    47m  default-scheduler        Successfully assigned gluster to qe-wehe-node-1
  Warning  FailedMount  47m  kubelet, qe-wehe-node-1  MountVolume.SetUp failed for volume "pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/a602a87f-7b6b-11e8-a5fc-fa163e83f6cc/volumes/kubernetes.io~glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc/gluster-glusterfs.log,backup-volfile-servers=172.16.120.55:172.16.120.56:172.16.120.93 172.16.120.55:vol_ac9133fb2e5395f0130ca752009d3be3 /var/lib/origin/openshift.local.volumes/pods/a602a87f-7b6b-11e8-a5fc-fa163e83f6cc/volumes/kubernetes.io~glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc
Output: Running scope as unit run-11018.scope.
mount: unknown filesystem type 'glusterfs'
```

Worse, there is no kubernetes.io~glusterfs.json file under /var/lib/kubelet/plugin-containers/ on the host. The only one I can find is here:

```
# cat /var/lib/docker/devicemapper/mnt/143c577a046d36ca6ae3c411ed002e2e1893d3eb3955ce4b6e9e5408617533a1/rootfs/var/lib/kubelet/plugin-containers/kubernetes.io~glusterfs.json
{
  "podNamespace": "k8s-mount",
  "podName": "mounter-h4grg",
  "podUID": "1ca44702-7b65-11e8-a5fc-fa163e83f6cc",
  "containerName": "mounter"
}
```

The log for the mount process should be stored as /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc/gluster-glusterfs.log in the mounter pod. Can you verify that the path is available on both the host and inside the pod? If the path does not exist, up until which parent directory is available?
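A note on the failure mode, as a hedged diagnostic sketch (the pod name mounter-h4grg comes from the report above; this is not a command the reporters ran): `mount -t glusterfs` works by executing a `mount.glusterfs` helper, so "unknown filesystem type 'glusterfs'" means the helper was not found where the mount actually ran — here, on the host via systemd-run rather than inside the mounter container.

```shell
# Hypothetical check, not part of the original report: is the glusterfs mount
# helper visible where the mount command runs? On a host without glusterfs-fuse
# installed, the first command fails and the echo branch runs instead.
command -v mount.glusterfs || echo "mount.glusterfs not on host PATH"

# Inside the mounter pod it should exist (cluster-only, shown for illustration):
#   oc exec mounter-h4grg -- command -v mount.glusterfs
```

Either way the output names mount.glusterfs: a path like /sbin/mount.glusterfs if the helper is installed, or the fallback message if it is not.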
```
# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM        STORAGECLASS        REASON    AGE
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016   1Gi        RWO            Delete           Bound     kni-o/pvc1   glusterfs-storage             1m
```

Inside the mounter pod:

```
# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
mounter-r6mxg   1/1       Running   0          15m
# oc exec -it mounter-r6mxg bash
[root@ip-172-18-4-7 /]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016
[root@ip-172-18-4-7 /]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-fa58ed14-7ea7-11e8-896b-0e3681792016/
[root@ip-172-18-4-7 /]# exit
```

On the host:

```
[root@ip-172-18-4-7 ~]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-fa58ed14-7ea7-11e8-896b-0e3681792016/
[root@ip-172-18-4-7 ~]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016
```

So the path is available in both places.

mountContainers will not be supported with the OCS-3.x product. When CSI is used, similar functionality is available and is consumed automatically.
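For background, the mountContainers mechanism exercised in this bug hinges on a privileged DaemonSet that shares the kubelet volume directory with the host using bidirectional mount propagation, so filesystems mounted by utilities inside the pod become visible to kubelet. A minimal sketch under OpenShift paths follows; the image, names, and namespace are placeholders, not the exact DaemonSet used in the report:

```yaml
# Minimal sketch of a mount-utility DaemonSet (placeholder names/image).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mounter
  namespace: k8s-mount
spec:
  selector:
    matchLabels:
      name: mounter
  template:
    metadata:
      labels:
        name: mounter
    spec:
      hostNetwork: true
      containers:
      - name: mounter
        image: example/mounter:latest        # placeholder; must ship mount.glusterfs etc.
        securityContext:
          privileged: true                   # required to perform mounts
        volumeMounts:
        - name: kubelet
          # OpenShift keeps pod volumes here rather than /var/lib/kubelet:
          mountPath: /var/lib/origin/openshift.local.volumes/
          mountPropagation: Bidirectional    # mounts made here propagate back to the host
      volumes:
      - name: kubelet
        hostPath:
          path: /var/lib/origin/openshift.local.volumes/
```

Without `mountPropagation: Bidirectional` on that volumeMount, mounts performed inside the pod stay private to its mount namespace and kubelet on the host never sees them — which is why the path adjustment discussed above matters.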