Bug 1567023 - MountContainer: GlusterFS fails to mount via containerized mount utilities
Summary: MountContainer: GlusterFS fails to mount via containerized mount utilities
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhgs-server-container
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Saravanakumar
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-13 10:02 UTC by Wenqi He
Modified: 2019-02-07 11:14 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-07 11:14:19 UTC
Embargoed:



Description Wenqi He 2018-04-13 10:02:50 UTC
Description of problem:
Mounting via the mount container pod fails after enabling the MountContainers feature gate. Since this feature gate will stay alpha forever and is not intended for end users, the severity is set to low.

Version-Release number of selected component (if applicable):
openshift v3.9.20
kubernetes v1.9.1+a0ce1bc657

How reproducible:
Always

Steps to Reproduce:
1. Enable the MountPropagation and MountContainers feature gates in the master config:
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - MountPropagation=true
    - MountContainers=true
  controllerArguments:
    feature-gates:
    - MountPropagation=true
    - MountContainers=true
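
The node side presumably needs the same feature gates; a sketch for /etc/origin/node/node-config.yaml, assuming the standard kubeletArguments stanza:

kubeletArguments:
  feature-gates:
  - MountPropagation=true
  - MountContainers=true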

2. Create a project to run the mount container pods with the mounter DaemonSet:
oc new-project k8s-mount
3. Add the project's service accounts to the privileged SCC:
groups:
- system:serviceaccounts:k8s-mount
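
Equivalently (assuming a cluster-admin context), the SCC edit can be done with:

# oc adm policy add-scc-to-group privileged system:serviceaccounts:k8s-mount
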
4. Create the mounter DaemonSet:
oc create -f https://raw.githubusercontent.com/jsafrane/mounter-daemonset/master/daemon.yaml

5. Remove the glusterfs packages on the node:
# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.8.4-53.el7.x86_64
glusterfs-3.8.4-53.el7.x86_64
glusterfs-fuse-3.8.4-53.el7.x86_64
glusterfs-libs-3.8.4-53.el7.x86_64
# yum remove glusterfs*

6. Create the GlusterFS StorageClass, PVC, and pod (a minimal sketch follows).
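
A minimal PVC and pod matching the dumps below; the names pvc1 and glusterfs-storage, the gluster pod, the /mnt/gluster mount, and the aosqe/hello-openshift image are all taken from this report, the rest is a sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: gluster
  labels:
    name: gluster
spec:
  containers:
  - name: gluster
    image: aosqe/hello-openshift
    volumeMounts:
    - mountPath: /mnt/gluster
      name: gluster
  volumes:
  - name: gluster
    persistentVolumeClaim:
      claimName: pvc1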

Actual results:
The pod does not reach Running and reports the error below:

# oc describe pods -n mim4b
Name:         gluster
Namespace:    mim4b
Node:         172.16.120.104/172.16.120.104
Start Time:   Fri, 13 Apr 2018 02:38:03 -0400
Labels:       name=gluster
Annotations:  openshift.io/scc=privileged
Status:       Pending
IP:           
Containers:
  gluster:
    Container ID:   
    Image:          aosqe/hello-openshift
    Image ID:       
    Port:           <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/gluster from gluster (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j6fxg (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  gluster:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc1
    ReadOnly:   false
  default-token-j6fxg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j6fxg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  router=enabled
Tolerations:     <none>
Events:
  Type     Reason                 Age   From                     Message
  ----     ------                 ----  ----                     -------
  Normal   Scheduled              44s   default-scheduler        Successfully assigned gluster to 172.16.120.104
  Normal   SuccessfulMountVolume  44s   kubelet, 172.16.120.104  MountVolume.SetUp succeeded for volume "default-token-j6fxg"
  Warning  FailedMount            44s   kubelet, 172.16.120.104  MountVolume.SetUp failed for volume "pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff --scope -- mount -t glusterfs -o log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff/gluster-glusterfs.log,backup-volfile-servers=172.16.120.110:172.16.120.51:172.16.120.78,auto_unmount,log-level=ERROR 172.16.120.110:vol_e0bd92096d38791995041b94acfb3abd /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff
Output: Running scope as unit run-64679.scope.
mount: unknown filesystem type 'glusterfs'

 the following error information was pulled from the glusterfs log to help diagnose this issue: glusterfs: could not open log file for pod: gluster
  Warning  FailedMount  43s  kubelet, 172.16.120.104  MountVolume.SetUp failed for volume "pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff" : mount failed: mount failed: exit status 32
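
"unknown filesystem type 'glusterfs'" means the mount(8) call ran on the host, where glusterfs-fuse (and thus the mount.glusterfs helper) was removed in step 5, rather than inside a mounter pod. A quick way to check where the FUSE helper lives (a sketch; mount.glusterfs is shipped by glusterfs-fuse):

# On the host this should now fail:
# ls /sbin/mount.glusterfs
ls: cannot access /sbin/mount.glusterfs: No such file or directory
# Inside a mounter pod it should exist:
# oc exec -n k8s-mount mounter-8xdg9 -- ls /sbin/mount.glusterfs
/sbin/mount.glusterfs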


Expected results:
The pod reaches the Running state.

Master Log:
Did not find any useful log entries.

Node Log (of failed PODs):
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: E0413 02:38:36.167121   42001 mount_linux.go:147] Mount failed: exit status 32
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: Mounting command: systemd-run
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff/gluster-glusterfs.log,backup-volfile-servers=172.16.120.110:172.16.120.51:172.16.120.78 172.16.120.110:vol_e0bd92096d38791995041b94acfb3abd /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: Output: Running scope as unit run-64752.scope.
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: mount: unknown filesystem type 'glusterfs'
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: I0413 02:38:36.167168   42001 glusterfs_util.go:37] glusterfs: failure, now attempting to read the gluster log for pod gluster
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: W0413 02:38:36.167206   42001 util.go:133] Warning: "/var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff" is not a mountpoint, deleting
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: E0413 02:38:36.167339   42001 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/glusterfs/37a055ba-3ee5-11e8-aa63-fa163ec22fff-pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff\" (\"37a055ba-3ee5-11e8-aa63-fa163ec22fff\")" failed. No retries permitted until 2018-04-13 02:39:08.167295214 -0400 EDT m=+12034.815407151 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff\" (UniqueName: \"kubernetes.io/glusterfs/37a055ba-3ee5-11e8-aa63-fa163ec22fff-pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff\") pod \"gluster\" (UID: \"37a055ba-3ee5-11e8-aa63-fa163ec22fff\") : mount failed: mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff/gluster-glusterfs.log,backup-volfile-servers=172.16.120.110:172.16.120.51:172.16.120.78 172.16.120.110:vol_e0bd92096d38791995041b94acfb3abd /var/lib/origin/openshift.local.volumes/pods/37a055ba-3ee5-11e8-aa63-fa163ec22fff/volumes/kubernetes.io~glusterfs/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff\nOutput: Running scope as unit run-64752.scope.\nmount: unknown filesystem type 'glusterfs'\n\n the following error information was pulled from the glusterfs log to help diagnose this issue: glusterfs: could not open log file for pod: gluster"
Apr 13 02:38:36 host-172-16-120-104 atomic-openshift-node: I0413 02:38:36.167576   42001 server.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"mim4b", Name:"gluster", UID:"37a055ba-3ee5-11e8-aa63-fa163ec22fff", APIVersion:"v1", ResourceVersion:"50995", FieldPath:""}): type: 'Warning' reason: 'FailedMount' MountVolume.SetUp failed for volume "pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff" : mount failed: mount failed: exit status 32


PV Dump:
# oc get pv pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    Description: 'Gluster-Internal: Dynamically provisioned PV'
    gluster.kubernetes.io/heketi-volume-id: e0bd92096d38791995041b94acfb3abd
    gluster.org/type: file
    kubernetes.io/createdby: heketi-dynamic-provisioner
    pv.beta.kubernetes.io/gid: "2000"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
    volume.beta.kubernetes.io/mount-options: auto_unmount
  creationTimestamp: 2018-04-13T06:37:58Z
  name: pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff
  resourceVersion: "50982"
  selfLink: /api/v1/persistentvolumes/pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff
  uid: 3481ac32-3ee5-11e8-aa63-fa163ec22fff
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc1
    namespace: mim4b
    resourceVersion: "50954"
    uid: 30374d35-3ee5-11e8-aa63-fa163ec22fff
  glusterfs:
    endpoints: glusterfs-dynamic-pvc1
    path: vol_e0bd92096d38791995041b94acfb3abd
  persistentVolumeReclaimPolicy: Delete
  storageClassName: glusterfs-storage
status:
  phase: Bound


PVC Dump:
# oc get pvc -n mim4b -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
    creationTimestamp: 2018-04-13T06:37:51Z
    name: pvc1
    namespace: mim4b
    resourceVersion: "50984"
    selfLink: /api/v1/namespaces/mim4b/persistentvolumeclaims/pvc1
    uid: 30374d35-3ee5-11e8-aa63-fa163ec22fff
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: glusterfs-storage
    volumeName: pvc-30374d35-3ee5-11e8-aa63-fa163ec22fff
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 10Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


StorageClass Dump (if StorageClass used by PV/PVC):
# oc get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: 2018-04-13T02:26:52Z
    name: glusterfs-storage
    namespace: ""
    resourceVersion: "3222"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/glusterfs-storage
    uid: 20705d38-3ec2-11e8-8ada-fa163ec22fff
  parameters:
    resturl: http://heketi-storage-glusterfs.apps.0412-oy2.qe.rhcloud.com
    restuser: admin
    secretName: heketi-storage-admin-secret
    secretNamespace: glusterfs
  provisioner: kubernetes.io/glusterfs
  reclaimPolicy: Delete
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


Additional info:
The mounter pods are running on all nodes:
# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
mounter-8xdg9   1/1       Running   0          2h
mounter-9ltcm   1/1       Running   0          2h
mounter-cvzq7   1/1       Running   0          2h
mounter-dtxx6   1/1       Running   0          2h
mounter-ghr9j   1/1       Running   0          2h
mounter-lqx88   1/1       Running   0          2h
mounter-rw46m   1/1       Running   0          2h
mounter-wnjwh   1/1       Running   0          1h

And on the node, the registration JSON file is present:
# cat /var/lib/kubelet/plugin-containers/kubernetes.io~glusterfs.json 
{
    "podNamespace": "k8s-mount",
    "podName": "mounter-8xdg9",
    "podUID": "b529c75f-3edf-11e8-8eb3-fa163ec22fff",
    "containerName": "mounter"
}

# oc logs mounter-8xdg9
+ '[' '!' -e /etc/iscsi ']'
+ '[' '!' -e /etc/iscsi/initiatorname.iscsi ']'
+ '[' '!' -e /etc/iscsi/iscsid.conf ']'
+ SUPPORTED_CONTAINERS='kubernetes.io~glusterfs kubernetes.io~nfs kubernetes.io~ceph kubernetes.io~rbd kubernetes.io~cephfs'
+ iscsid
Could not start iscsid, assuming it runs on the host.
+ '[' 1 -eq 0 ']'
+ echo 'Could not start iscsid, assuming it runs on the host.'
+ rpcbind
+ rpc.statd
+ rpc.mountd
+ register-mount-container kubernetes.io~glusterfs kubernetes.io~nfs kubernetes.io~ceph kubernetes.io~rbd kubernetes.io~cephfs
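
For reference, register-mount-container presumably writes one registration file per supported plugin, using the MOUNT_POD_* downward-API variables visible in the DaemonSet dump in comment 6. A minimal sketch of the equivalent, assuming that behavior:

mkdir -p /var/lib/kubelet/plugin-containers
for plugin in $SUPPORTED_CONTAINERS; do
    cat > "/var/lib/kubelet/plugin-containers/${plugin}.json" <<EOF
{
    "podNamespace": "${MOUNT_POD_NAMESPACE}",
    "podName": "${MOUNT_POD_NAME}",
    "podUID": "${MOUNT_POD_UID}",
    "containerName": "${MOUNT_CONTAINER_NAME}"
}
EOF
done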

Comment 5 Niels de Vos 2018-06-28 11:43:16 UTC
Wenqi, could you try again with this change?

The yaml file used in comment #0 needs to be adjusted for OpenShift: it should give /var/lib/origin/openshift.local.volumes/ to the mount container instead of /var/lib/kubelet.

Comment 6 Wenqi He 2018-06-29 08:06:55 UTC
Updated the DaemonSet to use /var/lib/origin/openshift.local.volumes/, but it still does not work.

Pod under k8s-mount project:
# oc get pods mounter-h4grg -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: 2018-06-29T06:24:43Z
  generateName: mounter-
  labels:
    controller-revision-hash: "3374779226"
    name: mounter
    pod-template-generation: "1"
  name: mounter-h4grg
  namespace: k8s-mount
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: mounter
    uid: 1c9ce29b-7b65-11e8-a5fc-fa163e83f6cc
  resourceVersion: "30821"
  selfLink: /api/v1/namespaces/k8s-mount/pods/mounter-h4grg
  uid: 1ca44702-7b65-11e8-a5fc-fa163e83f6cc
spec:
  containers:
  - env:
    - name: MOUNT_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: MOUNT_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MOUNT_POD_UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    - name: MOUNT_CONTAINER_NAME
      value: mounter
    image: jsafrane/mounter-daemonset:latest
    imagePullPolicy: IfNotPresent
    name: mounter
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/origin/openshift.local.volumes/
      mountPropagation: Bidirectional
      name: kubelet
    - mountPath: /sys
      name: sys
    - mountPath: /dev
      name: dev
    - mountPath: /etc/iscsi
      name: iscsi
    - mountPath: /run/lock/iscsi
      name: iscsilock
    - mountPath: /lib/modules
      name: modules
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-7wl4q
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  imagePullSecrets:
  - name: default-dockercfg-g4b2f
  nodeName: qe-wehe-node-1
  nodeSelector:
    node-role.kubernetes.io/compute: "true"
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - hostPath:
      path: /var/lib/origin/openshift.local.volumes/
      type: ""
    name: kubelet
  - hostPath:
      path: /dev
      type: ""
    name: dev
  - hostPath:
      path: /sys
      type: ""
    name: sys
  - hostPath:
      path: /etc/iscsi
      type: ""
    name: iscsi
  - hostPath:
      path: /run/lock/iscsi
      type: ""
    name: iscsilock
  - hostPath:
      path: /lib/modules
      type: ""
    name: modules
  - name: default-token-7wl4q
    secret:
      defaultMode: 420
      secretName: default-token-7wl4q
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:24:43Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:25:08Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-06-29T06:24:43Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://dd6e8c3e19e17147f7f7d9d27204ef4fea68461d88cef106a7d79aecf7a52720
    image: docker.io/jsafrane/mounter-daemonset:latest
    imageID: docker-pullable://docker.io/jsafrane/mounter-daemonset@sha256:2f5644c866deef8ce6ad5006f74780f8659dc020e54c1129af4e2b7343009d6d
    lastState: {}
    name: mounter
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-06-29T06:25:07Z
  hostIP: 172.16.120.80
  phase: Running
  podIP: 172.16.120.80
  qosClass: BestEffort
  startTime: 2018-06-29T06:24:43Z

The same error still occurs:

Events:
  Type     Reason       Age   From                     Message
  ----     ------       ----  ----                     -------
  Normal   Scheduled    47m   default-scheduler        Successfully assigned gluster to qe-wehe-node-1
  Warning  FailedMount  47m   kubelet, qe-wehe-node-1  MountVolume.SetUp failed for volume "pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/a602a87f-7b6b-11e8-a5fc-fa163e83f6cc/volumes/kubernetes.io~glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc --scope -- mount -t glusterfs -o auto_unmount,log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc/gluster-glusterfs.log,backup-volfile-servers=172.16.120.55:172.16.120.56:172.16.120.93 172.16.120.55:vol_ac9133fb2e5395f0130ca752009d3be3 /var/lib/origin/openshift.local.volumes/pods/a602a87f-7b6b-11e8-a5fc-fa163e83f6cc/volumes/kubernetes.io~glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc
Output: Running scope as unit run-11018.scope.
mount: unknown filesystem type 'glusterfs'

And what's worse, there is no kubernetes.io~glusterfs.json file under /var/lib/kubelet/plugin-containers/ on the host.

The only copy I can find is inside the container's rootfs:
# cat /var/lib/docker/devicemapper/mnt/143c577a046d36ca6ae3c411ed002e2e1893d3eb3955ce4b6e9e5408617533a1/rootfs/var/lib/kubelet/plugin-containers/kubernetes.io~glusterfs.json
{
    "podNamespace": "k8s-mount",
    "podName": "mounter-h4grg",
    "podUID": "1ca44702-7b65-11e8-a5fc-fa163e83f6cc",
    "containerName": "mounter"
}
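
A possible explanation (my assumption, not confirmed here): once the kubelet volume maps /var/lib/origin/openshift.local.volumes/ instead of /var/lib/kubelet, nothing carries /var/lib/kubelet/plugin-containers back to the host, so the registration file stays inside the container's rootfs. A hedged sketch of an extra volume that would surface it on the host (the kubelet may instead expect the file under its own root directory, in which case a different hostPath is needed):

    # added to the mounter container's volumeMounts (hypothetical):
    - mountPath: /var/lib/kubelet/plugin-containers
      name: plugin-containers
    # and to the pod's volumes:
  - hostPath:
      path: /var/lib/kubelet/plugin-containers
      type: DirectoryOrCreate
    name: plugin-containers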

Comment 7 Niels de Vos 2018-06-29 08:50:59 UTC
The log for the mount process should be stored as /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-a0201c16-7b6b-11e8-a5fc-fa163e83f6cc/gluster-glusterfs.log in the mounter pod. Can you verify that the path is available on both the host and inside the pod?

If the path does not exist, up to which parent directory does it exist?

Comment 8 Wenqi He 2018-07-03 10:06:10 UTC
# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS        REASON    AGE
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016   1Gi        RWO            Delete           Bound     kni-o/pvc1           glusterfs-storage             1m

Inside mounter pod:

# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
mounter-r6mxg   1/1       Running   0          15m
# oc exec -it mounter-r6mxg bash
[root@ip-172-18-4-7 /]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016
[root@ip-172-18-4-7 /]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-fa58ed14-7ea7-11e8-896b-0e3681792016/
[root@ip-172-18-4-7 /]# exit

On the host:
[root@ip-172-18-4-7 ~]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-fa58ed14-7ea7-11e8-896b-0e3681792016/
[root@ip-172-18-4-7 ~]# ls /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/
pvc-fa58ed14-7ea7-11e8-896b-0e3681792016

So both are available.

Comment 9 Niels de Vos 2019-02-07 11:14:19 UTC
MountContainers will not be supported with the OCS 3.x product. When CSI is used, similar functionality is available and consumed automatically.

