Bug 1737389 - CSI: MountOption in storage class does not take effect for ebs volume
Summary: CSI: MountOption in storage class does not take effect for ebs volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Fabio Bertinatto
QA Contact: Chao Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-05 09:40 UTC by Chao Yang
Modified: 2019-10-16 06:34 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:34:43 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub openshift/origin pull 23555 (closed): Bug 1737389: UPSTREAM: 80191: Add passthrough for MountOptions for NodeStageVolume for CSI (last updated 2020-11-19 20:58:19 UTC)
- Red Hat Product Errata RHBA-2019:2922 (last updated 2019-10-16 06:34:53 UTC)

Description Chao Yang 2019-08-05 09:40:06 UTC
Description of problem:
mountOptions set in the StorageClass does not take effect for an EBS volume provisioned by the AWS EBS CSI driver (ebs.csi.aws.com).

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-31-162901

How reproducible:
Always

Steps to Reproduce:
1. Create the following StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-mount
parameters:
  type: gp2
  fsType: xfs
  csi.storage.k8s.io/provisioner-secret-name: aws-creds
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - "discard"
2. Create a PVC and a pod that uses it (example manifests below, after the mount output).
3. Wait for the pod to be running.
4. oc rsh mypod-mount and check the mount options:

/dev/nvme3n1 on /tmp type xfs (rw,seclabel,relatime,attr2,inode64,noquota)
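For reference, minimal PVC and pod manifests of the kind used in step 2 might look like the following; the PVC name, image, and requested size are illustrative assumptions, not taken from this report:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-mount   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sc-mount
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod-mount
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi   # any image with a shell works
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /tmp   # matches the mount point checked above
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc-mount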


Actual results:
The discard option does not appear when checking the mount options inside the pod.

Expected results:
"discard" should be added when mount volumes

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 Jan Safranek 2019-08-06 09:05:08 UTC
I checked that external-provisioner passes the mount options to provisioned PVs:

- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
    name: pvc-ee2b2484-b827-11e9-8a50-068fac16b29a
  spec:
    csi:
      driver: ebs.csi.aws.com
      fsType: ext4
      volumeAttributes:
        fstype: ""
        storage.kubernetes.io/csiProvisionerIdentity: 1565017758875-8081-ebs.csi.aws.com
      volumeHandle: vol-07272e50cd39021ab
    mountOptions:
    - discard
  ...


I noticed that NodeStageVolume is called without mount options:

I0806 08:56:09.189379       1 node.go:93] NodeStageVolume: called with args {VolumeId:vol-07272e50cd39021ab PublishContext:map[devicePath:/dev/xvdba] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ee2b2484-b827-11e9-8a50-068fac16b29a/globalmount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[fstype: storage.kubernetes.io/csiProvisionerIdentity:1565017758875-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}


NodePublish is called with mount options, but it's too late:
I0806 08:56:09.465438       1 node.go:267] NodePublishVolume: called with args {VolumeId:vol-07272e50cd39021ab PublishContext:map[devicePath:/dev/xvdba] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ee2b2484-b827-11e9-8a50-068fac16b29a/globalmount TargetPath:/var/lib/kubelet/pods/03c8c3a9-b828-11e9-bca8-02c1d4176b00/volumes/kubernetes.io~csi/pvc-ee2b2484-b827-11e9-8a50-068fac16b29a/mount VolumeCapability:mount:<fs_type:"ext4" mount_flags:"discard" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[fstype: storage.kubernetes.io/csiProvisionerIdentity:1565017758875-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
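For context: in the CSI spec, mount options reach the driver only through VolumeCapability.mount.mount_flags, for both NodeStageVolume and NodePublishVolume. A minimal sketch of the driver-side view, using the CSI Go bindings (the Driver type and the elided mount logic are placeholders, not the actual ebs-csi-driver code):

package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// Driver is a placeholder for a CSI node driver implementation.
type Driver struct{}

// NodeStageVolume sketches where a driver receives the StorageClass
// mountOptions: they arrive (if kubelet forwards them) in
// VolumeCapability.mount.mount_flags. In the log above this field is
// empty, which is why "discard" never reaches the staging mount.
func (d *Driver) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	mnt := req.GetVolumeCapability().GetMount()
	fsType := mnt.GetFsType()         // e.g. "ext4"
	mountFlags := mnt.GetMountFlags() // e.g. ["discard"]; empty in the log above

	// The driver is expected to pass mountFlags on to the actual mount
	// of req.GetStagingTargetPath(); mount logic elided in this sketch.
	_, _ = fsType, mountFlags
	return &csi.NodeStageVolumeResponse{}, nil
}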

Comment 2 Jan Safranek 2019-08-06 09:17:18 UTC
This seems to be fixed by https://github.com/kubernetes/kubernetes/pull/80191
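That PR makes kubelet forward the PV's spec.mountOptions into the VolumeCapability it sends with NodeStageVolume. Conceptually the change boils down to populating MountFlags when the capability is built; a simplified sketch (identifiers are illustrative, not the actual in-tree names):

package kubeletcsi

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// buildStageVolumeCapability sketches the passthrough added by the PR:
// kubelet copies the PV's spec.mountOptions into mount_flags when it
// builds the VolumeCapability for NodeStageVolume.
func buildStageVolumeCapability(fsType string, mountOptions []string, mode csi.VolumeCapability_AccessMode_Mode) *csi.VolumeCapability {
	return &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{
			Mount: &csi.VolumeCapability_MountVolume{
				FsType:     fsType,
				MountFlags: mountOptions, // previously left empty for NodeStageVolume
			},
		},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: mode},
	}
}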

Comment 4 Chao Yang 2019-08-20 05:51:00 UTC
Verification failed on 4.2.0-0.nightly-2019-08-19-201622.

Checking the mount options in the pod:
/dev/xvdbb on /tmp type ext4 (rw,seclabel,relatime)


I0820 05:43:51.923559       1 node.go:93] NodeStageVolume: called with args {VolumeId:vol-0bdb9d6a3bc3ce021 PublishContext:map[devicePath:/dev/xvdbb] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount VolumeCapability:mount:<fs_type:"ext4" mount_flags:"discard" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1566270480235-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0820 05:43:51.923691       1 node.go:148] NodeStageVolume: find device path /dev/xvdbb -> /dev/xvdbb
I0820 05:43:51.924271       1 node.go:183] NodeStageVolume: formatting /dev/xvdbb and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount with fstype ext4
I0820 05:43:51.924298       1 mount_linux.go:441] Checking for issues with fsck on disk: /dev/xvdbb
I0820 05:43:51.943455       1 mount_linux.go:454] `fsck` error fsck from util-linux 2.30.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/xvdbb
/dev/xvdbb: 
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

I0820 05:43:51.943479       1 mount_linux.go:460] Attempting to mount disk: ext4 /dev/xvdbb /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount
I0820 05:43:51.943498       1 mount_linux.go:142] Mounting cmd (mount) with arguments ([-t ext4 -o defaults /dev/xvdbb /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount])
E0820 05:43:51.950021       1 mount_linux.go:147] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/xvdbb /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount: wrong fs type, bad option, bad superblock on /dev/xvdbb, missing codepage or helper program, or other error.

I0820 05:43:51.950050       1 mount_linux.go:515] Attempting to determine if disk "/dev/xvdbb" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/xvdbb])
I0820 05:43:51.960530       1 mount_linux.go:518] Output: "", err: exit status 2
I0820 05:43:51.960557       1 mount_linux.go:489] Disk "/dev/xvdbb" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/xvdbb]
I0820 05:43:52.078850       1 mount_linux.go:493] Disk successfully formatted (mkfs): ext4 - /dev/xvdbb /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount
I0820 05:43:52.078893       1 mount_linux.go:142] Mounting cmd (mount) with arguments ([-t ext4 -o defaults /dev/xvdbb /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount])
I0820 05:43:52.088517       1 node.go:134] NodeStageVolume: volume="vol-0bdb9d6a3bc3ce021" operation finished
I0820 05:43:52.089999       1 node.go:339] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0820 05:43:52.097090       1 node.go:269] NodePublishVolume: called with args {VolumeId:vol-0bdb9d6a3bc3ce021 PublishContext:map[devicePath:/dev/xvdbb] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount TargetPath:/var/lib/kubelet/pods/74b0572e-c30d-11e9-9335-0254d8b891de/volumes/kubernetes.io~csi/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/mount VolumeCapability:mount:<fs_type:"ext4" mount_flags:"discard" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1566270480235-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0820 05:43:52.097169       1 node.go:439] NodePublishVolume: creating dir /var/lib/kubelet/pods/74b0572e-c30d-11e9-9335-0254d8b891de/volumes/kubernetes.io~csi/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/mount
I0820 05:43:52.097197       1 node.go:449] NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount at /var/lib/kubelet/pods/74b0572e-c30d-11e9-9335-0254d8b891de/volumes/kubernetes.io~csi/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/mount with option [bind discard] as fstype ext4
I0820 05:43:52.097221       1 mount_linux.go:142] Mounting cmd (mount) with arguments ([-t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount /var/lib/kubelet/pods/74b0572e-c30d-11e9-9335-0254d8b891de/volumes/kubernetes.io~csi/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/mount])
I0820 05:43:52.099048       1 mount_linux.go:142] Mounting cmd (mount) with arguments ([-t ext4 -o bind,remount,discard /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/globalmount /var/lib/kubelet/pods/74b0572e-c30d-11e9-9335-0254d8b891de/volumes/kubernetes.io~csi/pvc-694edcbb-c30d-11e9-9260-0af72f45a844/mount])
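Worth noting in the log above: NodeStageVolume now receives mount_flags:"discard" (so the kubelet-side passthrough works), but the staging mount is still run with -o defaults, meaning the driver drops the flags before invoking mount. The driver-side half of the fix has to hand the received flags to the mount helper; a hedged sketch using k8s.io/mount-utils (function and variable names are assumptions, not the ebs-csi-driver's actual code):

package driver

import (
	mountutils "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// stageVolume sketches the driver-side half: the mount_flags received in
// NodeStageVolume must be appended to the options handed to
// FormatAndMount, otherwise the staging mount runs with "-o defaults"
// as seen in the log above.
func stageVolume(devicePath, stagingPath, fsType string, mountFlags []string) error {
	mounter := &mountutils.SafeFormatAndMount{
		Interface: mountutils.New(""),
		Exec:      utilexec.New(),
	}
	// e.g. fsType "ext4" and mountFlags ["discard"] yield:
	//   mount -t ext4 -o defaults,discard <device> <staging path>
	options := append([]string{"defaults"}, mountFlags...)
	return mounter.FormatAndMount(devicePath, stagingPath, fsType, options)
}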

Comment 12 errata-xmlrpc 2019-10-16 06:34:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

