Bug 1613902 - RBD bind mount on RHEL does not transitively acquire _netdev from original mount
Summary: RBD bind mount on RHEL does not transitively acquire _netdev from original mount
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.6.z
Assignee: Hemant Kumar
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks: 1631449 1640746
 
Reported: 2018-08-08 13:56 UTC by Birol Bilgin
Modified: 2020-05-20 19:52 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1631449
Environment:
Last Closed: 2019-01-22 15:29:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID                 Private  Priority  Status  Summary  Last Updated
Red Hat Bugzilla 1487316  1        None      None    None     2022-03-13 14:25:17 UTC

Comment 16 Hemant Kumar 2018-08-31 15:11:51 UTC
@Ilya, can you retry the same thing, creating a bind mount of "/mnt" somewhere else with the "_netdev" option?

Something like:

# mount -o _netdev /dev/rbd0 /mnt
# mount -o bind,_netdev /mnt /foo

And then run shutdown/poweroff? It appears to me that creating a bind mount consistently reproduces this problem.
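
To confirm whether the bind mount picked up the option at all: "_netdev" is a userspace mount option, so it is recorded in /run/mount/utab rather than /proc/self/mounts, and findmnt merges both sources. A minimal check, assuming an RBD device at /dev/rbd0 and scratch mount points /mnt and /foo (all illustrative):

# mount -o _netdev /dev/rbd0 /mnt
# mount --bind /mnt /foo
# findmnt -o TARGET,OPTIONS /mnt
# findmnt -o TARGET,OPTIONS /foo
# grep -E 'mnt|foo' /run/mount/utab

If the OPTIONS column for /foo lacks "_netdev" while /mnt shows it, the bind mount did not inherit the option, which would explain why systemd does not order the bind mount for unmount before network shutdown.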

Comment 25 Hemant Kumar 2018-08-31 18:38:09 UTC
Moving this to systemd. Let's see where this one goes.

Comment 37 Hemant Kumar 2018-11-28 15:20:47 UTC
The PR is merged.

Comment 39 Wenqi He 2019-01-22 07:54:48 UTC
Tested on the version below:
openshift v3.6.173.0.147
kubernetes v1.6.1+5115d708d7

# uname -a
Linux xxx 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.6 (Maipo)

# oc get sc
NAME                 TYPE
cephrbdprovisioner   kubernetes.io/rbd
# oc get pvc
NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS         AGE
cephrbdprovisioner   Bound     pvc-ed457136-1e0d-11e9-a26a-fa163e2b34ed   1Gi        RWO           cephrbdprovisioner   25m
pvc                  Bound     pvc-9f5054de-1e0f-11e9-a26a-fa163e2b34ed   1Gi        RWO           cephrbdprovisioner   13m
pvc1                 Bound     pvc-abda841b-1e0f-11e9-a26a-fa163e2b34ed   1Gi        RWO           cephrbdprovisioner   13m
# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
cephpod                    1/1       Running   0          12m
cephpod1                   1/1       Running   0          11m
cephpod2                   1/1       Running   0          11m

And on the node:
# mount | grep rbd
/dev/rbd0 on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/rbd/rbd/rbd-image-kubernetes-dynamic-pvc-ed47d478-1e0d-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/rbd0 on /var/lib/origin/openshift.local.volumes/pods/e667d4ba-1e0f-11e9-a26a-fa163e2b34ed/volumes/kubernetes.io~rbd/pvc-ed457136-1e0d-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/rbd1 on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/rbd/rbd/rbd-image-kubernetes-dynamic-pvc-9f52594e-1e0f-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/rbd1 on /var/lib/origin/openshift.local.volumes/pods/f8c6e84b-1e0f-11e9-a26a-fa163e2b34ed/volumes/kubernetes.io~rbd/pvc-9f5054de-1e0f-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/rbd2 on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/rbd/rbd/rbd-image-kubernetes-dynamic-pvc-abdd4923-1e0f-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
/dev/rbd2 on /var/lib/origin/openshift.local.volumes/pods/0a0d17d4-1e10-11e9-a26a-fa163e2b34ed/volumes/kubernetes.io~rbd/pvc-abda841b-1e0f-11e9-a26a-fa163e2b34ed type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
# reboot

The server never came back, and the mounts above do not have the "_netdev" option. Do we have the 3.6 PR for fixing this?
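
For completeness: since "_netdev" is a userspace option that libmount records in /run/mount/utab, the node can also be checked as below (the target path is copied from the plugin mount above; adjust per volume):

# findmnt -o TARGET,OPTIONS /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/rbd/rbd/rbd-image-kubernetes-dynamic-pvc-ed47d478-1e0d-11e9-a26a-fa163e2b34ed
# grep _netdev /run/mount/utab

If neither shows "_netdev", the kubelet on this build is not passing the option when mounting the RBD image.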

