Bug 2265124 - [4.15] Move cephFS fencing under a new flag to trigger networkFence
Summary: [4.15] Move cephFS fencing under a new flag to trigger networkFence
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: Subham Rai
QA Contact: Joy John Pinto
URL:
Whiteboard:
Depends On: 2259668 2262070
Blocks:
Reported: 2024-02-20 14:51 UTC by Subham Rai
Modified: 2024-03-19 15:33 UTC (History)
CC: 5 users

Fixed In Version: 4.15.0-149
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-19 15:32:57 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage rook pull 578 0 None open Bug 2265124: csi: remove cephFs networkFence code temporarily 2024-02-22 07:14:37 UTC
Github rook rook pull 13801 0 None open csi: disable cephFs network Fencing temporarily 2024-02-21 11:13:35 UTC
Red Hat Product Errata RHSA-2024:1383 0 None None None 2024-03-19 15:33:01 UTC

Comment 7 Joy John Pinto 2024-02-28 10:37:40 UTC
Verified with OCP 4.15(4.15.0-0.nightly-2024-02-27-181650) and ODF 4.15.0-150

Created a CephFS deployment pod; after tainting the node with 'oc adm taint nodes <node> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute', no NetworkFence is created:

[jopinto@jopinto ceph-csi]$ oc get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
logwriter-cephfs-76fbfb679-srdm2   1/1     Running   0          13s   10.129.2.41   compute-1   <none>           <none>

[jopinto@jopinto ceph-csi]$ oc adm taint nodes compute-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
node/compute-1 tainted
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
No resources found
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
No resources found
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
No resources found
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
No resources found
[jopinto@jopinto ceph-csi]$ oc adm taint nodes compute-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
node/compute-1 untainted


Also tried the same scenario with an RBD deployment pod: upon tainting the node, a NetworkFence is created, and upon untainting it is removed.

[jopinto@jopinto ceph-csi]$ oc get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
logwriter-cephfs-76fbfb679-7r5kn   1/1     Running   0          23m     10.131.0.36   compute-0   <none>           <none>
logwriter-rbd-new-0                1/1     Running   0          5m55s   10.129.2.51   compute-1   <none>           <none>
[jopinto@jopinto ceph-csi]$ oc adm taint nodes compute-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
node/compute-1 tainted
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
NAME                              DRIVER                               CIDRS               FENCESTATE   AGE   RESULT
compute-1-rbd-openshift-storage   openshift-storage.rbd.csi.ceph.com   ["100.64.0.7/32"]   Fenced       19m   Succeeded
[jopinto@jopinto ceph-csi]$ oc adm taint nodes compute-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
node/compute-1 untainted
[jopinto@jopinto ceph-csi]$ oc get networkfences.csiaddons.openshift.io  
No resources found
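For reference, the RBD NetworkFence listed above can be reconstructed as a manifest. This is a hedged sketch based only on the columns shown by `oc get` (name, driver, CIDRs, fence state) and the csi-addons `NetworkFence` v1alpha1 API; the `secret` and `parameters` values are placeholders, not taken from this bug.

```
# Sketch of the auto-created NetworkFence CR for the RBD case above.
# Values for secret/parameters are illustrative placeholders.
apiVersion: csiaddons.openshift.io/v1alpha1
kind: NetworkFence
metadata:
  name: compute-1-rbd-openshift-storage
spec:
  driver: openshift-storage.rbd.csi.ceph.com
  fenceState: Fenced
  cidrs:
    - 100.64.0.7/32
  secret:
    name: <csi-provisioner-secret>        # placeholder
    namespace: openshift-storage
  parameters: {}                          # placeholder
```

The fix verified in this bug means no such CR is created for CephFS-backed workloads when the node is tainted; only RBD volumes trigger fencing.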

Comment 10 errata-xmlrpc 2024-03-19 15:32:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

