Bug 2235395 - Wrong CIDR gets blocklisted and pod does not get mounted upon fencing when having gateway in between
Summary: Wrong CIDR gets blocklisted and pod does not get mounted upon fencing when having gateway in between
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Subham Rai
QA Contact: suchita
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2023-08-28 15:59 UTC by Joy John Pinto
Modified: 2023-11-08 18:55 UTC
CC List: 5 users

Fixed In Version: 4.14.0-123
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 18:54:25 UTC
Embargoed:


Links
- GitHub red-hat-storage/rook pull 515 (open): Bug 2235395: core: return valid CIDR ip, last updated 2023-08-29 09:32:13 UTC
- Red Hat Product Errata RHSA-2023:6832, last updated 2023-11-08 18:55:03 UTC

Description Joy John Pinto 2023-08-28 15:59:52 UTC
Description of problem (please be as detailed as possible and provide log snippets):
The wrong CIDR gets blocklisted, and the pod does not get mounted after fencing when there is a gateway in between.
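
As a quick way to illustrate the mismatch (not part of the original report; node and resource names are placeholders), the address Kubernetes reports for the failed node can be compared with the CIDRs recorded in the csi-addons NetworkFence CR. When the bug hits, the recorded entry is not a valid CIDR for the node address; it includes a port, as seen in the Steps to Reproduce output.
```
# InternalIP that Kubernetes reports for the failed node (placeholder name)
oc get node <node-name> -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

# CIDRs the NetworkFence CR asked Ceph to blocklist (spec.cidrs assumed from the csi-addons CRD)
oc get networkfences.csiaddons.openshift.io <node-name> -o jsonpath='{.spec.cidrs}'
```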

Version of all relevant components (if applicable):
OCP 4.14.0-0.nightly-2023-08-11-055332
ODF 4.14.0-115


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
NA

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Install OpenShift Data Foundation and deploy an app pod on the same node as the rook-ceph operator pod.
2. Shut down the node on which the RBD RWO pod is running.
3. Once the node is down, add the out-of-service taint:
```
oc taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```
Wait for some time (if the application pod and the rook operator are on the same node, wait a bit longer), then check the NetworkFence CR and make sure the fence state is Fenced and the result is Succeeded (a sketch for checking the Ceph-side blocklist follows these steps):
```
oc get networkfences.csiaddons.openshift.io
NAME           DRIVER                       CIDRS                    FENCESTATE   AGE   RESULT
minikube-m03   rook-ceph.rbd.csi.ceph.com   ["92.168.39.187:0/32"]   Fenced       96s   Succeeded
```
4. Wait for the pod to come up on the new node.
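
A minimal sketch for confirming what actually got blocklisted on the Ceph side, assuming the rook-ceph tools pod has been enabled in the openshift-storage namespace (the deployment name below is the usual default, not taken from this report):
```
# Run the blocklist query inside the rook-ceph tools pod (deployment name assumed)
# The fenced node should appear as a valid CIDR, not as an address with a port
# appended such as 92.168.39.187:0/32
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd blocklist ls
```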


Actual results:
The pod stays stuck in the ContainerCreating state (waited for more than one hour).

Expected results:
The pod should come up on the new node.

Additional info:
Event log (generated from kubelet on compute-1, 34 times in the last hour):
MountVolume.MountDevice failed for volume "pvc-703328fc-2637-435a-918c-c61baf96ab8b" : rpc error: code = Internal desc = error generating volume 0001-0011-openshift-storage-0000000000000001-a059cc2e-e5f5-4094-ac54-278006bde5a9: rados: ret=-108, Cannot send after transport endpoint shutdown
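
The -108 return from rados maps to ESHUTDOWN ("Cannot send after transport endpoint shutdown"), which typically means the RBD client connection itself has been blocklisted. A generic way to gather more detail on the stuck mount (pod and namespace names are placeholders, not from this report):
```
# Inspect the stuck pod's events for the repeated MountDevice failures
oc describe pod <app-pod-name> -n <app-namespace>

# Recent events in the namespace, ordered by time
oc get events -n <app-namespace> --sort-by=.lastTimestamp
```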

Comment 7 Subham Rai 2023-09-27 02:33:40 UTC
Already provided the info; removing the needinfo on me.

Comment 11 errata-xmlrpc 2023-11-08 18:54:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

