Bug 2280332

Summary: nvmeof GW exits after failing to perform a full startup when brought back post failover, leading to WAIT_FAILBACK_PREPARED ana_state
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Rahul Lepakshi <rlepaksh>
Component: NVMeOF
Assignee: Aviv Caro <aviv.caro>
Status: CLOSED ERRATA
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: unspecified
Version: 7.1
CC: acaro, aviv.caro, cephqe-warriors, mmurthy, tserlin, vdas
Target Milestone: ---
Keywords: BetaBlocker, TestBlocker
Target Release: 7.1
Flags: rlepaksh: needinfo-
       rlepaksh: needinfo-
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-18.2.1-176.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-06-13 14:32:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Rahul Lepakshi 2024-05-14 08:27:30 UTC
Description of problem:
After failing over the GW and bringing it back up, the GW does not start up with the latest state from the OMAP: it gets stuck even listing subsystems and eventually exits.

[root@argo023 nvmeof-client.nvmeof.nvmeof_pool.argo023.qmbpxi]# nvmeof subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤══════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │   Serial │ Controller IDs   │   Namespace │          Max │
│           │                            │            │   Number │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪══════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │        1 │ 2041-4080        │         93 │         2048 │
├───────────┼────────────────────────────┼────────────┼──────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │        2 │ 2041-4080        │         95 │         2048 │
╘═══════════╧════════════════════════════╧════════════╧══════════╧══════════════════╧═════════════╧══════════════╛
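
For reference, the ANA group state mentioned in the summary (WAIT_FAILBACK_PREPARED) can be inspected outside the subsystem listing; a minimal sketch, assuming this build exposes the ceph nvme-gw mon command and that the pool is nvmeof_pool with the default (empty) gateway group — both assumptions inferred from the daemon name above:

# monitor-side view of each registered GW and its ANA group states
ceph nvme-gw show nvmeof_pool ''

# gateway-side view, run from the nvmeof CLI container on the GW node
nvmeof gw info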


Version-Release number of selected component (if applicable):


How reproducible: 2/2


Steps to Reproduce:
1. Deploy a Ceph cluster and the nvmeof service.
2. Perform a failover, then bring the failed GW back up for failback (a minimal command sketch follows these steps).
3. Observe that the GW does not fully start up to load all GW components from the OMAP, and that no CLI command output is accurate.
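
A minimal sketch of driving the failover/failback from cephadm, assuming the gateway daemon name is nvmeof.nvmeof_pool.argo023.qmbpxi (inferred from the prompt above, so treat it as a placeholder) and that stopping the daemon is how the failover is triggered in this setup:

# list the nvmeof gateway daemons and pick one to fail over
ceph orch ps --daemon-type nvmeof

# failover: stop one GW; the surviving GW should take over its ANA groups
ceph orch daemon stop nvmeof.nvmeof_pool.argo023.qmbpxi

# failback: bring the stopped GW back and watch it rejoin
ceph orch daemon start nvmeof.nvmeof_pool.argo023.qmbpxi
ceph orch ps --daemon-type nvmeof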

Actual results: The GW does not perform a full startup correctly and ends up in a state where it cannot list subsystems or namespaces.


Expected results: The GW performs a full startup correctly after being brought back.


Additional info:

Comment 1 RHEL Program Management 2024-05-14 08:27:40 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Aviv Caro 2024-05-14 08:40:25 UTC
Issue is understood. Leonid is preparing a fix.

Comment 3 Rahul Lepakshi 2024-05-14 15:22:39 UTC
@aviv.caro I am terming this a blocker because, in a 2-GW config, if the other GW also comes down for some reason, there is no gateway left to handle IO and maintain namespaces in the Ceph cluster. We hit data unavailability in this case.

Comment 8 Rahul Lepakshi 2024-05-28 05:03:25 UTC
Not seeing this issue on recent builds, but I have an observation on a scale cluster: the GW takes at least 2 minutes to load namespaces after it reaches the ACTIVE/STANDBY state. The repeated listings below show the namespace count still climbing until it settles at 200 per subsystem; a polling sketch follows the output.


[root@argo023 ~]# nvmeof subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph76593830561176 │ 2041-4080        │         179 │          400 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph50770207011824 │ 2041-4080        │         182 │          400 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛
[root@argo023 ~]# nvmeof subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph76593830561176 │ 2041-4080        │         190 │          400 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph50770207011824 │ 2041-4080        │         191 │          400 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛
[root@argo023 ~]# nvmeof subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph76593830561176 │ 2041-4080        │         196 │          400 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph50770207011824 │ 2041-4080        │         197 │          400 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛
[root@argo023 ~]# nvmeof subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph76593830561176 │ 2041-4080        │         200 │          400 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph50770207011824 │ 2041-4080        │         200 │          400 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛
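
The namespace load progress can be tracked by polling the same CLI until the counts stop changing; a minimal sketch using only the nvmeof command shown above (the 10-second interval and the grep on "cnode" are arbitrary choices):

# poll the subsystem listing until the Namespace Count column stops increasing
while true; do
    date
    nvmeof subsystem list | grep cnode
    sleep 10
done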

Comment 9 errata-xmlrpc 2024-06-13 14:32:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925