
Bug 2103862

Summary: Host in cluster rebooting upon array side controller failover
Product: Red Hat Enterprise Linux 7
Reporter: Govind Kulkarni <govind.kulkarni>
Component: corosync
Assignee: Jan Friesse <jfriesse>
Status: CLOSED DUPLICATE
QA Contact: cluster-qe <cluster-qe>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 7.8
CC: ccaulfie, cluster-maint
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-07-07 07:08:22 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: syslog message (no flags)

Description Govind Kulkarni 2022-07-05 06:37:16 UTC
Created attachment 1894609 [details]
syslog message

Description of problem:

Two RHEL 7.8 hosts are configured as an active/passive HA NFS cluster.
Both hosts, host1 and host2, mounted the NFS share volume and started IO.
A controller failover was then triggered on the storage array side, which led to an IO drop and a host reboot.
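
For reference, a minimal sketch of this kind of active/passive setup in RHEL 7 pcs syntax. The cluster name, hostnames, export path, and mount point are hypothetical placeholders, not taken from this report:

  # Hypothetical two-node cluster; all names and paths are placeholders.
  pcs cluster auth host1 host2
  pcs cluster setup --name nfscluster host1 host2
  pcs cluster start --all
  # Filesystem resource that mounts the NFS share on the active node.
  pcs resource create nfs_share Filesystem \
      device="array-vip:/export/share" directory="/mnt/share" fstype="nfs" \
      --group nfsgroup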

How reproducible:
Always

Steps to Reproduce:
1. Configure the hosts in an NFS cluster.
2. Start IO (see the illustrative commands below).
3. Trigger a controller failover on the Nimble array.
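
A hypothetical way to generate the IO in step 2 against the mounted share; the mount point and file name are placeholders:

  # Write load through the NFS mount; direct IO bypasses the page cache,
  # so an array-side stall becomes visible immediately.
  dd if=/dev/zero of=/mnt/share/iotest bs=1M count=4096 oflag=direct
  # Read load back from the same file.
  dd if=/mnt/share/iotest of=/dev/null bs=1M iflag=direct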

Actual results:
IO drops and host reboots

Expected results:
IO should continue to run without disruption.

Additional info:
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: State transition S_IDLE -> S_POLICY_ENGINE
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: Scheduling shutdown of node iwf-dl360-18
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: * Shutdown iwf-dl360-18
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: Calculated transition 6, saving inputs in /var/lib/pacemaker/pengine/pe-input-287.bz2
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: Transition 6 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-287.bz2): Complete
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: do_shutdown of peer iwf-dl360-18 is complete
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Removing all iwf-dl360-18 attributes for peer loss
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache
Mar 27 14:10:55 iwf-dl360-17 stonith-ng[3602]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 stonith-ng[3602]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache
Mar 27 14:10:55 iwf-dl360-17 cib[3601]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 cib[3601]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache

Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [TOTEM ] A new membership (10.201.14.83:547) was formed. Members left: 2
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [CPG ] downlist left_list: 1 received
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [QUORUM] Members[1]: 1
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 pacemakerd[3591]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: do_shutdown of peer iwf-dl360-18 is complete
Mar 27 14:11:04 iwf-dl360-17 systemd: Reloading.
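
The [TOTEM] and [QUORUM] lines above show node 2 leaving the membership, after which only member 1 remains. When diagnosing an event like this, the surviving node's view of membership and quorum can be inspected with the standard tools, for example:

  # Show quorum status and the current membership list.
  corosync-quorumtool -s
  # Pacemaker's view of node and resource state.
  pcs status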

Comment 3 Jan Friesse 2022-07-07 07:08:22 UTC

*** This bug has been marked as a duplicate of bug 2103867 ***