Created attachment 1894609 [details]
syslog message

Description of problem:
Two RHEL 7.8 hosts are configured as an HA active/passive NFS cluster. Both hosts, host1 and host2, mounted the NFS share volume and started IO. A controller failover was then triggered on the storage array side, which led to an IO drop and a host reboot.

How reproducible:
Always

Steps to Reproduce:
1. Configure the hosts in an NFS cluster
2. Start IO
3. Trigger a controller failover on the Nimble array

Actual results:
IO drops and the host reboots.

Expected results:
IO should continue to run without disruption.

Additional info:
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: State transition S_IDLE -> S_POLICY_ENGINE
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: Scheduling shutdown of node iwf-dl360-18
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: * Shutdown iwf-dl360-18
Mar 27 14:10:55 iwf-dl360-17 pengine[3605]: notice: Calculated transition 6, saving inputs in /var/lib/pacemaker/pengine/pe-input-287.bz2
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: Transition 6 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-287.bz2): Complete
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: do_shutdown of peer iwf-dl360-18 is complete
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Removing all iwf-dl360-18 attributes for peer loss
Mar 27 14:10:55 iwf-dl360-17 attrd[3604]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache
Mar 27 14:10:55 iwf-dl360-17 stonith-ng[3602]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 stonith-ng[3602]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache
Mar 27 14:10:55 iwf-dl360-17 cib[3601]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 cib[3601]: notice: Purged 1 peer with id=2 and/or uname=iwf-dl360-18 from the membership cache
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [TOTEM ] A new membership (10.201.14.83:547) was formed. Members left: 2
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [CPG   ] downlist left_list: 1 received
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [QUORUM] Members[1]: 1
Mar 27 14:10:55 iwf-dl360-17 corosync[3174]: [MAIN  ] Completed service synchronization, ready to provide service.
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 pacemakerd[3591]: notice: Node iwf-dl360-18 state is now lost
Mar 27 14:10:55 iwf-dl360-17 crmd[3606]: notice: do_shutdown of peer iwf-dl360-18 is complete
Mar 27 14:11:04 iwf-dl360-17 systemd: Reloading.
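For reference, the reproduction setup (steps 1-3) might look roughly like the following RHEL 7 `pcs` commands. This is a sketch only: the node names match the logs above, but the resource and group names, the backing device path, the export directory, and the virtual IP are hypothetical placeholders, not taken from this report.

```shell
# Sketch of an active/passive NFS Pacemaker cluster similar to the one described.
# Resource names, device path, export path, and the VIP below are assumptions.
pcs cluster auth iwf-dl360-17 iwf-dl360-18
pcs cluster setup --name nfs_cluster iwf-dl360-17 iwf-dl360-18
pcs cluster start --all

# Filesystem on the array-backed LUN, NFS server, and floating IP in one group
# so they fail over together between the two nodes.
pcs resource create nfs_fs Filesystem device=/dev/mapper/nimble_lun \
    directory=/export fstype=xfs --group nfs_group
pcs resource create nfs_daemon nfsserver nfs_shared_infodir=/export/nfsinfo \
    --group nfs_group
pcs resource create nfs_ip IPaddr2 ip=10.201.14.90 cidr_netmask=24 \
    --group nfs_group
```

With the clients mounted and IO running (step 2), the controller failover is then initiated on the array side (step 3); the expectation is that the multipath layer rides out the failover and the cluster does not shut a node down.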
*** This bug has been marked as a duplicate of bug 2103867 ***