Bug 1730455
| Summary: | LVM-activate monitor can hang (timeout/unknown error) and cause resource to fail | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Barry Marson <bmarson> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | Roman Bednář <rbednar> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.1 | CC: | agk, cluster-maint, fdinitto, oalbrigt, phagara, rbednar, teigland, ttracy, vincent.chen1 |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | resource-agents-4.1.1-28.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Clones: | 1732873 (view as bug list) | | |
| Last Closed: | 2019-11-05 20:35:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1732873 | | |
| Attachments: | pcs status --full and pcs resource config (attachment 1591154) | | |
> Jul 16 10:35:43 pats.perf.lab.eng.bos.redhat.com pacemaker-schedulerd[28434]: warning: Processing failed monitor of lv_sasbin:0 on dolphins-ic: unknown error
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000 process (PID 28135) timed out
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000:28135 - timed out after 90000ms
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-controld[28435]: error: Result of monitor operation for lv_sasbin on pats-ic: Timed Out
I can't tell whether these are related to the LVM-activate agent itself or to an lvm command it runs. If it is an lvm command, can someone tell me which command is timing out, and can you capture lvm debugging from it (set lvm.conf log/level=7 and log/file="/tmp/lvm.log", then send the lvm.log file)?
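For reference, the debug settings being requested correspond to a fragment like this in /etc/lvm/lvm.conf (the level and file path are just the values suggested above):

```
# Debug logging as requested above: most verbose level, appended to a file.
log {
    level = 7
    file = "/tmp/lvm.log"
}
```

Level 7 logging is very verbose, so it is worth reverting these settings once the log has been captured.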
I'm trying to capture that data now. So far, with full lvm debugging enabled, I haven't been able to reproduce the timeout failure. Watching the logs, though, I have noticed the monitor taking anywhere from 2 seconds (idle) to upwards of a minute ... so far ... I'll keep trying.

Barry

This commit added a bunch of lvm commands to monitor (by way of calling validate) that can't be allowed: https://github.com/ClusterLabs/resource-agents/commit/792077bf2994e2e582ccfb0768f3186517de9025

The periodic monitor is meant to run only a single dmsetup command, no lvm commands. See the part of the comment beginning "we cannot afford to use LVM command in lvm_status". (A sketch of what such a dmsetup-only check can look like is included below, after the errata comment.) From an initial review of that commit, I can't tell why lvm_validate was added to monitor; the commit message suggests it might have been added just to check for incorrect args being passed by the caller, or something like that. Can the lvm_validate call simply be dropped from monitor, or do we need some other kind of partial validation (without using any lvm commands)?

Seems like I added that by accident, thinking it was necessary to know e.g. access_mode_num or similar parameters.

qa_ack+: the LVM-activate agent's monitor action must not use any lvm commands (see comment #3 for the specific commit that should be reverted).

Marking verified. The test environment used was a bit different, but that should not affect the behaviour of the resource agent in this case: 5 nodes, iSCSI disks, no multipath, lvmlockd cluster LVM, gfs2, 1k simultaneous file reads from all nodes. The cluster did not hang and no errors were observed.

Tested with: resource-agents-4.1.1-33.el8.x86_64

Fixed in commit: https://github.com/ClusterLabs/resource-agents/pull/1369/commits/ef37f8a2461b5763f4510d51e08d27d8b1f76937

===================================

```
# pcs status
Cluster name: STSRHTS13591
Stack: corosync
Current DC: virt-436 (version 2.0.2-3.el8-744a30d655) - partition with quorum
Last updated: Tue Sep 10 13:32:34 2019
Last change: Tue Sep 10 13:24:58 2019 by root via cibadmin on virt-435

5 nodes configured
25 resources configured

Online: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]

Full list of resources:

 fence-virt-434 (stonith:fence_xvm): Started virt-435
 fence-virt-435 (stonith:fence_xvm): Started virt-436
 fence-virt-436 (stonith:fence_xvm): Started virt-437
 fence-virt-437 (stonith:fence_xvm): Started virt-438
 fence-virt-438 (stonith:fence_xvm): Started virt-434
 Clone Set: locking-clone [locking]
     Started: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]
 Clone Set: mygroup-clone [mygroup]
     Started: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# for i in {1..1000}; do dd if=/dev/urandom of=/mnt/shared/file_$i count=1 bs=100M; done
# for i in /mnt/shared/*; do cat $i >/dev/null & done; pcs status
...
5 nodes configured
25 resources configured

Online: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]

Full list of resources:
...
 Clone Set: locking-clone [locking]
     Started: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]
 Clone Set: mygroup-clone [mygroup]
     Started: [ virt-434 virt-435 virt-436 virt-437 virt-438 ]
...
```

===================================

*** Bug 1761916 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3307
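As referenced from comment #3 above, here is a minimal sketch of a monitor-style status check that avoids lvm commands entirely. This is not the agent's actual code; the VG/LV names are simply the ones from this report. It relies on the fact that an activated LV appears as a device-mapper device named `<vg>-<lv>`, with any dash inside the VG or LV name doubled, so a single dmsetup call is enough to tell whether the LV is active:

```sh
#!/bin/sh
# Minimal sketch only -- not the LVM-activate agent's actual code.
# The periodic monitor only needs one dmsetup call and no lvm commands.
VG=vg_sas      # assumed names, taken from this report
LV=lsasbin

# Build the device-mapper name: <vg>-<lv> with internal dashes doubled.
dm_name="$(printf '%s' "$VG" | sed 's/-/--/g')-$(printf '%s' "$LV" | sed 's/-/--/g')"

if dmsetup info "$dm_name" >/dev/null 2>&1; then
    exit 0     # OCF_SUCCESS: the dm device exists, so the LV is active here
else
    exit 7     # OCF_NOT_RUNNING: no dm device, so the LV is not active here
fi
```

Because dmsetup only queries the kernel's device-mapper tables, this check cannot block on VG metadata reads or lvmlockd locks the way an lvm command can, which is why the monitor is meant to stay dmsetup-only.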
Created attachment 1591154 [details]
pcs status --full and pcs resource config

Description of problem:

I have a 5-node Resilient Storage cluster that began life as a RHEL 8.0 cluster and was recently upgraded to 8.1 beta 1.1 (pcs config attached). Storage is FC dual-ported HBAs to 5 HP P2000 arrays. There are 40 LUNs presented with 8-way multipath. The clustered LVs are listed below:

```
LV       VG     Attr       LSize    Cpy%Sync Convert #Str Stripe
lsasbin  vg_sas -wi-ao---- <500.04g                    15 128.00k
lsasdata vg_sas -wi-ao---- 5.50t                       15 128.00k
lsaswork vg_sas -wi------- 3.00t                       15 128.00k
lutilloc vg_sas -wi------- 2.00t                       10 128.00k
```

While running a parallel read test (4 nodes reading the same ~50 files in parallel), the cluster after about 8 minutes experiences errors like:

Jul 16 10:35:43 pats.perf.lab.eng.bos.redhat.com pacemaker-schedulerd[28434]: warning: Processing failed monitor of lv_sasbin:0 on dolphins-ic: unknown error
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000 process (PID 28135) timed out
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000:28135 - timed out after 90000ms
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-controld[28435]: error: Result of monitor operation for lv_sasbin on pats-ic: Timed Out

The resource group is brought down and then restarted. There are no errors on the storage array, no I/O errors reported by the OS, and no network errors. This happens on RHEL 8 GA as well as RHEL 8.1 beta 1.1.

Version-Release number of selected component (if applicable):
Tested initially with RHEL 8 GA, upgraded, and failed on RHEL 8.1 beta 1.1, i.e. RHEL-8.1.0-20190701.0

How reproducible:
Virtually every time

Steps to Reproduce:
1. Run the following shell command from each of the 4 nodes:
   for i in /sasdata/asuite-bills/input/*; do cat $i > /dev/null & done
2.
3.

Actual results:

Expected results:

Additional info:
Note that even though I pasted errors about lsasbin, my reading was from lsasdata (same vg).
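For context, an LVM-activate resource of the kind monitored here is typically created along these lines on RHEL 8. This is illustrative only: the real definitions are in the attached pcs resource config, the group name is an assumption, and only the resource name, VG/LV names, and the 30s monitor interval / 90s timeout are taken from the logs above.

```sh
# Illustrative only -- the actual resource configuration is in attachment 1591154.
# "sas_group" is an assumed group name; lv_sasbin, vg_sas and lsasbin come from this report.
pcs resource create lv_sasbin --group sas_group ocf:heartbeat:LVM-activate \
    vgname=vg_sas lvname=lsasbin \
    activation_mode=shared vg_access_mode=lvmlockd \
    op monitor interval=30s timeout=90s

# Clone the group so the shared VG is activated on every node.
pcs resource clone sas_group interleave=true
```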