Bug 1732873

Summary: LVM-activate monitor can hang (timeout/unknown error) and cause resource to fail (RHEL7)
Product: Red Hat Enterprise Linux 7
Component: resource-agents
Version: 7.7
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Target Milestone: rc
Target Release: 7.8
Fixed In Version: resource-agents-4.1.1-31.el7
Reporter: Oyvind Albrigtsen <oalbrigt>
Assignee: Oyvind Albrigtsen <oalbrigt>
QA Contact: cluster-qe <cluster-qe>
CC: agk, bmarson, cluster-maint, cluster-qe, fdinitto, mjuricek, oalbrigt, phagara, teigland, ttracy
Clone Of: 1730455
Bug Depends On: 1730455
Last Closed: 2020-03-31 19:47:12 UTC

Description Oyvind Albrigtsen 2019-07-24 15:03:01 UTC
+++ This bug was initially created as a clone of Bug #1730455 +++

Description of problem:

I have a 5-node Resilient Storage cluster that began life on RHEL 8.0 and was recently upgraded to 8.1 beta 1.1 (pcs config attached). Storage is dual-ported FC HBAs to 5 HP P2000 arrays, with 40 LUNs presented over 8-way multipath. The clustered LVs are listed below:

  LV       VG         Attr       LSize    Cpy%Sync Convert #Str Stripe 
  lsasbin  vg_sas     -wi-ao---- <500.04g                    15 128.00k
  lsasdata vg_sas     -wi-ao----    5.50t                    15 128.00k
  lsaswork vg_sas     -wi-------    3.00t                    15 128.00k
  lutilloc vg_sas     -wi-------    2.00t                    10 128.00k

While running a parallel read test (4 nodes reading the same ~50 files in parallel), the cluster starts hitting errors like the following after about 8 minutes:

Jul 16 10:35:43 pats.perf.lab.eng.bos.redhat.com pacemaker-schedulerd[28434]: warning: Processing failed monitor of lv_sasbin:0 on dolphins-ic: unknown error
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000 process (PID 28135) timed out
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]: warning: lv_sasbin_monitor_30000:28135 - timed out after 90000ms
Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-controld[28435]: error: Result of monitor operation for lv_sasbin on pats-ic: Timed Out

The resource group is brought down and then restarted.

There are no errors on the storage array, and the OS reports no I/O or network errors.

This happens on RHEL 8 GA as well as RHEL 8.1 beta 1.1.


Version-Release number of selected component (if applicable):

Tested initially with RHEL 8 GA; upgraded to and failed on RHEL 8.1 beta 1.1, i.e. RHEL-8.1.0-20190701.0.

How reproducible:
virtually every time

Steps to Reproduce:
1. Run the following shell command from each of the 4 nodes:

   for i in /sasdata/asuite-bills/input/*; do cat "$i" > /dev/null & done


Additional info:

Note that even though the errors I pasted mention lsasbin, my reads were from lsasdata (same VG).

--- Additional comment from David Teigland on 2019-07-17 16:24:07 CEST ---

> Jul 16 10:35:43 pats.perf.lab.eng.bos.redhat.com
> pacemaker-schedulerd[28434]: warning: Processing failed monitor of
> lv_sasbin:0 on dolphins-ic: unknown error
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]:
> warning: lv_sasbin_monitor_30000 process (PID 28135) timed out
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-execd[28432]:
> warning: lv_sasbin_monitor_30000:28135 - timed out after 90000ms
> Jul 16 10:49:55 pats.perf.lab.eng.bos.redhat.com pacemaker-controld[28435]:
> error: Result of monitor operation for lv_sasbin on pats-ic: Timed Out

I can't tell whether these timeouts come from the LVM-activate agent itself or from an lvm command. If it's an lvm command, can someone tell me which command is timing out, and capture lvm debugging from it (set log/level=7 and log/file="/tmp/lvm.log" in lvm.conf and send the lvm.log file)?
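
For reference, these settings live in the log section of /etc/lvm/lvm.conf; a minimal excerpt with the values requested above:

    # /etc/lvm/lvm.conf -- enable verbose debug logging
    log {
        level = 7                # most verbose debug level
        file = "/tmp/lvm.log"    # lvm commands append their debug output here
    }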

--- Additional comment from Barry Marson on 2019-07-17 20:06:54 CEST ---

I'm trying to capture that data now. So far, with full lvm debugging enabled, I haven't been able to reproduce the timeout failure. Watching the logs, though, I have noticed the monitor taking anywhere from 2 seconds (idle) to upwards of a minute ... so far ... I'll keep trying ...

Barry

--- Additional comment from David Teigland on 2019-07-17 22:15:02 CEST ---

This commit added a number of lvm commands to the monitor action (by way of calling validate) that can't be allowed there:

https://github.com/ClusterLabs/resource-agents/commit/792077bf2994e2e582ccfb0768f3186517de9025

The periodic monitor is meant to run only a single dmsetup command, no lvm commands.  See the part of the comment beginning "we cannot afford to use LVM command in lvm_status".
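
For illustration, a minimal sketch (not the agent's exact code; the VG/LV names are examples from this bug) of a dmsetup-only status check, relying on the fact that device-mapper names double any hyphens in the VG and LV name parts:

    #!/bin/sh
    # Sketch of an lvm-command-free liveness check: the LV is considered
    # active if its device-mapper device exists. Because no lvm commands
    # run, the check cannot block on LVM metadata scanning under heavy I/O.
    VG="vg_sas"       # example values
    LV="lsasdata"
    dm_name="$(printf %s "$VG" | sed 's/-/--/g')-$(printf %s "$LV" | sed 's/-/--/g')"

    if dmsetup info "$dm_name" >/dev/null 2>&1; then
        exit 0    # OCF_SUCCESS: device present
    else
        exit 7    # OCF_NOT_RUNNING: device absent
    fi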

From an initial review of that commit, I can't tell why lvm_validate was added to monitor; the commit header suggests it may have been added just to check for incorrect args being passed by the caller, or something like that. Can the lvm_validate call simply be dropped from monitor, or do we need some other kind of partial validation (without using any lvm commands)?
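
Assuming the answer is to drop it, the agent's action dispatch would keep full validation on start but leave the periodic monitor dmsetup-only; a rough sketch of that direction (hypothetical, not the shipped patch):

    # Hypothetical dispatch: run the lvm-command-heavy validation only for
    # actions where blocking is acceptable; keep monitor/status lightweight.
    case $__OCF_ACTION in
    start)
        lvm_validate        # full validation, may run lvm commands
        lvm_start
        ;;
    stop)
        lvm_stop
        ;;
    monitor|status)
        lvm_status          # single dmsetup call, no lvm commands
        ;;
    esac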

--- Additional comment from Oyvind Albrigtsen on 2019-07-23 15:30:54 CEST ---

It seems I added that by accident, thinking it was necessary to determine e.g. access_mode_num or similar parameters.

Comment 1 Patrik Hagara 2019-07-25 08:40:53 UTC
qa_ack+

The LVM-activate agent's monitor action must not use any lvm commands (see the description/comment #3 of the cloned bug for the specific commit that should be reverted).

Comment 5 errata-xmlrpc 2020-03-31 19:47:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1067