Bug 1392432 - LVM resource agent activates partial vg volume even though partial_activation=false
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
Docs Contact:
Depends On: 1332909
Blocks:
Reported: 2016-11-07 08:50 EST by michal novacek
Modified: 2017-08-01 10:55 EDT
CC List: 8 users

See Also:
Fixed In Version: resource-agents-3.9.5-87.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1332909
Environment:
Last Closed: 2017-08-01 10:55:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
  RHBA-2017:1844 (Red Hat Product Errata, priority normal, SHIPPED_LIVE): resource-agents bug fix and enhancement update; last updated 2017-08-01 13:49:20 EDT

Comment 3 Oyvind Albrigtsen 2017-01-27 09:44:02 EST
Updated patch (replaces the old patches):
https://github.com/ClusterLabs/resource-agents/pull/921
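
The fix makes the agent refuse to activate a volume group that LVM reports as
partial unless partial_activation=true is set. The following is only a rough
bash sketch of that kind of check, not the actual patch (see the pull request
above for the real change); the VG name and variable names are assumptions:

    VG="raidvg"                    # assumed VG name, taken from this report
    partial_activation="false"     # resource parameter under test

    # vgs prints a 6-character attribute string; the 4th character is 'p'
    # when the VG has missing PVs, i.e. it is partial.
    vg_attr=$(vgs --noheadings -o vg_attr "$VG" 2>/dev/null | tr -d ' ')
    if [ "${vg_attr:3:1}" = "p" ] && [ "$partial_activation" != "true" ]; then
        echo "ocf-exit-reason:Volume group [$VG] has devices missing." >&2
        exit 1    # generic OCF failure, as seen in the verification below
    fi
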
Comment 5 michal novacek 2017-06-06 09:45:26 EDT
I have verified that a partial VG will not be activated by the LVM resource
agent with resource-agents-3.9.5-100.el7.

---

Common setup:

* a configured and running cluster [1], [2]
* a RAID VG configured on top of several PVs [3]
* create and start an LVM resource for this VG (a sketch of the commands is shown after the listing below)
> [root@virt-151 ~]# pcs resource
>  havg   (ocf::heartbeat:LVM):   Started virt-151
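
For completeness, a resource like havg could be created roughly as follows.
This is only a sketch based on the attributes shown in footnote (1); the
exact commands used for this setup are not part of the report.

    # illustrative only; attribute values taken from "pcs config" in (1)
    pcs resource create havg ocf:heartbeat:LVM \
        volgrpname=raidvg exclusive=true partial_activation=false \
        op monitor interval=10 timeout=30
    pcs resource enable havg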

before the fix (resource-agents-3.9.5-34.el6.x86_64)
====================================================

[root@virt-246 ~]# echo offline > /sys/block/sda/device/state 
[root@virt-246 ~]# pcs resource debug-monitor havg

Operation monitor for havg (ocf:heartbeat:LVM) returned 0
 >  stdout: volume_list="rhel_virt-246"
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069154304: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069244416: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 0: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 4096: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   WARNING: Couldn't find all devices for LV raidvg/raidlv_rimage_1 while checking used and assumed devices.
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069154304: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069244416: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 0: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 4096: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   WARNING: Couldn't find all devices for LV raidvg/raidlv_rimage_1 while checking used and assumed devices.
[root@virt-246 ~]# pcs resource disable havg

[root@virt-246 ~]# pcs resource
 ...
 havg   (ocf::heartbeat:LVM):   Stopped (disabled)

[root@virt-246 ~]# pcs resource debug-start havg
> Operation start for havg (ocf:heartbeat:LVM) returned 0
 >  stdout: volume_list="rhel_virt-246"
 >  stdout:   Volume group "raidvg" successfully changed
 >  stdout: volume_list="rhel_virt-246"
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr: INFO: Activating volume group raidvg
 >  stderr: INFO: Reading all physical volumes. This may take a while... Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk. Found volume group "raidvg" using metadata type lvm2 Found volume group "rhel_virt-246" using metadata type lvm2
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr: INFO: New tag "pacemaker" added to raidvg
 >  stderr: INFO: Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk. 1 logical volume(s) in volume group "raidvg" now active
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.

[root@virt-246 ~]# pcs resource enable havg
[root@virt-246 ~]# pcs resource 
 ...
 havg   (ocf::heartbeat:LVM):   Started virt-246

after the fix (resource-agents-3.9.5-100.el7)
=============================================

[root@virt-246 ~]# echo offline > /sys/block/sda/device/state

[root@virt-246 ~]# pcs resource debug-monitor havg
Operation monitor for havg (ocf:heartbeat:LVM) returned 0
 >  stdout: volume_list="rhel_virt-246"
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069154304: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069244416: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 0: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 4096: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   WARNING: Couldn't find all devices for LV raidvg/raidlv_rimage_1 while checking used and assumed devices.
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069154304: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 1069244416: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 0: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 512 at 4096: Input/output error
 >  stderr:   /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr:   WARNING: Couldn't find all devices for LV raidvg/raidlv_rimage_1 while checking used and assumed devices.

[root@virt-246 ~]# pcs resource disable havg

[root@virt-246 ~]# pcs resource
 ...
 havg   (ocf::heartbeat:LVM):   Stopped (disabled)

[root@virt-246 ~]# pcs resource debug-start havg
Error performing operation: Operation not permitted
Operation start for havg (ocf:heartbeat:LVM) returned 1
 >  stderr:   Couldn't find device with uuid ITE5qx-VTYj-YnKw-O0fh-gGFk-S1X5-lrdlNk.
 >  stderr: ocf-exit-reason:Volume group [raidvg] has devices missing.
 >      Consider partial_activation=true to attempt to activate partially

[root@virt-246 ~]# pcs resource enable havg

[root@virt-246 ~]# pcs resource 
...
 havg   (ocf::heartbeat:LVM):   Stopped
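
As a side note (not part of the original verification), the partial state of
the VG and the recovery from the simulated disk failure can be checked roughly
like this; the pvscan invocation assumes lvmetad is in use, which is the
RHEL 7 default:

    # confirm LVM itself flags the VG as partial (the 'p' attribute bit)
    vgs -o vg_name,vg_attr raidvg

    # bring the offlined disk back and let LVM pick it up again
    echo running > /sys/block/sda/device/state
    pvscan --cache /dev/sda1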

-----

> (1) pcs-config
[root@virt-246 ~]# pcs config
Cluster Name: STSRHTS26224
Corosync Nodes:
 virt-246 virt-247
Pacemaker Nodes:
 virt-246 virt-247

Resources:
 Clone: dlm-clone
  Meta Attrs: interleave=true ordered=true 
  Resource: dlm (class=ocf provider=pacemaker type=controld)
   Operations: monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)
               start interval=0s timeout=90 (dlm-start-interval-0s)
               stop interval=0s timeout=100 (dlm-stop-interval-0s)
 Clone: clvmd-clone
  Meta Attrs: interleave=true ordered=true 
  Resource: clvmd (class=ocf provider=heartbeat type=clvm)
   Attributes: with_cmirrord=1
   Operations: monitor interval=30s on-fail=fence (clvmd-monitor-interval-30s)
               start interval=0s timeout=90 (clvmd-start-interval-0s)
               stop interval=0s timeout=90 (clvmd-stop-interval-0s)
 Resource: havg (class=ocf provider=heartbeat type=LVM)
  Attributes: exclusive=true partial_activation=false volgrpname=raidvg
  Operations: monitor interval=10 timeout=30 (havg-monitor-interval-10)
              start interval=0s timeout=30 (havg-start-interval-0s)
              stop interval=0s timeout=30 (havg-stop-interval-0s)

Stonith Devices:
 Resource: fence-virt-246 (class=stonith type=fence_xvm)
  Attributes: delay=5 pcmk_host_check=static-list pcmk_host_list=virt-246 pcmk_host_map=virt-246:virt-246.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-246-monitor-interval-60s)
 Resource: fence-virt-247 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-247 pcmk_host_map=virt-247:virt-247.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-247-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: STSRHTS26224
 dc-version: 1.1.16-10.el7-94ff4df
 have-watchdog: false
 last-lrm-refresh: 1496756145
 no-quorum-policy: freeze

Quorum:
  Options:
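
For reference, a configuration similar to the one above could be built with
commands along these lines; this is only a sketch, since the report does not
show how the cluster was actually configured:

    pcs resource create dlm ocf:pacemaker:controld \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    pcs resource create clvmd ocf:heartbeat:clvm with_cmirrord=1 \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    pcs constraint order start dlm-clone then start clvmd-clone
    pcs constraint colocation add clvmd-clone with dlm-clone INFINITY
    pcs property set no-quorum-policy=freeze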

> (2) pcs-status
[root@virt-246 ~]# pcs status
Cluster name: STSRHTS26224
Stack: corosync
Current DC: virt-246 (version 1.1.16-10.el7-94ff4df) - partition with quorum
Last updated: Tue Jun  6 15:36:00 2017
Last change: Tue Jun  6 15:35:45 2017 by hacluster via crmd on virt-246

2 nodes configured
7 resources configured

Online: [ virt-246 virt-247 ]

Full list of resources:

 fence-virt-246 (stonith:fence_xvm):    Started virt-246
 fence-virt-247 (stonith:fence_xvm):    Started virt-247
 Clone Set: dlm-clone [dlm]
     Started: [ virt-246 virt-247 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ virt-246 virt-247 ]
 havg   (ocf::heartbeat:LVM):   Started virt-246

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

> (3) lv-vg-pv
[root@virt-246 ~]# lvs -a
  LV                VG            Attr       LSize   Pool Origin Data%  Meta%  
  raidlv            raidvg        Rwi-a-r---   2.97g                           
  [raidlv_rimage_0] raidvg        iwi-aor---   2.97g                           
  [raidlv_rimage_1] raidvg        iwi-aor---   2.97g                           
  [raidlv_rmeta_0]  raidvg        ewi-aor---   4.00m                           
  [raidlv_rmeta_1]  raidvg        ewi-aor---   4.00m                           
  root              rhel_virt-246 -wi-ao----  <6.38g                           
  swap              rhel_virt-246 -wi-ao---- 840.00m                           
[root@virt-246 ~]# lvs -a
  LV                VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv            raidvg        Rwi-a-r---   2.97g                                    100.00          
  [raidlv_rimage_0] raidvg        iwi-aor---   2.97g                                                    
  [raidlv_rimage_1] raidvg        iwi-aor---   2.97g                                                    
  [raidlv_rmeta_0]  raidvg        ewi-aor---   4.00m                                                    
  [raidlv_rmeta_1]  raidvg        ewi-aor---   4.00m                                                    
  root              rhel_virt-246 -wi-ao----  <6.38g                                                    
  swap              rhel_virt-246 -wi-ao---- 840.00m                                                    
[root@virt-246 ~]# vgs -a
  VG            #PV #LV #SN Attr   VSize  VFree
  raidvg          6   1   0 wz--n-  5.95g    0 
  rhel_virt-246   1   2   0 wz--n- <7.20g    0 
[root@virt-246 ~]# pvs -o +devices
  PV         VG            Fmt  Attr PSize    PFree Devices       
  /dev/sda1  raidvg        lvm2 a--  1016.00m    0  /dev/sda1(0)  
  /dev/sdb1  raidvg        lvm2 a--  1016.00m    0  /dev/sdb1(0)  
  /dev/sdc1  raidvg        lvm2 a--  1016.00m    0  /dev/sdc1(0)  
  /dev/sdc1  raidvg        lvm2 a--  1016.00m    0  /dev/sdc1(1)  
  /dev/sdd1  raidvg        lvm2 a--  1016.00m    0  /dev/sdd1(0)  
  /dev/sde1  raidvg        lvm2 a--  1016.00m    0  /dev/sde1(0)  
  /dev/sde1  raidvg        lvm2 a--  1016.00m    0  /dev/sde1(1)  
  /dev/sdf1  raidvg        lvm2 a--  1016.00m    0  /dev/sdf1(0)  
  /dev/vda2  rhel_virt-246 lvm2 a--    <7.20g    0  /dev/vda2(0)  
  /dev/vda2  rhel_virt-246 lvm2 a--    <7.20g    0  /dev/vda2(210)
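
The RAID layout above could have been created along these lines; again only a
sketch, with device names and the LV size taken from the listings above:

    pvcreate /dev/sd{a,b,c,d,e,f}1
    vgcreate raidvg /dev/sd{a,b,c,d,e,f}1
    lvcreate --type raid1 -m 1 -L 2.97g -n raidlv raidvg
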
Comment 6 errata-xmlrpc 2017-08-01 10:55:11 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1844
