| Summary: | rgmanager should detect when clvmd is not running when using HA LVM w/ clvm locking |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Component: | resource-agents |
| Version: | 6.1 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | low |
| Priority: | low |
| Reporter: | Corey Marthaler <cmarthal> |
| Assignee: | Jonathan Earl Brassow <jbrassow> |
| QA Contact: | Cluster QE <mspqa-list> |
| CC: | agk, cfeist, cluster-maint, mjuricek |
| Target Milestone: | rc |
| Fixed In Version: | resource-agents-3.9.2-11.el6 |
| Doc Type: | Bug Fix |
| Doc Text: | No documentation needed |
| Last Closed: | 2012-06-20 14:38:45 UTC |
| Bug Blocks: | 756082 |
**Description** (Corey Marthaler, 2011-08-09 20:58:57 UTC)
I think it does this already... Are you sure you have a 'c' attribute on the volume group?
```sh
start)
        if ! [[ $(vgs -o attr --noheadings $OCF_RESKEY_vg_name) =~ .....c ]]; then
                ha_lvm_proper_setup_check || exit 1
        fi
```
Yep. If clvmd is running, the services work; if it isn't, the agent complains that the configuration is invalid.
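The guard quoted above keys off the 'c' (clustered) flag, which is the sixth character of the VG attr string; what this bug asks for is an additional, explicit liveness check on clvmd itself. A minimal sketch of both tests, using hypothetical helper names (`is_clustered_vg`, `clvmd_running`) that are not the shipped lvm.sh code:

```shell
# Hypothetical helpers sketching the two checks; not the shipped agent code.

# The 6th character of the VG attr string (e.g. "wz--nc") is 'c' when the
# volume group uses clustered (clvm) locking.
is_clustered_vg() {
    local attr="$1"    # as printed by: vgs -o attr --noheadings <vg_name>
    [[ "$attr" =~ ^.....c ]]
}

# Liveness check for the clvmd daemon; on RHEL 6 the agent could equally
# shell out to "service clvmd status".
clvmd_running() {
    pidof clvmd >/dev/null 2>&1
}

# A clustered VG with no running clvmd is exactly the case this bug asks
# the agent to report explicitly instead of failing later with
# "HA LVM: Improper setup detected".
attr="wz--nc"
if is_clustered_vg "$attr" && ! clvmd_running; then
    echo "clustered VG but clvmd is not running"
fi
```

The transcript below shows the failure mode this would catch: stopping clvmd while cman is still up, then restarting rgmanager.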
```
[root@taft-01 ~]# vgs -a -o +devices
  VG     #PV #LV #SN Attr   VSize   VFree   Devices
  TAFT1    3   1   0 wz--nc 203.48g 187.48g ha_mimage_0(0),ha_mimage_1(0)
  TAFT1    3   1   0 wz--nc 203.48g 187.48g /dev/sdh2(0)
  TAFT1    3   1   0 wz--nc 203.48g 187.48g /dev/sdh1(0)
  TAFT1    3   1   0 wz--nc 203.48g 187.48g /dev/sdg2(0)
  TAFT2    3   1   0 wz--nc 203.48g 187.48g ha_mimage_0(0),ha_mimage_1(0)
  TAFT2    3   1   0 wz--nc 203.48g 187.48g /dev/sdg1(0)
  TAFT2    3   1   0 wz--nc 203.48g 187.48g /dev/sdf2(0)
  TAFT2    3   1   0 wz--nc 203.48g 187.48g /dev/sdf1(0)
  TAFT3    3   1   0 wz--nc 203.48g 187.48g ha_mimage_0(0),ha_mimage_1(0)
  TAFT3    3   1   0 wz--nc 203.48g 187.48g /dev/sde2(0)
  TAFT3    3   1   0 wz--nc 203.48g 187.48g /dev/sde1(0)
  TAFT3    3   1   0 wz--nc 203.48g 187.48g /dev/sdd2(0)
  TAFT4    3   1   0 wz--nc 203.48g 187.48g ha_mimage_0(0),ha_mimage_1(0)
  TAFT4    3   1   0 wz--nc 203.48g 187.48g /dev/sdd1(0)
  TAFT4    3   1   0 wz--nc 203.48g 187.48g /dev/sdc2(0)
  TAFT4    3   1   0 wz--nc 203.48g 187.48g /dev/sdc1(0)
```
```
[root@taft-01 ~]# service rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
[root@taft-01 ~]# service clvmd stop
Deactivating clustered VG(s):   0 logical volume(s) in volume group "TAFT1" now active
  0 logical volume(s) in volume group "TAFT2" now active
  0 logical volume(s) in volume group "TAFT3" now active
  clvmd not running on node taft-02
  0 logical volume(s) in volume group "TAFT4" now active
  clvmd not running on node taft-02
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
[root@taft-01 ~]# service cman status
cluster is running.
[root@taft-01 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
```
```
Aug 12 13:52:24 taft-01 rgmanager[10551]: I am node #1
Aug 12 13:52:24 taft-01 rgmanager[10551]: Resource Group Manager Starting
Aug 12 13:52:24 taft-01 rgmanager[10551]: Loading Service Data
Aug 12 13:52:30 taft-01 rgmanager[10551]: Initializing Services
Aug 12 13:52:31 taft-01 rgmanager[11535]: [fs] stop: Could not match /dev/TAFT1/ha with a real device
Aug 12 13:52:31 taft-01 rgmanager[10551]: stop on fs "fs1" returned 2 (invalid argument(s))
Aug 12 13:52:31 taft-01 rgmanager[11559]: [fs] stop: Could not match /dev/TAFT2/ha with a real device
Aug 12 13:52:31 taft-01 rgmanager[10551]: stop on fs "fs2" returned 2 (invalid argument(s))
Aug 12 13:52:31 taft-01 rgmanager[11601]: [fs] stop: Could not match /dev/TAFT4/ha with a real device
Aug 12 13:52:31 taft-01 rgmanager[10551]: stop on fs "fs4" returned 2 (invalid argument(s))
Aug 12 13:52:31 taft-01 rgmanager[11611]: [fs] stop: Could not match /dev/TAFT3/ha with a real device
Aug 12 13:52:31 taft-01 rgmanager[10551]: stop on fs "fs3" returned 2 (invalid argument(s))
Aug 12 13:52:32 taft-01 rgmanager[11726]: [lvm] HA LVM: Improper setup detected
Aug 12 13:52:33 taft-01 rgmanager[11748]: [lvm] * "volume_list" not specified in lvm.conf.
Aug 12 13:52:33 taft-01 rgmanager[11762]: [lvm] HA LVM: Improper setup detected
Aug 12 13:52:33 taft-01 rgmanager[11787]: [lvm] HA LVM: Improper setup detected
Aug 12 13:52:33 taft-01 rgmanager[11790]: [lvm] HA LVM: Improper setup detected
Aug 12 13:52:33 taft-01 rgmanager[11784]: [lvm] WARNING: An improper setup can cause data corruption!
Aug 12 13:52:33 taft-01 rgmanager[11829]: [lvm] * "volume_list" not specified in lvm.conf.
Aug 12 13:52:34 taft-01 rgmanager[11863]: [lvm] * "volume_list" not specified in lvm.conf.
Aug 12 13:52:34 taft-01 rgmanager[11881]: [lvm] * "volume_list" not specified in lvm.conf.
Aug 12 13:52:34 taft-01 rgmanager[11902]: [lvm] WARNING: An improper setup can cause data corruption!
Aug 12 13:52:34 taft-01 rgmanager[11941]: [lvm] WARNING: An improper setup can cause data corruption!
Aug 12 13:52:34 taft-01 rgmanager[11955]: [lvm] WARNING: An improper setup can cause data corruption!
Aug 12 13:52:39 taft-01 rgmanager[10551]: Services Initialized
Aug 12 13:52:39 taft-01 rgmanager[10551]: State change: Local UP
Aug 12 13:52:39 taft-01 rgmanager[10551]: State change: taft-02 UP
Aug 12 13:52:39 taft-01 rgmanager[10551]: State change: taft-03 UP
Aug 12 13:52:39 taft-01 rgmanager[10551]: State change: taft-04 UP
```
Created attachment 567742: Patch to fix the problem and provide better error reporting
Pushed upstream: https://github.com/ClusterLabs/resource-agents/commit/3d0503c330812f763ed31ccfbb184585d192292e
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
No documentation needed
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0947.html