Bug 1619428 - HA LVM-activate: warn user they provided an invalid value for vg_access_mode
Summary: HA LVM-activate: warn user they provided an invalid value for vg_access_mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-20 20:09 UTC by Corey Marthaler
Modified: 2018-10-30 11:40 UTC
CC List: 7 users

Fixed In Version: resource-agents-4.1.1-12.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:39:24 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Bugzilla        1637012         1        None      None    None     2021-01-20 06:05:38 UTC
Red Hat Product Errata  RHBA-2018:3278  0        None      None    None     2018-10-30 11:40:15 UTC

Internal Links: 1637012

Description Corey Marthaler 2018-08-20 20:09:28 UTC
Description of problem:

[root@mckinley-01 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate volgrpname=MCKINLEY1 activation_mode=exclusive
Error: invalid resource option 'volgrpname', allowed options are: activation_mode, lvname, tag, trace_file, trace_ra, vg_access_mode, vgname, use --force to override
Error: required resource options 'vg_access_mode', 'vgname' are missing, use --force to override

[root@mckinley-01 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate vgname=MCKINLEY1 activation_mode=exclusive
Error: required resource option 'vg_access_mode' is missing, use --force to override

# This should fail with a simple "Error: invalid value for vg_access_mode: exclusive" instead of allowing it and starting fencing procedures.
[root@mckinley-01 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate vgname=MCKINLEY1 vg_access_mode=exclusive
[root@mckinley-01 ~]# 


Aug 20 14:46:16 mckinley-01 crmd[22220]:  notice: Initiating stop operation lvm1_stop_0 locally on mckinley-01
Aug 20 14:46:16 mckinley-01 LVM-activate(lvm1)[48573]: ERROR: You specified an invalid value for vg_access_mode: exclusive
Aug 20 14:46:16 mckinley-01 lrmd[22217]:  notice: lvm1_stop_0:48573:stderr [ ocf-exit-reason:You specified an invalid value for vg_access_mode: exclusive ]
Aug 20 14:46:16 mckinley-01 crmd[22220]:  notice: Result of stop operation for lvm1 on mckinley-01: 2 (invalid parameter)
Aug 20 14:46:16 mckinley-01 crmd[22220]:  notice: mckinley-01-lvm1_stop_0:46 [ ocf-exit-reason:You specified an invalid value for vg_access_mode: exclusive\n ]
Aug 20 14:46:16 mckinley-01 crmd[22220]: warning: Action 31 (lvm1_stop_0) on mckinley-01 failed (target: 0 vs. rc: 2): Error
Aug 20 14:46:16 mckinley-01 crmd[22220]:  notice: Transition aborted by operation lvm1_stop_0 'modify' on mckinley-01: Event failed
Aug 20 14:46:16 mckinley-01 crmd[22220]: warning: Action 31 (lvm1_stop_0) on mckinley-01 failed (target: 0 vs. rc: 2): Error
Aug 20 14:46:16 mckinley-01 crmd[22220]:  notice: Transition 22 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=6, Source=/var/lib/pacemaker/pengine/pe-input-113.bz2): Coe
Aug 20 14:46:16 mckinley-01 pengine[22219]: warning: Processing failed stop of lvm1 on mckinley-01: invalid parameter
Aug 20 14:46:16 mckinley-01 pengine[22219]:   error: Preventing lvm1 from re-starting on mckinley-01: operation stop failed 'invalid parameter' (2)
Aug 20 14:46:16 mckinley-01 pengine[22219]: warning: Processing failed stop of lvm1 on mckinley-01: invalid parameter
Aug 20 14:46:16 mckinley-01 pengine[22219]:   error: Preventing lvm1 from re-starting on mckinley-01: operation stop failed 'invalid parameter' (2)
Aug 20 14:46:16 mckinley-01 pengine[22219]: warning: Cluster node mckinley-01 will be fenced: lvm1 failed there
Aug 20 14:46:17 mckinley-01 pengine[22219]: warning: Scheduling Node mckinley-01 for STONITH
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice: Stop of failed resource lvm1 is implicit after mckinley-01 is fenced
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice:  * Fence (reboot) mckinley-01 'lvm1 failed there'
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice:  * Move       mckinley-apc           ( mckinley-01 -> mckinley-02 )
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice:  * Stop       dlm_for_lvmlockd:0     (                mckinley-01 )   due to node availability
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice:  * Stop       lvmlockd:0             (                mckinley-01 )   due to node availability
Aug 20 14:46:17 mckinley-01 pengine[22219]:  notice:  * Recover    lvm1                   ( mckinley-01 -> mckinley-03 )



Aug 20 14:46:32 mckinley-02 pengine[84984]: warning: Forcing lvm1 away from mckinley-03 after 1000000 failures (max=1000000)
Aug 20 14:46:32 mckinley-02 pengine[84984]:  notice:  * Recover    lvm1             ( mckinley-03 -> mckinley-02 )
Aug 20 14:46:32 mckinley-02 pengine[84984]:  notice: Calculated transition 1, saving inputs in /var/lib/pacemaker/pengine/pe-input-79.bz2
Aug 20 14:46:32 mckinley-02 crmd[84985]:  notice: Initiating monitor operation mckinley-apc_monitor_60000 locally on mckinley-02
Aug 20 14:46:32 mckinley-02 crmd[84985]:  notice: Initiating stop operation lvm1_stop_0 on mckinley-03
Aug 20 14:46:32 mckinley-02 crmd[84985]:  notice: Initiating start operation lvm1_start_0 locally on mckinley-02
Aug 20 14:46:32 mckinley-02 LVM-activate(lvm1)[12234]: ERROR: You specified an invalid value for vg_access_mode: exclusive



Version-Release number of selected component (if applicable):
resource-agents-4.1.1-2.el7    BUILT: Tue 03 Jul 2018 07:32:31 AM CDT

3.10.0-931.el7.x86_64
lvm2-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-libs-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-cluster-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-lockd-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-python-boom-0.9-5.el7    BUILT: Wed Aug  1 11:24:13 CDT 2018
cmirror-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-libs-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-event-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-event-libs-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017

Comment 2 Corey Marthaler 2018-08-20 20:11:18 UTC
[root@mckinley-02 ~]# pcs resource describe LVM-activate
[...]

  vg_access_mode (required): This option decides which solution will be used to protect the volume group in cluster environment. Optional solutions are: lvmlockd, clvmd, system_id and tagging.
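
For contrast, a create line using one of the documented access modes would look roughly like the following sketch (lvmlockd is assumed here only because the cluster above runs the lvmlockd clone; "exclusive" is a valid value for activation_mode, not for vg_access_mode):

# hedged example only, reusing the resource/VG names from this report
pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate \
    vgname=MCKINLEY1 vg_access_mode=lvmlockd activation_mode=exclusive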

Comment 3 Roman Bednář 2018-08-21 08:53:23 UTC
Option validation for fence agents was added in RFE bug 1434936. Perhaps resource agents should have been covered by that work as well?

Comment 4 Oyvind Albrigtsen 2018-08-21 10:10:07 UTC
Seems like it just needs to return OCF_ERR_CONFIGURED instead of OCF_ERR_ARGS from validate.
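
In agent terms that amounts to exiting the vg_access_mode check with $OCF_ERR_CONFIGURED (6, "not configured") instead of $OCF_ERR_ARGS (2, "invalid parameter"). A minimal sketch of such a check, in illustrative shell rather than the agent's exact code:

# sketch of the validation change being discussed; message text mirrors the logs above
case "$OCF_RESKEY_vg_access_mode" in
    lvmlockd|clvmd|system_id|tagging)
        ;;
    *)
        ocf_exit_reason "You specified an invalid value for vg_access_mode: $OCF_RESKEY_vg_access_mode"
        exit $OCF_ERR_CONFIGURED    # previously $OCF_ERR_ARGS, which Pacemaker reports as "invalid parameter"
        ;;
esac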

Comment 5 Oyvind Albrigtsen 2018-08-21 10:16:54 UTC
https://github.com/ClusterLabs/resource-agents/pull/1194

Comment 6 Corey Marthaler 2018-08-21 18:28:04 UTC
It appears the latest scratch build does not fix this issue. The behavior is still the same.

[root@mckinley-01 ~]# rpm -qi resource-agents
Name        : resource-agents
Version     : 4.1.1
Release     : 8.el7
Architecture: x86_64
Install Date: Tue 21 Aug 2018 11:51:46 AM CDT
Group       : System Environment/Base
Size        : 1366639
License     : GPLv2+ and LGPLv2+ and ASL 2.0
Signature   : (none)
Source RPM  : resource-agents-4.1.1-8.el7.src.rpm
Build Date  : Tue 21 Aug 2018 05:29:33 AM CDT



Cluster name: MCKINLEY
Stack: corosync
Current DC: mckinley-02 (version 1.1.19-3.el7-c3c624ea3d) - partition with quorum
Last updated: Tue Aug 21 13:14:28 2018
Last change: Tue Aug 21 12:53:03 2018 by root via cibadmin on mckinley-03

3 nodes configured
7 resources configured

Online: [ mckinley-01 mckinley-02 mckinley-03 ]

Full list of resources:

 mckinley-apc   (stonith:fence_apc):    Started mckinley-01
 Clone Set: dlm_for_lvmlockd-clone [dlm_for_lvmlockd]
     Started: [ mckinley-01 mckinley-02 mckinley-03 ]
 Clone Set: lvmlockd-clone [lvmlockd]
     Started: [ mckinley-01 mckinley-02 mckinley-03 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: inactive/disabled
[root@mckinley-01 ~]# lvs
  LV   VG               Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ha   MCKINLEY1        rwi-a-r---    8.00g                                                    
  ha   MCKINLEY2        rwi-a-r---    8.00g                                                    
  home rhel_mckinley-01 -wi-ao---- <502.75g                                                    
  root rhel_mckinley-01 -wi-ao----   50.00g                                                    
  swap rhel_mckinley-01 -wi-ao----    4.00g                                                    

[root@mckinley-01 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate vgname=MCKINLEY1 vg_access_mode=exclusive


# mckinley-02
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Start      lvm1                   ( mckinley-02 )
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice: Calculated transition 1, saving inputs in /var/lib/pacemaker/pengine/pe-input-130.bz2
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Initiating monitor operation lvm1_monitor_0 on mckinley-03
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Initiating monitor operation lvm1_monitor_0 locally on mckinley-02
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Initiating monitor operation lvm1_monitor_0 on mckinley-01
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Result of probe operation for lvm1 on mckinley-02: 7 (not running)
Aug 21 13:14:34 mckinley-02 crmd[2342]: warning: Action 9 (lvm1_monitor_0) on mckinley-01 failed (target: 7 vs. rc: 0): Error
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Transition aborted by operation lvm1_monitor_0 'modify' on mckinley-01: Event failed
Aug 21 13:14:34 mckinley-02 crmd[2342]: warning: Action 9 (lvm1_monitor_0) on mckinley-01 failed (target: 7 vs. rc: 0): Error
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Transition 1 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=3, Source=/var/lib/pacemaker/pengine/pe-input-130.bz2): Complete
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Move       lvm1                   ( mckinley-01 -> mckinley-02 )
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice: Calculated transition 2, saving inputs in /var/lib/pacemaker/pengine/pe-input-131.bz2
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Initiating stop operation lvm1_stop_0 on mckinley-01
Aug 21 13:14:34 mckinley-02 crmd[2342]: warning: Action 31 (lvm1_stop_0) on mckinley-01 failed (target: 0 vs. rc: 6): Error
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Transition aborted by operation lvm1_stop_0 'modify' on mckinley-01: Event failed
Aug 21 13:14:34 mckinley-02 crmd[2342]: warning: Action 31 (lvm1_stop_0) on mckinley-01 failed (target: 0 vs. rc: 6): Error
Aug 21 13:14:34 mckinley-02 crmd[2342]:  notice: Transition 2 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=6, Source=/var/lib/pacemaker/pengine/pe-input-131.bz2): Complete
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Processing failed stop of lvm1 on mckinley-01: not configured
Aug 21 13:14:34 mckinley-02 pengine[2341]:   error: Preventing lvm1 from re-starting anywhere: operation stop failed 'not configured' (6)
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Processing failed stop of lvm1 on mckinley-01: not configured
Aug 21 13:14:34 mckinley-02 pengine[2341]:   error: Preventing lvm1 from re-starting anywhere: operation stop failed 'not configured' (6)
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Cluster node mckinley-01 will be fenced: lvm1 failed there
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Scheduling Node mckinley-01 for STONITH
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice: Stop of failed resource lvm1 is implicit after mckinley-01 is fenced
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Fence (reboot) mckinley-01 'lvm1 failed there'
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Move       mckinley-apc           ( mckinley-01 -> mckinley-02 )
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Stop       dlm_for_lvmlockd:2     (                mckinley-01 )   due to node availability
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Stop       lvmlockd:2             (                mckinley-01 )   due to node availability
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Stop       lvm1                   (                mckinley-01 )   due to node availability
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Calculated transition 3 (with warnings), saving inputs in /var/lib/pacemaker/pengine/pe-warn-17.bz2
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Processing failed stop of lvm1 on mckinley-01: not configured
Aug 21 13:14:34 mckinley-02 pengine[2341]:   error: Preventing lvm1 from re-starting anywhere: operation stop failed 'not configured' (6)
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Processing failed stop of lvm1 on mckinley-01: not configured
Aug 21 13:14:34 mckinley-02 pengine[2341]:   error: Preventing lvm1 from re-starting anywhere: operation stop failed 'not configured' (6)
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Cluster node mckinley-01 will be fenced: lvm1 failed there
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Forcing lvm1 away from mckinley-01 after 1000000 failures (max=1000000)
Aug 21 13:14:34 mckinley-02 pengine[2341]: warning: Scheduling Node mckinley-01 for STONITH
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice: Stop of failed resource lvm1 is implicit after mckinley-01 is fenced
Aug 21 13:14:34 mckinley-02 pengine[2341]:  notice:  * Fence (reboot) mckinley-01 'lvm1 failed there'





# mckinley-03
Aug 21 13:14:34 mckinley-03 crmd[2206]:  notice: Result of probe operation for lvm1 on mckinley-03: 7 (not running)
Aug 21 13:14:34 mckinley-03 stonith-ng[2202]:  notice: mckinley-apc can fence (reboot) mckinley-01 (aka. '1'): static-list
Aug 21 13:14:34 mckinley-03 stonith-ng[2202]:  notice: mckinley-apc can fence (reboot) mckinley-01 (aka. '1'): static-list
Aug 21 13:14:34 mckinley-03 fence_apc: Unable to connect/login to fencing device
Aug 21 13:14:34 mckinley-03 stonith-ng[2202]: warning: fence_apc[6022] stderr: [ 2018-08-21 13:14:34,711 ERROR: Unable to connect/login to fencing device ]
Aug 21 13:14:34 mckinley-03 stonith-ng[2202]: warning: fence_apc[6022] stderr: [  ]
Aug 21 13:14:34 mckinley-03 stonith-ng[2202]: warning: fence_apc[6022] stderr: [  ]
Aug 21 13:14:41 mckinley-03 corosync[1786]: [TOTEM ] A processor failed, forming new configuration.
Aug 21 13:14:43 mckinley-03 stonith-ng[2202]:  notice: Operation 'reboot' [6028] (call 2 from crmd.2342) for host 'mckinley-01' with device 'mckinley-apc' returned: 0 (OK)
Aug 21 13:14:45 mckinley-03 corosync[1786]: [TOTEM ] A new membership (10.15.104.62:640) was formed. Members left: 1
Aug 21 13:14:45 mckinley-03 corosync[1786]: [TOTEM ] Failed to receive the leave message. failed: 1
Aug 21 13:14:45 mckinley-03 attrd[2204]:  notice: Node mckinley-01 state is now lost
Aug 21 13:14:45 mckinley-03 cib[2201]:  notice: Node mckinley-01 state is now lost
Aug 21 13:14:45 mckinley-03 attrd[2204]:  notice: Removing all mckinley-01 attributes for peer loss
Aug 21 13:14:45 mckinley-03 attrd[2204]:  notice: Lost attribute writer mckinley-01
Aug 21 13:14:45 mckinley-03 cib[2201]:  notice: Purged 1 peer with id=1 and/or uname=mckinley-01 from the membership cache
Aug 21 13:14:45 mckinley-03 attrd[2204]:  notice: Purged 1 peer with id=1 and/or uname=mckinley-01 from the membership cache
Aug 21 13:14:45 mckinley-03 stonith-ng[2202]:  notice: Node mckinley-01 state is now lost
Aug 21 13:14:45 mckinley-03 stonith-ng[2202]:  notice: Purged 1 peer with id=1 and/or uname=mckinley-01 from the membership cache
Aug 21 13:14:45 mckinley-03 corosync[1786]: [QUORUM] Members[2]: 2 3
Aug 21 13:14:45 mckinley-03 corosync[1786]: [MAIN  ] Completed service synchronization, ready to provide service.
Aug 21 13:14:45 mckinley-03 crmd[2206]:  notice: Node mckinley-01 state is now lost
Aug 21 13:14:45 mckinley-03 pacemakerd[2187]:  notice: Node mckinley-01 state is now lost
Aug 21 13:14:45 mckinley-03 kernel: dlm: closing connection to node 1
Aug 21 13:14:46 mckinley-03 stonith-ng[2202]:  notice: Operation reboot of mckinley-01 by mckinley-03 for crmd.2342: OK

Comment 8 Corey Marthaler 2018-09-24 16:49:10 UTC
The behavior appears to be the same in the latest rpms.
[root@harding-02 ~]# rpm -qi resource-agents
Name        : resource-agents
Version     : 4.1.1
Release     : 10.el7
Architecture: x86_64
Install Date: Mon 24 Sep 2018 11:07:42 AM CDT
Group       : System Environment/Base
Size        : 1368011
License     : GPLv2+ and LGPLv2+ and ASL 2.0
Signature   : (none)
Source RPM  : resource-agents-4.1.1-10.el7.src.rpm
Build Date  : Wed 05 Sep 2018 02:48:08 AM CDT




[root@harding-02 ~]# pcs status
Cluster name: HARDING
Stack: corosync
Current DC: harding-03 (version 1.1.19-7.el7-c3c624ea3d) - partition with quorum
Last updated: Mon Sep 24 11:38:18 2018
Last change: Mon Sep 24 11:38:12 2018 by root via cibadmin on harding-02

2 nodes configured
5 resources configured

Online: [ harding-02 harding-03 ]

Full list of resources:

 smoke-apc      (stonith:fence_apc):    Started harding-02
 Clone Set: dlm_for_lvmlockd-clone [dlm_for_lvmlockd]
     Started: [ harding-02 harding-03 ]
 Clone Set: lvmlockd-clone [lvmlockd]
     Started: [ harding-02 harding-03 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@harding-02 ~]# pcs status
Cluster name: HARDING
Stack: corosync
Current DC: harding-03 (version 1.1.19-7.el7-c3c624ea3d) - partition with quorum
Last updated: Mon Sep 24 11:40:05 2018
Last change: Mon Sep 24 11:38:12 2018 by root via cibadmin on harding-02

2 nodes configured
5 resources configured

Online: [ harding-02 harding-03 ]

Full list of resources:

 smoke-apc      (stonith:fence_apc):    Started harding-02
 Clone Set: dlm_for_lvmlockd-clone [dlm_for_lvmlockd]
     Started: [ harding-02 harding-03 ]
 Clone Set: lvmlockd-clone [lvmlockd]
     Started: [ harding-02 harding-03 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled


Creating VG HARDING1 out of /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1
harding-02: vgchange --lock-start HARDING1
harding-03: vgchange --lock-start HARDING1
Creating HA raid1 LV(s) and ext4 filesystems on VG HARDING1
        lvcreate --activate ey --type raid1 --nosync -L 8G -n ha HARDING1
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
WARNING: ext4 signature detected on /dev/HARDING1/ha at offset 1080. Wipe it? [y/n]: [n]
  Aborted wiping of ext4.
  1 existing signature left on the device.
        Creating ext4 filesystem
mke2fs 1.42.9 (28-Dec-2013)

[root@harding-02 ~]# lvs -a -o +devices
  LV            VG        Attr       LSize Cpy%Sync Convert Devices                      
  ha            HARDING1  Rwi-a-r--- 8.00g 100.00           ha_rimage_0(0),ha_rimage_1(0)
  [ha_rimage_0] HARDING1  iwi-aor--- 8.00g                  /dev/mapper/mpatha1(1)       
  [ha_rimage_1] HARDING1  iwi-aor--- 8.00g                  /dev/mapper/mpathb1(1)       
  [ha_rmeta_0]  HARDING1  ewi-aor--- 4.00m                  /dev/mapper/mpatha1(0)       
  [ha_rmeta_1]  HARDING1  ewi-aor--- 4.00m                  /dev/mapper/mpathb1(0)       

[root@harding-02 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate volgrpname=HARDING1 activation_mode=exclusive
Error: invalid resource option 'volgrpname', allowed options are: activation_mode, lvname, tag, trace_file, trace_ra, vg_access_mode, vgname, use --force to override
Error: required resource options 'vg_access_mode', 'vgname' are missing, use --force to override
[root@harding-02 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate vgname=HARDING1 activation_mode=exclusive
Error: required resource option 'vg_access_mode' is missing, use --force to override
[root@harding-02 ~]# pcs resource create lvm1 --group HA_LVM1 ocf:heartbeat:LVM-activate vgname=HARDING1 vg_access_mode=exclusive
[root@harding-02 ~]# echo $?
0


# the error still exists in the messages (on another node in the cluster)
Sep 24 11:42:30 harding-03 crmd[3464]:  notice: Initiating start operation lvm1_start_0 locally on harding-03
Sep 24 11:42:30 harding-03 LVM-activate(lvm1)[11518]: ERROR: You specified an invalid value for vg_access_mode: exclusive
Sep 24 11:42:30 harding-03 lrmd[3461]:  notice: lvm1_start_0:11518:stderr [ ocf-exit-reason:You specified an invalid value for vg_access_mode: excl]
Sep 24 11:42:30 harding-03 crmd[3464]:  notice: Result of start operation for lvm1 on harding-03: 6 (not configured)
Sep 24 11:42:30 harding-03 crmd[3464]:  notice: harding-03-lvm1_start_0:31 [ ocf-exit-reason:You specified an invalid value for vg_access_mode: exc]
Sep 24 11:42:30 harding-03 crmd[3464]: warning: Action 26 (lvm1_start_0) on harding-03 failed (target: 0 vs. rc: 6): Error
Sep 24 11:42:30 harding-03 crmd[3464]:  notice: Transition aborted by operation lvm1_start_0 'modify' on harding-03: Event failed
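
The agent's validation can also be exercised directly, outside Pacemaker, by running the script by hand with the OCF environment set. A sketch, assuming the stock install path and that the agent honours the standard validate-all action; once the fix lands the expected exit status is 6 (OCF_ERR_CONFIGURED):

# hypothetical manual check of the parameter validation
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_vgname=HARDING1
export OCF_RESKEY_vg_access_mode=exclusive
/usr/lib/ocf/resource.d/heartbeat/LVM-activate validate-all; echo $?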

Comment 15 errata-xmlrpc 2018-10-30 11:39:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3278

