Bug 1299977 - Move lvmlockd to full support, remove tech preview classification for lvmlockd
Summary: Move lvmlockd to full support, remove tech preview classification for lvmlockd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
Docs Contact: Petr Bokoc
URL:
Whiteboard:
Duplicates: 1241140
Depends On:
Blocks: 1295577 1313485
 
Reported: 2016-01-19 16:21 UTC by David Teigland
Modified: 2023-07-26 13:57 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.161-1.el7
Doc Type: Enhancement
Doc Text:
Improved LVM locking infrastructure

`lvmlockd` is a next-generation locking infrastructure for LVM. It allows LVM to safely manage shared storage from multiple hosts, using either the `dlm` or `sanlock` lock managers. `sanlock` allows `lvmlockd` to coordinate hosts through storage-based locking, without the need for an entire cluster infrastructure. For more information, see the *lvmlockd(8)* man page. This feature was originally introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.3, `lvmlockd` is fully supported.
Clone Of:
Environment:
Last Closed: 2016-11-04 04:14:25 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Knowledge Base (Article) 3071171, last updated 2023-07-26 13:57:14 UTC
Red Hat Knowledge Base (Solution) 3551691, last updated 2023-07-26 13:57:00 UTC
Red Hat Product Errata RHBA-2016:1445 (SHIPPED_LIVE): lvm2 bug fix and enhancement update, 2016-11-03 13:46:41 UTC

Description David Teigland 2016-01-19 16:21:08 UTC
Description of problem:

Remove the tech preview classification, and remove the RHEL-specific commit that prints a warning when a shared VG is created:

WARNING: shared lock type "sanlock" and lvmlockd are Technology Preview.
For more information on Technology Preview features, visit:
https://access.redhat.com/support/offerings/techpreview/

The lvmlockd man page provides a fairly good starting point for defining what should currently work with lvmlockd, and which lvm features/commands are not yet supported on shared VGs.  I intend to add more unsupported features/commands to the lvmlockd man page as they are discovered.

(There are some odd or exotic features that are not specifically mentioned such as --test mode, or swapping pool metadata LVs.)
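
For context, here is a minimal sketch of the setup these tests assume, following the steps in the lvmlockd(8) man page; the device path and host IDs are placeholders, and the service names assume the stock systemd units:

# on every host:
#   /etc/lvm/lvm.conf:       use_lvmlockd = 1
#   /etc/lvm/lvmlocal.conf:  host_id = N   (unique per host, 1-2000 for sanlock)
systemctl start wdmd sanlock lvmlockd

# on one host, create the shared VG (this also creates the sanlock global lock):
vgcreate --shared vg /dev/foo

# on every other host, start the VG lockspace before using the VG:
vgchange --lock-start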


Comment 1 David Teigland 2016-01-22 17:16:47 UTC
The following tests are of particular interest to me because they exercise locking in ways where a problem would be most directly apparent.

"parallel" means that multiple hosts (any number) run the following command concurrently.

--

This is a sequence of commands, each of which generally uses the result of the previous one.

parallel: vgcreate --shared vg /dev/foo
result: one succeeds, others fail

parallel: lvcreate -L1G vg (no LV name given)
result: hosts pick different auto-generated lvol# names

parallel: lvcreate -n `hostname` -L1G vg; lvchange -an vg/`hostname`; lvremove vg/`hostname`
result: all succeed

parallel: lvcreate -n foo -L1G -an vg
result: one succeeds, others find it exists

parallel: lvremove vg/foo
result: one succeeds, others don't find it

parallel: lvcreate -n foo -L1G -an vg
result: one succeeds, others find it exists

parallel: lvextend -L+1G vg/foo
result: all succeed

parallel: lvextend -L2G vg/foo
result: one does it, others find it's already done

parallel: lvchange -aey vg/foo
result: one succeeds, others fail

parallel: lvchange -an vg/foo
result: all succeed

parallel: lvchange -asy vg/foo
result: all succeed

parallel: lvchange -aey vg/foo
result: all fail

parallel: lvchange -an vg/foo
result: all succeed

parallel: lvchange --addtag `hostname` vg/foo
result: all succeed

parallel: lvchange --deltag `hostname` vg/foo
result: all succeed

parallel: for i in `seq 1 200`; do lvcreate -L 10M vg; sleep 1; done
result: all succeed

parallel: vgchange -an vg
result: all succeed

parallel: lvremove --yes vg
result: all succeed (one does the removals, others find no lvs)

parallel: lvcreate -n `hostname` -L1G vg
result: all succeed

parallel: lvextend -L+1G vg/`hostname`
result: all succeed

parallel: lvrename vg/`hostname` vg/`hostname`-2
result: all succeed

parallel: lvremove vg/`hostname`-2
result: all succeed

host1: lvcreate -n foo -L1G vg
result: succeeds

host2: lvremove vg/foo
result: fails

host1: lvchange -an vg
result: succeeds

host2: lvremove vg/foo
result: succeeds

host1: lvcreate -n foo -asy -L1G vg
result: succeeds

host2: lvchange -asy vg/foo
result: succeeds

host2: lvchange -an vg/foo
result: succeeds

host2: lvremove vg/foo
result: fails

--
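
For the activation cases above, one way to inspect lock state from a host is sketched below (assuming the vgs report fields and the lvmlockctl option shown are available in this lvm2 build):

# show the lock type and lock args recorded in the VG metadata
vgs -o+locktype,lockargs vg

# dump the lock state known to the local lvmlockd (ex = exclusive, sh = shared)
lvmlockctl --info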

These tests create/remove local VGs on shared devices.
system_id should not be used during these tests so that
the locking is exercised.
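
One way to keep system_id out of the picture is to disable it on every host; a sketch, using the stock lvm.conf option:

# report the effective setting
lvmconfig global/system_id_source

# to disable system_id entirely, set in /etc/lvm/lvm.conf on every host:
#   system_id_source = "none"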

parallel: vgcreate vg /dev/foo
result: one succeeds, others fail (name exists)

parallel: vgextend vg /dev/bar
result: one succeeds, others fail

parallel: vgextend vg /dev/<different_devs>
result: all succeed

parallel: vgremove vg
result: one succeeds, others fail (not found)

parallel: vgcreate vg-`hostname` /dev/<different_devs>
result: all succeed

parallel: vgextend vg-`hostname` /dev/foo
result: one succeeds, others fail

parallel: vgextend vg-`hostname` /dev/<different_devs>
result: all succeed

parallel: vgremove vg-`hostname`
result: all succeed

parallel: vgcreate vg /dev/<different_devs>
result: one succeeds, others fail (name exists)

parallel: vgremove vg
result: one succeeds, others fail (not found)

parallel: vgcreate vg1 /dev/foo
result: one succeeds, others fail (name exists)

parallel: vgrename vg1 vg2
result: one succeeds, others fail

parallel: vgremove vg2
result: one succeeds, others fail

parallel: vgcreate vg1-`hostname` /dev/<different_devs>
result: all succeed

parallel: vgrename vg1-`hostname` vg2-`hostname`
result: all succeed

parallel: vgrename vg2-`hostname` vg2
result: one succeeds, others fail

parallel: pvcreate /dev/X /dev/Y
result: all succeed

parallel: pvchange -u /dev/X /dev/Y
result: all succeed

Comment 3 David Teigland 2016-06-30 17:19:57 UTC
Peter, this just involves removing lvm2-lvmlockd-tech-preview-warning.patch

Comment 4 Peter Rajnoha 2016-07-11 09:53:34 UTC
*** Bug 1241140 has been marked as a duplicate of this bug. ***

Comment 7 Corey Marthaler 2016-09-27 18:38:13 UTC
Marking this verified in the latest rpms. All of the commands listed in comment #1 passed (assuming an already existing global lock VG), as did a large portion of the current lvm regression tests that were adapted to run on shared lvmlockd volumes.


3.10.0-510.el7.x86_64
lvm2-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
lvm2-libs-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
lvm2-cluster-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-libs-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-event-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-event-libs-1.02.134-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.165-4.el7    BUILT: Thu Sep 22 01:47:19 CDT 2016


Here is a list of currently open lvmlockd bugs that do not affect the full-support verification:


1379799 RFE: VG lock-stop propagation issue
1379793 RFE: better error when unable to grab VG lock on one node
1375664 confusion over --splitmirrors support on shared lvmlockd VGs
1374786 'R VGLK lock_san acquire error -202' potential errors when attempting shared volume configurations
1351805 "_update_pv_in_udev no dev found" warning when starting global lock space
1350051 unable to scrub raid with tpool on top residing on a shared VG
1269608 RFE: add command to restore missing lvmlock
1268445 limitations to what can be done to shared (-asy) activated LVs on shared VGs
1265768 unable to swap thin meta volume residing on a shared VG

Comment 9 errata-xmlrpc 2016-11-04 04:14:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

