Bug 1467411 - 'lvcreate --test' needs to be turned off (or fixed) for use with lvmlockd
Summary: 'lvcreate --test' needs to be turned off (or fixed) for use with lvmlockd
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1908181
 
Reported: 2017-07-03 17:35 UTC by Corey Marthaler
Modified: 2021-09-03 12:35 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1908181
Environment:
Last Closed: 2021-01-15 07:39:12 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Corey Marthaler 2017-07-03 17:35:56 UTC
Description of problem:
This appears to be an extension of bug 1290874, which turned off 'lvconvert --test' when used in lvmlockd mode. lvcreate apparently needs the same treatment; the test-mode transcript below shows the internal locking errors that result.
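A minimal sketch of the kind of early guard that the lvconvert fix implies, applied to lvcreate. The type and function names below (vg_stub, vg_uses_lockd, reject_test_mode_for_lockd) are hypothetical stand-ins, not the real lvm2 symbols; the point is only that a --test run against a lockd VG should abort before reaching the metadata and lock paths that produce the internal errors in the transcript below.

/*
 * Sketch only: hypothetical names, not real lvm2 symbols.  Mirrors the
 * lvconvert behaviour from bug 1290874: when --test is given and the VG
 * uses a lockd lock type, bail out up front instead of continuing into
 * the "Attempt to write new VG metadata without locking" errors.
 */
#include <stdio.h>
#include <string.h>

struct vg_stub {
    const char *name;
    const char *lock_type;   /* "sanlock", "dlm", or "" for a local VG */
};

static int vg_uses_lockd(const struct vg_stub *vg)
{
    return vg->lock_type &&
           (!strcmp(vg->lock_type, "sanlock") || !strcmp(vg->lock_type, "dlm"));
}

/* Returns 1 if the command may continue, 0 if it must abort up front. */
static int reject_test_mode_for_lockd(int test_mode, const struct vg_stub *vg)
{
    if (!test_mode || !vg_uses_lockd(vg))
        return 1;

    fprintf(stderr, "Test mode is not yet supported with lock type %s.\n",
            vg->lock_type);
    return 0;
}

int main(void)
{
    struct vg_stub raid_sanity = { "raid_sanity", "sanlock" };

    if (!reject_test_mode_for_lockd(1 /* --test given */, &raid_sanity))
        return 5;   /* abort early, before any metadata or lock paths */

    /* ...metadata update / activation would follow here... */
    return 0;
}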


# test mode
[root@host-115 ~]# lvcreate --activate ey --test --type raid10 -i 2 -n testraid -L 500M raid_sanity
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB.
  Test mode is not yet supported with lock type sanlock.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Internal error: Attempt to write new VG metadata without locking raid_sanity
  Internal error: Attempt to unlock unlocked VG raid_sanity.
  Device '/dev/sde1' has been left open (1 remaining references).
  Device '/dev/sdc2' has been left open (1 remaining references).
  Device '/dev/sde2' has been left open (1 remaining references).
  Device '/dev/sdf1' has been left open (1 remaining references).
  Device '/dev/sdb1' has been left open (1 remaining references).
  Device '/dev/sdd2' has been left open (1 remaining references).
  Device '/dev/sdb2' has been left open (1 remaining references).
  Device '/dev/sdc1' has been left open (1 remaining references).
  Device '/dev/sdg1' has been left open (1 remaining references).
  Device '/dev/sdg2' has been left open (1 remaining references).
  Internal error: 10 device(s) were left open and have been closed.


# regular mode
[root@host-115 ~]# lvcreate --activate ey --type raid10 -i 2 -n testraid -L 500M raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Logical volume "testraid" created.


[root@host-115 ~]# systemctl status lvm2-lvmlockd
● lvm2-lvmlockd.service - LVM2 lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-06-30 16:32:14 CDT; 2 days ago
     Docs: man:lvmlockd(8)
 Main PID: 2452 (lvmlockd)
   CGroup: /system.slice/lvm2-lvmlockd.service
           └─2452 /usr/sbin/lvmlockd -f






Version-Release number of selected component (if applicable):
3.10.0-689.el7.x86_64

lvm2-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017

Comment 2 Alasdair Kergon 2017-07-06 16:49:35 UTC
Well, the idea of --test mode is to do things as closely as possible to the normal operations but without actually making any persistent changes (like activating volumes or updating metadata).  To facilitate this it needs to be supported by the framework (i.e. low-level library) rather than needing changes to be made to every code path.  So let's now look for a way to ensure --test works with lockd across the full command set.
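A rough sketch of the "support it in the framework" idea: route every lockd request through one wrapper that knows about test mode, so no individual command path needs special-casing. Names and structure here (lockd_request, send_to_lvmlockd, enum lock_op) are assumptions for illustration only, not the real lvm2/lvmlockd client interface.

/*
 * Sketch only: invented names, not the real lvm2/lvmlockd client code.
 * The framework approach: one choke point that every command path calls,
 * so the test-mode policy lives in a single place instead of in lvcreate,
 * lvconvert, etc. individually.
 */
#include <stdio.h>

enum lock_op { LOCK_VG_WRITE, LOCK_VG_UNLOCK };

/* Placeholder for the real client->daemon request. */
static int send_to_lvmlockd(enum lock_op op, const char *vg_name)
{
    printf("lvmlockd request: op=%d vg=%s\n", (int) op, vg_name);
    return 1;   /* 1 = success */
}

/* Single choke point: every command path asks for locks through here. */
static int lockd_request(int test_mode, enum lock_op op, const char *vg_name)
{
    if (test_mode) {
        /* one policy decision, applied uniformly across the command set */
        printf("TEST MODE: skipping lockd op %d on %s\n", (int) op, vg_name);
        return 1;
    }
    return send_to_lvmlockd(op, vg_name);
}

int main(void)
{
    /* an 'lvcreate --test' run would end up here for its VG lock/unlock */
    lockd_request(1, LOCK_VG_WRITE, "raid_sanity");
    lockd_request(1, LOCK_VG_UNLOCK, "raid_sanity");
    return 0;
}

With a choke point like this, the question raised in the next comment (skip lock operations, report them as successful, or still take real lower-level locks) becomes a single policy choice rather than per-command logic.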

Comment 3 Alasdair Kergon 2017-07-06 17:16:53 UTC
So the question is whether enough locking infrastructure can safely be used with --test (in that the pieces used are considered to be below the LVM layer that needs to be tested so it's acceptable to activate them) or whether the locking operations should all default to 'success'.

Comment 4 David Teigland 2017-07-06 18:30:28 UTC
There is also 'lvmlockd --test' mode in which lvmlockd returns success for all operations.  With this, the command/client locking code is largely executed as normal (except for paths involving lock failures).

There is another issue involved here, which is properly cleaning up from a command that fails part way (or doesn't complete because of test mode).  If the command makes some changes in lvmlockd, then quits/fails, the state of things in lvmlockd may be incomplete/incorrect.  This is mainly an issue for complex thin/cache operations that modify multiple LVs (In those cases I think cleanly handling partial commands is a larger issue than locking.)
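A sketch of the daemon-side behaviour described in the first paragraph of this comment, where 'lvmlockd --test' makes the daemon answer every lock request with success so the client-side locking code runs normally. The request layout and handler below are invented for illustration and are not the real lvmlockd code.

/*
 * Sketch only: invented request layout and handler, not real lvmlockd code.
 * In daemon test mode, every lock request is reported as successful while
 * no sanlock/dlm locks are actually taken.
 */
#include <stdio.h>

struct lock_request {
    const char *vg_name;
    const char *mode;   /* e.g. "sh", "ex", "un" */
};

/* Placeholder for the real lock-manager (sanlock/dlm) call. */
static int do_real_lock(const struct lock_request *req)
{
    printf("acquiring %s lock on %s\n", req->mode, req->vg_name);
    return 0;   /* 0 = success */
}

static int handle_request(int daemon_test_mode, const struct lock_request *req)
{
    if (daemon_test_mode) {
        /* report success without touching the lock manager */
        printf("test mode: reporting %s lock on %s as successful\n",
               req->mode, req->vg_name);
        return 0;
    }
    return do_real_lock(req);
}

int main(void)
{
    struct lock_request req = { "raid_sanity", "ex" };
    return handle_request(1 /* daemon started in test mode */, &req);
}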

Comment 5 Alasdair Kergon 2017-07-07 00:13:36 UTC
So perhaps there could be a way for the client to tell the lvmlockd server to use test mode for all interactions from this particular client?
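One way the per-client suggestion could look, sketched with invented message and field names (not the real lvmlockd protocol): the client announces --test when it connects, and the daemon keeps the flag in its per-connection state so only that client's requests are short-circuited.

/*
 * Sketch only: invented message layout and field names, not the real
 * lvmlockd protocol.  The test-mode flag lives per connection rather
 * than daemon-wide.
 */
#include <stdio.h>

struct hello_msg {
    const char *client_name;
    int wants_test_mode;    /* client was run with --test */
};

struct client_conn {
    int fd;
    int test_mode;          /* copied once from the client's hello message */
};

static void register_client(struct client_conn *conn, const struct hello_msg *hello)
{
    conn->test_mode = hello->wants_test_mode;
    printf("client %s registered, test_mode=%d\n",
           hello->client_name, conn->test_mode);
}

/* Lock requests consult the connection's flag rather than a daemon-wide
 * one, so one 'lvcreate --test' does not affect other commands on the host. */
static int lock_vg(const struct client_conn *conn, const char *vg_name)
{
    if (conn->test_mode) {
        printf("test-mode client: reporting lock on %s as successful\n", vg_name);
        return 0;
    }
    printf("taking a real lock on %s\n", vg_name);
    return 0;
}

int main(void)
{
    struct client_conn conn = { .fd = 3, .test_mode = 0 };
    struct hello_msg hello = { "lvcreate", 1 };

    register_client(&conn, &hello);
    return lock_vg(&conn, "raid_sanity");
}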

The basic use case for lvm --test is a user on a live system wanting to check as best they can what a command is going to do before running it for real.

It might be worth splitting this bugzilla into two - one for handling --test and one for working out way(s) of dealing with clean up in different sets of circumstances. In some cases something might be able to track what clean up is needed and do it automatically on failure, but there could remain cases where it needs to be worked out independently, possibly with the aid of hints stored in the metadata.

Comment 7 RHEL Program Management 2021-01-15 07:39:12 UTC
After evaluating this issue, we have no plans to address it further or to fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

