Bug 1037450 - pvscan --cache doesn't work with pool-format VGs
Summary: pvscan --cache doesn't work with pool-format VGs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-03 08:10 UTC by Petr Rockai
Modified: 2021-09-08 20:29 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-14 18:47:29 UTC
Target Upstream Version:



Description Petr Rockai 2013-12-03 08:10:37 UTC
In "pool" (GFS pool) formatted VGs, the metadata is distributed across multiple PVs in a non-redundant fashion. pvscan --cache currently cannot cope with this: it marks such PVs as orphans, and we get into all sorts of trouble when we later try to do something with them (while lvmetad is active).

The options are either to extend pvscan --cache to open multiple devices when it encounters a pool signature, or to flat-out refuse to deal with pool-format VGs.
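For context, pvscan --cache is driven one device at a time (typically from a udev rule), which is why metadata spanning several PVs is a problem. A minimal plain-shell sketch of the limitation, with invented device names and fragment contents (no lvm2 involved; scan_one is not a real lvm2 function):

```shell
# Sketch only: simulate metadata split non-redundantly across two "PVs".
# Paths and contents are invented for illustration.
demo=$(mktemp -d)
printf 'vg=poolvg part=1/2' > "$demo/sda1"
printf 'vg=poolvg part=2/2' > "$demo/sdb1"

scan_one() {
  # A per-device scan, as pvscan --cache performs, sees one fragment only.
  cat "$demo/$1"
}

# One device alone cannot describe the whole VG; only both fragments
# together do, which is exactly what a single-device scan never has.
scan_one sda1; echo
scan_one sdb1; echo
```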

Comment 1 Petr Rockai 2013-12-03 08:12:11 UTC
(This currently trips the pool-labels.sh upstream test in a fairly nasty way -- pvcreate is sent into an infinite loop when PVs are unexpectedly juggled between lists during a traversal.)

Comment 3 Alasdair Kergon 2013-12-03 13:31:50 UTC
If format pool metadata is encountered and lvmetad is in use, issue a

WARNING: Ignoring old GFS pool metadata on device %s when using lvmetad

Ideally pvcreate/vgcreate/vgextend should still warn about the metadata and prompt rather than silently overwriting it, so lvmetad might still need to record that the PV contains an unsupported metadata format.
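A sketch of how the proposed warning might look when emitted, using a hypothetical shell helper (emit_pool_warning is not lvm2 code; the message text follows Comment 3 with the device substituted for %s):

```shell
# Hypothetical helper; the message wording comes from Comment 3.
emit_pool_warning() {
  dev=$1
  echo "WARNING: Ignoring old GFS pool metadata on device $dev when using lvmetad"
}

emit_pool_warning /dev/sdb1
```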

Comment 4 Petr Rockai 2013-12-21 12:03:58 UTC
Well, I made pvscan --cache ignore pool labels as a stopgap. However, it doesn't seem to be much easier to add "unsupported PV type" handling than to properly load pool metadata in pvscan --cache. The only bit that really interferes is that format_pool peeks directly into lvmcache. I think I can fix that, though. It will be slightly inefficient, but the hit will only happen when udev events come in for pool-formatted PVs, which shouldn't be often, and the scan would run asynchronously in the background anyway.

Comment 5 Alasdair Kergon 2014-02-19 01:24:39 UTC
So I think we might still need to add WARNINGs to some code paths if pool (or format1) metadata is detected. (Perhaps treat it the same way as we treat non-LVM metadata like md?) In other words, we should not overwrite such metadata silently.

Comment 6 Ludek Smid 2014-06-26 10:49:29 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

Comment 7 Ludek Smid 2014-06-26 11:14:34 UTC
The comment above is incorrect. The correct version is below.
I'm sorry for any inconvenience.
---------------------------------------------------------------

This request was NOT resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you need
to escalate this bug.

Comment 10 Petr Rockai 2015-04-25 10:59:10 UTC
We currently test this by writing out a hand-crafted pool label on a device. I don't know how hypothetical or real the use case is, or whether there are any pool-formatted PVs out in the wild. As Alasdair points out, the only use case here might be to avoid accidentally overwriting metadata/labels in this format (without a confirmation). Nonetheless, we could also ignore the pool format completely and probably never hit an issue.

Comment 12 Petr Rockai 2015-07-14 18:47:29 UTC
We have cleanly ignored pool metadata in pvscan --cache since commit bd3edb2566189e1e3e016933e397a58a90bc81ea (December 2013); we have decided that this level of treatment is sufficient for pool-format metadata on lvmetad-enabled systems.

