Bug 796200 - Provide race-free wiping of newly created LVs
Summary: Provide race-free wiping of newly created LVs
Keywords:
Status: CLOSED DUPLICATE of bug 1003441
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2012-02-22 13:21 UTC by Peter Rajnoha
Modified: 2013-10-08 13:16 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-10-08 13:16:36 UTC
Target Upstream Version:
Embargoed:



Description Peter Rajnoha 2012-02-22 13:21:30 UTC
Description of problem:
LVM2 wipes the start of all newly created LVs (unless the "-Z n" option is used). Currently, we need to activate the LV first to actually wipe it. But there's a race: the activation generates a udev CHANGE event, and processing this event in the udev rules can fire a scan (e.g. a blkid call) that can still see the old signature or any other old metadata (even metadata written by foreign utilities), because the actual wipe has not yet been executed by the lvm2 code.

For this to work properly, we'd need to get rid of the activation step and wipe the underlying device directly (thus avoiding the CHANGE event).
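The race window can be mimicked on a plain regular file, with no LVM or root privileges needed (file name is hypothetical): a stale signature stays visible to any blkid-style scanner until the wipe actually runs, so a scan fired in between sees the old metadata.

```shell
# Sketch of the window between "creation" and "wipe" (hypothetical file):
truncate -s 1M lv.img
mkswap lv.img >/dev/null             # "old" signature left by a previous user
blkid -p lv.img                      # a scan fired now still sees TYPE="swap"
dd if=/dev/zero of=lv.img bs=4k count=1 conv=notrunc 2>/dev/null   # the wipe
blkid -p lv.img || true              # nothing left to mislead the system
```

In the real bug the scan is triggered asynchronously by the udev CHANGE event from activation, so the "scan before wipe" ordering is not under lvm2's control.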

Actual results:
Scanning is done on a device that is not yet wiped, and all the old metadata still there can mislead the system (ending up with bugs like bug #783841 comment #28).

Expected results:
Race-free wiping.

Comment 1 Zdenek Kabelac 2012-02-22 13:32:39 UTC
For thin provisioning we do not know the mapping, so there is no easy way to wipe the start of a thin volume device without having the thin pool and the thin volume active.

We need to discuss several options here.

Comment 2 Milan Broz 2012-02-22 13:44:35 UTC
lvm should be using wipefs, through a blkid library call, to clear signatures.

(zeroing is something else though)
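The wipefs tool (util-linux) uses libblkid to locate every signature it recognizes and erases just those bytes, which is exactly the "clear signatures without zeroing" distinction made above. A minimal sketch on a regular file (hypothetical name, no root needed):

```shell
# wipefs erases all signatures libblkid can detect, without zeroing
# the rest of the device:
truncate -s 1M demo.img
mkswap demo.img >/dev/null           # plant a signature to clear
wipefs -a demo.img                   # erase all detected signatures
blkid -p demo.img || true            # no signatures remain
```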

Comment 3 RHEL Program Management 2012-07-10 06:23:40 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 4 RHEL Program Management 2012-07-10 23:56:44 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 5 Peter Rajnoha 2012-10-12 11:53:52 UTC
Moving for consideration to 6.5. (The problematic scenario was mainly with anaconda, see also bug #783841 comment 28 - anaconda now does the proper cleanup before installation, so they no longer hit this problem... however, we should still consider a solution on the lvm side as well.)

Comment 8 Peter Rajnoha 2013-08-26 14:31:58 UTC
One possible way would be to provide a udev flag that causes all scanning to be skipped on newly created LVs. Let's say it's a "DM_UDEV_SKIP_SCAN" flag. The sequence would look like:

  1)  lvcreate (CHANGE event with the "SKIP_SCAN" flag set)
  2)  wipe the newly created LV
  3a) rely on the watch rule to fire a CHANGE event after the device opened for wiping in step 2 is closed - this will reload the udev db with the scanned info
  3b) generate a CHANGE event by writing "change" to /sys/block/.../uevent (that will cause the udev db reload)
  4)  clear the "SKIP_SCAN" flag in the udev db if it is found in the previous udev db state

Step 3 would depend on whether the WATCH udev rule is used for DM devices (it's not in RHEL6!)

It's also questionable whether 3a) or 3b) is necessary at all since the LV has just been wiped before that so there shouldn't be anything left...
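The skip-scan idea from step 1 could be expressed in a udev rule along these lines. This is an illustrative sketch only: "DM_UDEV_SKIP_SCAN" is the flag name proposed above, and the actual rules lvm2 ships (e.g. in its dm udev rules files) may differ.

```
# Illustrative only: if the CHANGE event carries the skip flag, bypass
# the blkid scan so no stale metadata is imported into the udev db.
ENV{DM_UDEV_SKIP_SCAN}=="1", GOTO="dm_scan_end"
IMPORT{program}="blkid -o udev -p $tempnode"
LABEL="dm_scan_end"
```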

Comment 9 Peter Rajnoha 2013-10-08 13:16:36 UTC
The patches are upstream now:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=ce7489ed228da22c2a355d0b403a1e5dc6d8c0e0

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=2f5ddfbadea5dae6b2fc236d4f243bd88d955aa8

This should be included in 6.5 as a solution to bug #1003441. I'm closing this one as a dup (the other bug has more discussion about this feature anyway).

*** This bug has been marked as a duplicate of bug 1003441 ***

