Bug 1129311 - pvcreate on a dev with existing MD signature confuses .cache
Summary: pvcreate on a dev with existing MD signature confuses .cache
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-12 14:59 UTC by Nenad Peric
Modified: 2014-10-14 08:25 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.109-2.el6
Doc Type: Bug Fix
Doc Text:
If pvcreate is run against a device with an existing MD signature, the device is now correctly made available for immediate use after the signature is wiped. Previously, the device sometimes continued to be filtered out, as if the signature had not been wiped.
Clone Of:
Environment:
Last Closed: 2014-10-14 08:25:50 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2014:1387 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update, last updated 2014-10-14 01:39:47 UTC

Description Nenad Peric 2014-08-12 14:59:41 UTC
Copied from Bug 1120564:

[root@virt-064 ~]# mdadm --create md1 -l 1 --raid-devices 2 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md1 started.

[root@virt-064 ~]# vgs
  Incorrect metadata area header checksum on /dev/sdc1 at offset 4096
  Incorrect metadata area header checksum on /dev/sdd1 at offset 4096
  VG         #PV #LV #SN Attr   VSize VFree
  vg_virt064   1   2   0 wz--n- 7.51g    0 


[root@virt-064 ~]# mdadm -S /dev/md/md1
mdadm: stopped /dev/md/md1
[root@virt-064 ~]# vgcreate two /dev/sdc1 /dev/sdd1 /dev/sdf1
  Incorrect metadata area header checksum on /dev/sdc1 at offset 4096
  Incorrect metadata area header checksum on /dev/sdd1 at offset 4096
WARNING: software RAID md superblock detected on /dev/sdc1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdc1.
WARNING: software RAID md superblock detected on /dev/sdd1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdd1.
  Incorrect metadata area header checksum on /dev/sdc1 at offset 4096
  Incorrect metadata area header checksum on /dev/sdc1 at offset 4096
  Incorrect metadata area header checksum on /dev/sdd1 at offset 4096
  Incorrect metadata area header checksum on /dev/sdd1 at offset 4096
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Clustered volume group "two" successfully created
[root@virt-064 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree 
  two    3   0   0 wz--nc 44.99g 44.99g
[root@virt-064 ~]# lvs
[root@virt-064 ~]# vgs -a
  VG   #PV #LV #SN Attr   VSize  VFree 
  two    3   0   0 wz--nc 44.99g 44.99g

The VG that existed at the beginning (vg_virt064) has "vanished".
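
For context, obtain_device_list_from_udev lives in the devices section of /etc/lvm/lvm.conf. A minimal sketch of the relevant stanza (comments mine, for illustration; the failing run above presumably had it set to 0):

  # /etc/lvm/lvm.conf (excerpt)
  devices {
      # 1 = obtain the list of block devices to scan from udev;
      # 0 = let LVM scan the /dev tree itself.
      obtain_device_list_from_udev = 1
  }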

However, with obtain_device_list_from_udev = 1 set, the problem is not hit:

[root@virt-064 ~]# mdadm -S /dev/md/md1
mdadm: stopped /dev/md/md1
[root@virt-064 ~]# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  vg_virt064   1   2   0 wz--n- 7.51g    0 
[root@virt-064 ~]# lvs -a
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_virt064 -wi-ao----   6.71g                                                    
  lv_swap vg_virt064 -wi-ao---- 816.00m                                                    
[root@virt-064 ~]# vgcreate two /dev/sdd1 /dev/sdc1 /dev/sdf1
WARNING: software RAID md superblock detected on /dev/sdd1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdd1.
WARNING: software RAID md superblock detected on /dev/sdc1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdc1.
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Clustered volume group "two" successfully created
[root@virt-064 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree 
  two          3   0   0 wz--nc 44.99g 44.99g
  vg_virt064   1   2   0 wz--n-  7.51g     0 


There is a difference in the messages as well: in the second attempt, LVM does not complain about an incorrect metadata area header checksum. Maybe that is related.
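
Independent of LVM's cache, one can double-check which signatures actually remain on a device by listing them with wipefs from util-linux (a side check added here for illustration; an md superblock shows up as type "linux_raid_member"):

  # Without -a, wipefs only lists signatures; it does not erase anything.
  wipefs /dev/sdc1
  wipefs /dev/sdd1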

Tested with:

lvm2-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
lvm2-libs-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014


Comment from Peter Rajnoha:

Happens only if an MD superblock is involved:

(with swap - problem not hit)

[root@rhel6-b ~]# mkswap /dev/sda
mkswap: /dev/sda: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 131068 KiB
no label, UUID=a2265867-65d5-461a-b12d-ec4dc9aac8aa

[root@rhel6-b ~]# vgcreate vg /dev/sda
WARNING: swap signature detected on /dev/sda. Wipe it? [y/n]: y
  Wiping swap signature on /dev/sda.
  Physical volume "/dev/sda" successfully created
  Clustered volume group "vg" successfully created

[root@rhel6-b ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree  
  VolGroup   1   2   0 wz--n-   9.51g      0 
  vg         1   0   0 wz--nc 124.00m 124.00m



(with MD - problem hit)

[root@rhel6-b ~]# mdadm --create /dev/md0 -l1 --raid-devices 2 /dev/sda /dev/sdb
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@rhel6-b ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0

[root@rhel6-b ~]# vgcreate vg /dev/sda
WARNING: software RAID md superblock detected on /dev/sda. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sda.
  Physical volume "/dev/sda" successfully created
  Clustered volume group "vg" successfully created

[root@rhel6-b ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree  
  vg     1   0   0 wz--nc 124.00m 124.00m


(MISSING VolGroup VG!!!)

[root@rhel6-b ~]# cat /etc/lvm/cache/.cache 
# This file is automatically maintained by lvm.

persistent_filter_cache {
	valid_devices=[
		"/dev/disk/by-id/scsi-360000000000000000e00000000020001",
		"/dev/disk/by-id/wwn-0x60000000000000000e00000000020001",
		"/dev/block/8:0",
		"/dev/disk/by-path/ip-192.168.122.1:3260-iscsi-iqn.2012-07.com.redhat.brq.alatyr.virt:host.target_rhel6-lun-1",
		"/dev/sda"
	]
}

(.cache CONTAINS ONLY sda and its aliases)

Comment 2 Peter Rajnoha 2014-08-12 15:26:45 UTC
Well, I've managed to reproduce this in non-clustered mode too, so I'm changing the summary. The thing that confused me here was that pvs called after pvcreate recreated the .cache correctly, while vgs did not.

[root@rhel6-b ~]# dd if=/dev/zero of=/dev/sda bs=1M
dd: writing `/dev/sda': No space left on device
129+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.602169 s, 223 MB/s

[root@rhel6-b ~]# dd if=/dev/zero of=/dev/sdb bs=1M
dd: writing `/dev/sdb': No space left on device
129+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.629256 s, 213 MB/s

[root@rhel6-b ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup" using metadata type lvm2

[root@rhel6-b ~]# mdadm --create /dev/md0 -l1 --raid-devices 2 /dev/sda /dev/sdb
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@rhel6-b ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0

[root@rhel6-b ~]# pvcreate /dev/sda 
WARNING: software RAID md superblock detected on /dev/sda. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sda.
  Physical volume "/dev/sda" successfully created
[root@rhel6-b ~]# cat /etc/lvm/cache/.cache 
# This file is automatically maintained by lvm.

persistent_filter_cache {
	valid_devices=[
		"/dev/disk/by-id/scsi-360000000000000000e00000000020001",
		"/dev/disk/by-id/wwn-0x60000000000000000e00000000020001",
		"/dev/block/8:0",
		"/dev/disk/by-path/ip-192.168.122.1:3260-iscsi-iqn.2012-07.com.redhat.brq.alatyr.virt:host.target_rhel6-lun-1",
		"/dev/sda"
	]
}
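
Until the fix lands, one possible manual workaround (my assumption, not something taken from the patch) is to drop the stale persistent filter cache and let LVM rebuild it on the next scan:

  # Assumed workaround: remove the stale persistent filter cache;
  # the next scan (e.g. vgscan) regenerates it from scratch.
  rm -f /etc/lvm/cache/.cache
  vgscan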

Comment 3 Alasdair Kergon 2014-08-12 18:09:02 UTC
Same problem if you respond 'no' to the 'wipe' confirmation prompt.

This probably dates back to this fix:

commit acb4b5e4de3c49d36fe756f6fb9997ec179b89c2
Author: Milan Broz <mbroz>
Date:   Wed Mar 17 14:44:18 2010 +0000

    Fix pvcreate device check.

    If user try to vgcreate or vgextend non-existent VG,
    these messages appears:
    
    # vgcreate xxx /dev/xxx
      Internal error: Volume Group xxx was not unlocked
      Device /dev/xxx not found (or ignored by filtering).
      Unable to add physical volume '/dev/xxx' to volume group 'xxx'.
      Internal error: Attempt to unlock unlocked VG xxx.
    
    (the same with existing VG and non-existing PV & vgextend)
    # vgextend vg_test /dev/xxx
    ...
    
    It is caused because code tries to "refresh" cache if
    md filter is switched on using cache destroy.
    
    But we can change filters and rescan even without this
    machinery now, just use refresh_filters
    (and reset md filter afterwards).
    
    (Patch also  discovers cache alias bug in vgsplit test,
    fix it by using better filter line.)

Comment 4 Alasdair Kergon 2014-08-14 00:37:02 UTC
Fix applied upstream:

  https://www.redhat.com/archives/lvm-devel/2014-August/msg00019.html

Comment 7 Nenad Peric 2014-08-20 12:32:12 UTC
[root@virt-065 ~]# mdadm -S /dev/md/md1
mdadm: stopped /dev/md/md1
[root@virt-065 ~]# vgcreate two /dev/sdc1 /dev/sdd1 /dev/sdf1
WARNING: software RAID md superblock detected on /dev/sdc1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdc1.
  Incorrect metadata area header checksum on /dev/sdc1 at offset 4096
WARNING: software RAID md superblock detected on /dev/sdd1. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdd1.
  Incorrect metadata area header checksum on /dev/sdd1 at offset 4096
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Clustered volume group "two" successfully created
[root@virt-065 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree 
  two          3   0   0 wz--nc 44.99g 44.99g
  vg_virt065   1   2   0 wz--n-  7.51g     0 


The old VGs no longer vanish.



Marking VERIFIED with:

lvm2-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-libs-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-cluster-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014

Comment 8 errata-xmlrpc 2014-10-14 08:25:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

