Bug 1656658 - pvscan fails on mdadm raid 1 when lvmetad is enabled
Summary: pvscan fails on mdadm raid 1 when lvmetad is enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-06 02:14 UTC by Andrew Schorr
Modified: 2021-09-03 12:54 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-09 21:22:47 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Andrew Schorr 2018-12-06 02:14:53 UTC
Description of problem: After upgrading from RHEL 7.5 to 7.6, the system no longer boots. pvscan fails, reporting that it found duplicate PVs.


Version-Release number of selected component (if applicable):
lvm2-2.02.180-10.el7_6.2.x86_64


How reproducible: always


Steps to Reproduce:
1. configure a RAID 1 mdadm array on partitions with 0.90 metadata (see the example commands after this list)
2. watch pvscan fail
3.
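
A minimal sketch of how such a stack might be built, reusing the device and volume names that appear in the logs below (the report does not give the exact creation commands, so treat this as an assumed setup):

# Two-disk RAID 1 array with the old 0.90 superblock format, which stores
# its metadata at the END of each member device.
mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb1 /dev/sdc1

# LVM stack on top of the array: PV -> VG -> LV -> XFS filesystem.
pvcreate /dev/md127
vgcreate vg_data /dev/md127
lvcreate -n ajsdata -l 100%FREE vg_data
mkfs.xfs /dev/vg_data/ajsdata

# Reboot with lvmetad enabled (the RHEL 7 default); autoactivation via
# "pvscan --cache" then reports duplicate PVs as shown in the journal below.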

Actual results: the system doesn't boot because it can't mount the LVM filesystems


Expected results: system should boot.


Additional info: This seems to be similar to bug 1653032.

I was able to get the system to boot by disabling lvmetad (a sketch of that workaround follows the journal output below). Here's what I see in journalctl:

Dec 05 19:52:55 ajserver lvm[621]: WARNING: found device with duplicate /dev/sdc1
Dec 05 19:52:55 ajserver lvm[621]: WARNING: found device with duplicate /dev/md127
Dec 05 19:52:55 ajserver lvm[621]: WARNING: Disabling lvmetad cache which does not support duplicate PVs.
Dec 05 19:52:55 ajserver lvm[621]: WARNING: Scan found duplicate PVs.
Dec 05 19:52:55 ajserver lvm[621]: WARNING: Not using lvmetad because cache update failed.
Dec 05 19:52:55 ajserver systemd[1]: Removed slice system-lvm2\x2dpvscan.slice.
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: found device with duplicate /dev/sdc1
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: Disabling lvmetad cache which does not support duplicate PVs.
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: Scan found duplicate PVs.
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: Not using lvmetad because cache update failed.
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: Not using device /dev/sdc1 for PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB.
Dec 05 19:55:19 ajserver lvm[1668]: WARNING: PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB prefers device /dev/sdb1 because of previous preference.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: found device with duplicate /dev/sdc1
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: found device with duplicate /dev/md127
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Disabling lvmetad cache which does not support duplicate PVs.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Scan found duplicate PVs.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Not using lvmetad because cache update failed.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Autoactivation reading from disk instead of lvmetad.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Not using device /dev/sdc1 for PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: Not using device /dev/md127 for PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB prefers device /dev/sdb1 because of previous preference.
Dec 05 19:55:19 ajserver lvm[3153]: WARNING: PV BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB prefers device /dev/sdb1 because of previous preference.
Dec 05 19:55:19 ajserver lvm[3153]: device-mapper: reload ioctl on  (253:0) failed: Device or resource busy
Dec 05 19:55:19 ajserver lvm[3153]: 0 logical volume(s) in volume group "vg_data" now active
Dec 05 19:55:19 ajserver lvm[3153]: vg_data: autoactivation failed.
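
A minimal sketch of the "disable lvmetad" workaround mentioned above, assuming a stock RHEL 7 configuration; the report does not list the exact steps, so this is one plausible sequence rather than the reporter's commands:

# 1. In /etc/lvm/lvm.conf, in the "global" section, turn the cache off so
#    LVM scans devices directly instead of asking lvmetad:
#        use_lvmetad = 0

# 2. Stop and disable the daemon and its socket so it is not restarted:
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket

# 3. Rebuild the initramfs so early boot also uses the updated lvm.conf:
dracut -f

After a reboot, LVM reads metadata from disk instead of the lvmetad cache, which, as described above, was enough for the system to boot.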

[schorr@ajserver ~]$ cat /proc/mdstat 
Personalities : [raid1] 
md127 : active raid1 sdc1[1] sdb1[0]
      488386496 blocks [2/2] [UU]
      
unused devices: <none>

[schorr@ajserver ~]$ sudo mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : c7615e92:fd021ba4:a61b5fe6:f5e692c2
  Creation Time : Sat Jan  3 16:31:56 2009
     Raid Level : raid1
  Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
     Array Size : 488386496 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Wed Dec  5 20:51:36 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : cc1f8104 - correct
         Events : 335120


      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       33        1      active sync   /dev/sdc1


[schorr@ajserver ~]$ sudo mdadm --examine /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : c7615e92:fd021ba4:a61b5fe6:f5e692c2
  Creation Time : Sat Jan  3 16:31:56 2009
     Raid Level : raid1
  Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
     Array Size : 488386496 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Wed Dec  5 20:51:36 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : cc1f8116 - correct
         Events : 335120


      Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       33        1      active sync   /dev/sdc1

[schorr@ajserver ~]$ sudo blkid
/dev/sdc1: UUID="c7615e92-fd02-1ba4-a61b-5fe6f5e692c2" TYPE="linux_raid_member" 
/dev/sdb1: UUID="c7615e92-fd02-1ba4-a61b-5fe6f5e692c2" TYPE="linux_raid_member" 
/dev/sda1: UUID="6ddff44f-0c0b-412c-aaec-9acc1e1ba745" TYPE="ext4" 
/dev/sda2: UUID="e0c2b647-fa68-4ed3-80a5-943b182a62fc" TYPE="xfs" 
/dev/sda3: UUID="a1b78767-76a3-4ea4-95ba-76f2390f8002" TYPE="xfs" 
/dev/md127: UUID="BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB" TYPE="LVM2_member" 
/dev/mapper/vg_data-ajsdata: UUID="826ef25b-7a84-4b11-a965-e3b3da0396f3" TYPE="xfs" 

[schorr@ajserver ~]$ sudo lsblk -f
NAME                  FSTYPE            LABEL UUID                                   MOUNTPOINT
sda                                                                                  
├─sda1                ext4                    6ddff44f-0c0b-412c-aaec-9acc1e1ba745   /boot
├─sda2                xfs                     e0c2b647-fa68-4ed3-80a5-943b182a62fc   /
└─sda3                xfs                     a1b78767-76a3-4ea4-95ba-76f2390f8002   
sdb                                                                                  
└─sdb1                linux_raid_member       c7615e92-fd02-1ba4-a61b-5fe6f5e692c2   
  └─md127             LVM2_member             BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB 
    └─vg_data-ajsdata xfs                     826ef25b-7a84-4b11-a965-e3b3da0396f3   /nfs/ajserver
sdc                                                                                  
└─sdc1                linux_raid_member       c7615e92-fd02-1ba4-a61b-5fe6f5e692c2   
  └─md127             LVM2_member             BhF7Pt-vmfM-gemH-v4MS-lNdj-pY47-td10hB 
    └─vg_data-ajsdata xfs                     826ef25b-7a84-4b11-a965-e3b3da0396f3   /nfs/ajserver
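
For context on why three block devices report the same PV UUID above: with 0.90 (end-of-device) metadata, the start of /dev/sdb1 and /dev/sdc1 is byte-identical to the start of /dev/md127, so all three expose the same LVM label and the scan reports duplicate PVs. A common mitigation (not used in this report; shown only as an assumed illustration) is to hide the member partitions from LVM with a filter in /etc/lvm/lvm.conf:

# /etc/lvm/lvm.conf, "devices" section: accept the md array, reject its
# members so the duplicate labels on sdb1/sdc1 are never scanned.
devices {
    global_filter = [ "a|^/dev/md127$|", "r|^/dev/sd[bc]1$|" ]
}

If the filter also needs to take effect in the initramfs, rebuilding it with "dracut -f" would be required as well.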

Comment 2 Gordon Messmer 2018-12-06 22:46:34 UTC
I believe this is the same as bug 1656424.

Comment 3 Andrew Schorr 2018-12-16 19:19:28 UTC
FYI, this also affects the rescue shell, which renders it not so useful for rescuing...

Comment 4 David Teigland 2020-11-09 21:22:47 UTC
Several fixes were made in this area a couple of years ago.

