Bug 1925871 - lvm[698]: Failed to get primary device for 259:2
Summary: lvm[698]: Failed to get primary device for 259:2
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.4
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 8.4
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1928298 (view as bug list)
Depends On: 1859659
Blocks: 1796871
 
Reported: 2021-02-07 06:29 UTC by Frank Liang
Modified: 2021-10-12 11:07 UTC
CC List: 19 users

Fixed In Version: lvm2-2.03.11-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:02:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
IBM Linux Technology Center 191359 0 None None None 2021-02-09 16:35:57 UTC
Red Hat Product Errata RHBA-2021:1659 0 None None None 2021-05-18 15:02:30 UTC

Description Frank Liang 2021-02-07 06:29:39 UTC
Description of problem:
Found the failures below in the journal during our AWS testing.
[root@ip-10-116-1-199 ec2-user]# journalctl |grep -i lvm
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   readlink failed: Invalid argument
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:1.
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   readlink failed: Invalid argument
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:2.
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   readlink failed: Invalid argument
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:3.
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]: [43B blob data]
Feb 07 03:12:11 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:1.
Feb 07 03:12:13 ip-10-22-1-250.us-west-2.compute.internal lvm[761]: [43B blob data]
Feb 07 03:12:13 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:2.
Feb 07 03:12:13 ip-10-22-1-250.us-west-2.compute.internal lvm[761]: [43B blob data]
Feb 07 03:12:13 ip-10-22-1-250.us-west-2.compute.internal lvm[761]:   Failed to get primary device for 259:3.
Feb 07 03:12:13 ip-10-22-1-250.us-west-2.compute.internal systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

[root@ip-10-116-1-199 ec2-user]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   10G  0 disk
├─nvme0n1p1 259:1    0  200M  0 part /boot/efi
├─nvme0n1p2 259:2    0  512M  0 part /boot
└─nvme0n1p3 259:3    0  9.3G  0 part /

# rpm -qf /sbin/lvm
lvm2-2.03.11-3.el8.aarch64

Version-Release number of selected components (if applicable):

RHEL Version:
RHEL8.4(4.18.0-282.el8.aarch64)

How reproducible:
100%

Steps to Reproduce:
1. Start a RHEL-8.4 instance.
2. Check the journal log.

Actual results:
lvm fails to get the primary device for the NVMe partitions.

Expected results:
No such failures.

Additional info:
- N/A

Comment 1 David Teigland 2021-02-08 18:07:26 UTC
Hi, could you please attach the output of the following?

pvs -vvvv
ls /sys/dev/block/259:0/
ls /sys/dev/block/259:1/
ls /sys/dev/block/259:2/
ls /sys/dev/block/259:3/
ls -l /sys/dev/block/259:1
ls -l /sys/dev/block/259:2
ls -l /sys/dev/block/259:3
cat /sys/dev/block/259:1/partition
cat /sys/dev/block/259:2/partition
cat /sys/dev/block/259:3/partition


An example of what I'm looking for, taken from a test machine in our lab:

$ ls -l /dev/nvme0n1p1
brw-rw----. 1 root disk 259, 3 Jan 19 14:30 /dev/nvme0n1p1

$ ls /sys/dev/block/259:3/
alignment_offset  dev  discard_alignment  holders  inflight  partition  power  ro  size  start  stat  subsystem  trace  uevent

$ cat /sys/dev/block/259:3/partition
1

$ ls -l /sys/dev/block/259:3
lrwxrwxrwx. 1 root root 0 Nov 18 14:26 /sys/dev/block/259:3 -> ../../devices/pci0000:00/0000:00:1c.4/0000:03:00.0/nvme/nvme0/nvme0n1/nvme0n1p1

Comment 2 David Teigland 2021-02-08 18:15:53 UTC
Note that this bug likely appeared as a result of the fix in bug 1859659 to recognize multipath NVMe devices.

Comment 3 Frank Liang 2021-02-09 02:54:10 UTC

[root@ip-10-116-2-190 ec2-user]# journalctl |grep -i inval
Feb 09 02:43:27 localhost kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 09 02:43:56 ip-10-22-1-35.us-west-2.compute.internal lvm[758]:   readlink failed: Invalid argument
Feb 09 02:43:56 ip-10-22-1-35.us-west-2.compute.internal lvm[758]:   readlink failed: Invalid argument
Feb 09 02:43:56 ip-10-22-1-35.us-west-2.compute.internal lvm[758]:   readlink failed: Invalid argument
[root@ip-10-116-2-190 ec2-user]# pvs -vvvv
02:47:18.533250 pvs[4892] lvmcmdline.c:2999  Parsing: pvs -vvvv
02:47:18.533276 pvs[4892] lvmcmdline.c:1991  Recognised command pvs_general (id 122 / enum 103).
02:47:18.533301 pvs[4892] filters/filter-sysfs.c:331  Sysfs filter initialised.
02:47:18.533307 pvs[4892] filters/filter-internal.c:82  Internal filter initialised.
02:47:18.533313 pvs[4892] filters/filter-type.c:61  LVM type filter initialised.
02:47:18.533317 pvs[4892] filters/filter-usable.c:209  Usable device filter initialised (scan_lvs 0).
02:47:18.533322 pvs[4892] filters/filter-mpath.c:402  mpath filter initialised.
02:47:18.533328 pvs[4892] filters/filter-partitioned.c:78  Partitioned filter initialised.
02:47:18.533334 pvs[4892] filters/filter-signature.c:95  signature filter initialised.
02:47:18.533338 pvs[4892] filters/filter-md.c:157  MD filter initialised.
02:47:18.533343 pvs[4892] filters/filter-composite.c:103  Composite filter initialised.
02:47:18.533350 pvs[4892] filters/filter-persistent.c:196  Persistent filter initialised.
02:47:18.533355 pvs[4892] device_mapper/libdm-config.c:987  devices/hints not found in config: defaulting to all
02:47:18.533364 pvs[4892] device_mapper/libdm-config.c:1086  metadata/record_lvs_history not found in config: defaulting to 0
02:47:18.533370 pvs[4892] lvmcmdline.c:3056  DEGRADED MODE. Incomplete RAID LVs will be processed.
02:47:18.533377 pvs[4892] lvmcmdline.c:3062  Processing command: pvs -vvvv
02:47:18.533382 pvs[4892] lvmcmdline.c:3063  Command pid: 4892
02:47:18.533387 pvs[4892] lvmcmdline.c:3064  System ID: 
02:47:18.533390 pvs[4892] lvmcmdline.c:3067  O_DIRECT will be used
02:47:18.533394 pvs[4892] device_mapper/libdm-config.c:1014  global/locking_type not found in config: defaulting to 1
02:47:18.533402 pvs[4892] locking/locking.c:143  File locking settings: readonly:0 sysinit:0 ignorelockingfailure:0 global/metadata_read_only:0 global/wait_for_locks:1.
02:47:18.536172 pvs[4892] device_mapper/libdm-common.c:986  Preparing SELinux context for /run/lock/lvm to system_u:object_r:lvm_lock_t:s0.
02:47:18.536315 pvs[4892] device_mapper/libdm-common.c:989  Resetting SELinux context to default value.
02:47:18.536334 pvs[4892] device_mapper/libdm-config.c:987  devices/md_component_checks not found in config: defaulting to auto
02:47:18.536342 pvs[4892] lvmcmdline.c:2907  Using md_component_checks auto use_full_md_check 0
02:47:18.536353 pvs[4892] device_mapper/libdm-config.c:987  report/output_format not found in config: defaulting to basic
02:47:18.536360 pvs[4892] device_mapper/libdm-config.c:1086  log/report_command_log not found in config: defaulting to 0
02:47:18.536366 pvs[4892] device_mapper/libdm-config.c:1086  report/aligned not found in config: defaulting to 1
02:47:18.536374 pvs[4892] device_mapper/libdm-config.c:1086  report/buffered not found in config: defaulting to 1
02:47:18.536380 pvs[4892] device_mapper/libdm-config.c:1086  report/headings not found in config: defaulting to 1
02:47:18.536387 pvs[4892] device_mapper/libdm-config.c:987  report/separator not found in config: defaulting to  
02:47:18.536392 pvs[4892] device_mapper/libdm-config.c:1086  report/prefixes not found in config: defaulting to 0
02:47:18.536398 pvs[4892] device_mapper/libdm-config.c:1086  report/quoted not found in config: defaulting to 1
02:47:18.536402 pvs[4892] device_mapper/libdm-config.c:1086  report/columns_as_rows not found in config: defaulting to 0
02:47:18.536408 pvs[4892] device_mapper/libdm-config.c:987  report/pvs_sort not found in config: defaulting to pv_name
02:47:18.536414 pvs[4892] device_mapper/libdm-config.c:987  report/pvs_cols_verbose not found in config: defaulting to pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid
02:47:18.536420 pvs[4892] device_mapper/libdm-config.c:987  report/compact_output_cols not found in config: defaulting to 
02:47:18.536483 pvs[4892] toollib.c:4377  Processing each PV
02:47:18.536493 pvs[4892] misc/lvm-flock.c:230  Locking /run/lock/lvm/P_global RB
02:47:18.537551 pvs[4892] device_mapper/libdm-common.c:986  Preparing SELinux context for /run/lock/lvm/P_global to system_u:object_r:lvm_lock_t:s0.
02:47:18.537576 pvs[4892] misc/lvm-flock.c:114  _do_flock /run/lock/lvm/P_global:aux WB
02:47:18.537660 pvs[4892] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/P_global:aux
02:47:18.537675 pvs[4892] misc/lvm-flock.c:114  _do_flock /run/lock/lvm/P_global RB
02:47:18.537691 pvs[4892] device_mapper/libdm-common.c:989  Resetting SELinux context to default value.
02:47:18.537704 pvs[4892] cache/lvmcache.c:1066  Finding VG info
02:47:18.537712 pvs[4892] label/label.c:1028  Finding devices to scan
02:47:18.537745 pvs[4892] device/dev-cache.c:1175  Creating list of system devices.
02:47:18.538305 pvs[4892] device/dev-cache.c:714  Found dev 259:0 /dev/nvme0n1 - new.
02:47:18.538348 pvs[4892] device/dev-cache.c:751  Found dev 259:0 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol06f8135aa0df566b1 - new alias.
02:47:18.538366 pvs[4892] device/dev-cache.c:751  Found dev 259:0 /dev/disk/by-id/nvme-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 - new alias.
02:47:18.538379 pvs[4892] device/dev-cache.c:751  Found dev 259:0 /dev/disk/by-path/pci-0000:00:04.0-nvme-1 - new alias.
02:47:18.538423 pvs[4892] device/dev-cache.c:714  Found dev 259:1 /dev/nvme0n1p1 - new.
02:47:18.538480 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol06f8135aa0df566b1-part1 - new alias.
02:47:18.538493 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-id/nvme-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1 - new alias.
02:47:18.538504 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-id/wwn-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1 - new alias.
02:47:18.538518 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-partlabel/EFI\x20System\x20Partition - new alias.
02:47:18.538529 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-partuuid/1a42cc33-a163-4878-87ee-50d096aaeb03 - new alias.
02:47:18.538541 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part1 - new alias.
02:47:18.538552 pvs[4892] device/dev-cache.c:751  Found dev 259:1 /dev/disk/by-uuid/FAE3-1E3F - new alias.
02:47:18.538597 pvs[4892] device/dev-cache.c:714  Found dev 259:2 /dev/nvme0n1p2 - new.
02:47:18.538646 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol06f8135aa0df566b1-part2 - new alias.
02:47:18.538659 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-id/nvme-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2 - new alias.
02:47:18.538671 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-id/wwn-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2 - new alias.
02:47:18.538685 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-partuuid/9c2b0977-37c4-4f20-a68c-8265744b6733 - new alias.
02:47:18.538696 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part2 - new alias.
02:47:18.538704 pvs[4892] device/dev-cache.c:751  Found dev 259:2 /dev/disk/by-uuid/752cf38c-23c9-4d44-a1e1-55d3e9844035 - new alias.
02:47:18.538749 pvs[4892] device/dev-cache.c:714  Found dev 259:3 /dev/nvme0n1p3 - new.
02:47:18.538797 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol06f8135aa0df566b1-part3 - new alias.
02:47:18.538810 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-id/nvme-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part3 - new alias.
02:47:18.538821 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-id/wwn-nvme.1d0f-766f6c3036663831333561613064663536366231-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part3 - new alias.
02:47:18.538835 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-partuuid/f9323384-6c5c-495e-a1a0-623eecb04507 - new alias.
02:47:18.538845 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part3 - new alias.
02:47:18.538854 pvs[4892] device/dev-cache.c:751  Found dev 259:3 /dev/disk/by-uuid/75498971-61dc-411a-b82d-4bc4ddad5363 - new alias.
02:47:18.538903 pvs[4892] label/label.c:1095  Filtering devices to scan (nodata)
02:47:18.539036 pvs[4892] device/dev-io.c:418  Opened /dev/nvme0n1 RO O_DIRECT
02:47:18.539045 pvs[4892] device/dev-io.c:117  /dev/nvme0n1: size is 20971520 sectors
02:47:18.539052 pvs[4892] device/dev-io.c:455  Closed /dev/nvme0n1
02:47:18.539057 pvs[4892] device/dev-type.c:88  Found nvme device /dev/nvme0n1
02:47:18.539084 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1
02:47:18.539096 pvs[4892] device/dev-io.c:418  Opened /dev/nvme0n1p1 RO O_DIRECT
02:47:18.539103 pvs[4892] device/dev-io.c:117  /dev/nvme0n1p1: size is 409600 sectors
02:47:18.539109 pvs[4892] device/dev-io.c:455  Closed /dev/nvme0n1p1
02:47:18.539116 pvs[4892] device/dev-type.c:88  Found nvme device /dev/nvme0n1p1
02:47:18.539151 pvs[4892] device/dev-type.c:664  �x: readlink failed: No such file or directory
02:47:18.539159 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:1.
02:47:18.539165 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p1
02:47:18.539175 pvs[4892] device/dev-io.c:418  Opened /dev/nvme0n1p2 RO O_DIRECT
02:47:18.539183 pvs[4892] device/dev-io.c:117  /dev/nvme0n1p2: size is 1048576 sectors
02:47:18.539190 pvs[4892] device/dev-io.c:455  Closed /dev/nvme0n1p2
02:47:18.539194 pvs[4892] device/dev-type.c:88  Found nvme device /dev/nvme0n1p2
02:47:18.539205 pvs[4892] device/dev-type.c:664  �x: readlink failed: No such file or directory
02:47:18.539212 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:2.
02:47:18.539217 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p2
02:47:18.539227 pvs[4892] device/dev-io.c:418  Opened /dev/nvme0n1p3 RO O_DIRECT
02:47:18.539233 pvs[4892] device/dev-io.c:117  /dev/nvme0n1p3: size is 19511263 sectors
02:47:18.539238 pvs[4892] device/dev-io.c:455  Closed /dev/nvme0n1p3
02:47:18.539243 pvs[4892] device/dev-type.c:88  Found nvme device /dev/nvme0n1p3
02:47:18.539254 pvs[4892] device/dev-type.c:664  �x: readlink failed: No such file or directory
02:47:18.539261 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:3.
02:47:18.539267 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p3
02:47:18.539368 pvs[4892] label/hints.c:684  Reading hint file
02:47:18.539386 pvs[4892] config/config.c:1475  devices/global_filter not found in config: defaulting to global_filter = [ "a|.*|" ]
02:47:18.539397 pvs[4892] config/config.c:1475  devices/filter not found in config: defaulting to filter = [ "a|.*|" ]
02:47:18.539408 pvs[4892] label/hints.c:1365  get_hints: no entries
02:47:18.539417 pvs[4892] label/label.c:911  Checking fd limit for num_devs 4 want 36 soft 1024 hard 262144
02:47:18.539425 pvs[4892] label/label.c:692  Scanning 4 devices for VG info
02:47:18.539432 pvs[4892] label/label.c:589  open /dev/nvme0n1 ro di 0 fd 5
02:47:18.539471 pvs[4892] label/label.c:589  open /dev/nvme0n1p1 ro di 1 fd 6
02:47:18.539491 pvs[4892] label/label.c:589  open /dev/nvme0n1p2 ro di 2 fd 7
02:47:18.539509 pvs[4892] label/label.c:589  open /dev/nvme0n1p3 ro di 3 fd 8
02:47:18.539524 pvs[4892] label/label.c:728  Scanning submitted 4 reads
02:47:18.540331 pvs[4892] label/label.c:744  Processing data from device /dev/nvme0n1 259:0 di 0 block 0xaaaaf4bad0c0
02:47:18.540341 pvs[4892] device/dev-io.c:94  /dev/nvme0n1: using cached size 20971520 sectors
02:47:18.540376 pvs[4892] filters/filter-partitioned.c:44  /dev/nvme0n1: Skipping: Partition table signature found
02:47:18.540384 pvs[4892] filters/filter-persistent.c:140  filter caching bad /dev/nvme0n1
02:47:18.540391 pvs[4892] label/label.c:393  <backtrace>
02:47:18.540619 pvs[4892] label/label.c:744  Processing data from device /dev/nvme0n1p1 259:1 di 1 block 0xaaaaf4bad100
02:47:18.540628 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p1: using cached size 409600 sectors
02:47:18.540644 pvs[4892] device/dev-type.c:664  `�:���: readlink failed: Invalid argument
02:47:18.540651 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:1.
02:47:18.540664 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p1: using cached size 409600 sectors
02:47:18.540778 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p1
02:47:18.540790 pvs[4892] label/label.c:414  /dev/nvme0n1p1: No lvm label detected
02:47:18.540797 pvs[4892] label/label.c:744  Processing data from device /dev/nvme0n1p2 259:2 di 2 block 0xaaaaf4bad140
02:47:18.540803 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p2: using cached size 1048576 sectors
02:47:18.540816 pvs[4892] device/dev-type.c:664  `�:���: readlink failed: Invalid argument
02:47:18.540823 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:2.
02:47:18.540835 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p2: using cached size 1048576 sectors
02:47:18.540927 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p2
02:47:18.540937 pvs[4892] label/label.c:414  /dev/nvme0n1p2: No lvm label detected
02:47:18.540944 pvs[4892] label/label.c:744  Processing data from device /dev/nvme0n1p3 259:3 di 3 block 0xaaaaf4bad180
02:47:18.540949 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p3: using cached size 19511263 sectors
02:47:18.540960 pvs[4892] device/dev-type.c:664  `�:���: readlink failed: Invalid argument
02:47:18.540967 pvs[4892] filters/filter-mpath.c:237  Failed to get primary device for 259:3.
02:47:18.540979 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p3: using cached size 19511263 sectors
02:47:18.541069 pvs[4892] filters/filter-persistent.c:140  filter caching good /dev/nvme0n1p3
02:47:18.541079 pvs[4892] label/label.c:414  /dev/nvme0n1p3: No lvm label detected
02:47:18.541085 pvs[4892] label/label.c:824  Scanned devices: read errors 0 process errors 0 failed 0
02:47:18.541091 pvs[4892] label/hints.c:917  Writing hint file 4
02:47:18.541121 pvs[4892] config/config.c:1475  devices/global_filter not found in config: defaulting to global_filter = [ "a|.*|" ]
02:47:18.541133 pvs[4892] config/config.c:1475  devices/filter not found in config: defaulting to filter = [ "a|.*|" ]
02:47:18.541139 pvs[4892] device/dev-io.c:94  /dev/nvme0n1: using cached size 20971520 sectors
02:47:18.541147 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p1: using cached size 409600 sectors
02:47:18.541152 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p2: using cached size 1048576 sectors
02:47:18.541158 pvs[4892] device/dev-io.c:94  /dev/nvme0n1p3: using cached size 19511263 sectors
02:47:18.541170 pvs[4892] label/hints.c:1023  Wrote hint file with devs_hash 1490990139 count 4
02:47:18.541180 pvs[4892] cache/lvmcache.c:1136  Found VG info for 0 VGs
02:47:18.541185 pvs[4892] toollib.c:3886  Getting list of all devices from system
02:47:18.541191 pvs[4892] filters/filter-persistent.c:100  /dev/nvme0n1: filter cache skipping (cached bad)
02:47:18.541198 pvs[4892] filters/filter-persistent.c:106  /dev/nvme0n1p1: filter cache using (cached good)
02:47:18.541202 pvs[4892] filters/filter-persistent.c:106  /dev/nvme0n1p2: filter cache using (cached good)
02:47:18.541209 pvs[4892] filters/filter-persistent.c:106  /dev/nvme0n1p3: filter cache using (cached good)
02:47:18.541214 pvs[4892] toollib.c:4301  Processing PVs in VG #orphans_lvm2
02:47:18.541220 pvs[4892] metadata/metadata.c:5004  Reading orphan VG #orphans_lvm2.
02:47:18.541227 pvs[4892] device_mapper/libdm-config.c:1086  report/compact_output not found in config: defaulting to 0
02:47:18.541235 pvs[4892] misc/lvm-flock.c:84  Unlocking /run/lock/lvm/P_global
02:47:18.541242 pvs[4892] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/P_global
02:47:18.541256 pvs[4892] cache/lvmcache.c:2091  Destroy lvmcache content
02:47:18.628145 pvs[4892] lvmcmdline.c:3168  Completed: pvs -vvvv

[root@ip-10-116-2-190 ec2-user]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   10G  0 disk 
├─nvme0n1p1 259:1    0  200M  0 part /boot/efi
├─nvme0n1p2 259:2    0  512M  0 part /boot
└─nvme0n1p3 259:3    0  9.3G  0 part /
[root@ip-10-116-2-190 ec2-user]# ls /sys/dev/block/259:0/
alignment_offset  dev                ext_range  inflight   nsid       nvme0n1p3  range      size    subsystem  wwid
bdi               device             hidden     integrity  nvme0n1p1  power      removable  slaves  trace
capability        discard_alignment  holders    mq         nvme0n1p2  queue      ro         stat    uevent
[root@ip-10-116-2-190 ec2-user]# ls /sys/dev/block/259:1/
alignment_offset  dev  discard_alignment  holders  inflight  partition  power  ro  size  start  stat  subsystem  trace  uevent
[root@ip-10-116-2-190 ec2-user]# ls /sys/dev/block/259:2/
alignment_offset  dev  discard_alignment  holders  inflight  partition  power  ro  size  start  stat  subsystem  trace  uevent
[root@ip-10-116-2-190 ec2-user]# ls /sys/dev/block/259:3/
alignment_offset  dev  discard_alignment  holders  inflight  partition  power  ro  size  start  stat  subsystem  trace  uevent
[root@ip-10-116-2-190 ec2-user]# ls -l /sys/dev/block/259:1
lrwxrwxrwx. 1 root root 0 Feb  9 02:43 /sys/dev/block/259:1 -> ../../devices/pci0000:00/0000:00:04.0/nvme/nvme0/nvme0n1/nvme0n1p1
[root@ip-10-116-2-190 ec2-user]# ls -l /sys/dev/block/259:2
lrwxrwxrwx. 1 root root 0 Feb  9 02:43 /sys/dev/block/259:2 -> ../../devices/pci0000:00/0000:00:04.0/nvme/nvme0/nvme0n1/nvme0n1p2
[root@ip-10-116-2-190 ec2-user]# ls -l /sys/dev/block/259:3
lrwxrwxrwx. 1 root root 0 Feb  9 02:43 /sys/dev/block/259:3 -> ../../devices/pci0000:00/0000:00:04.0/nvme/nvme0/nvme0n1/nvme0n1p3
[root@ip-10-116-2-190 ec2-user]# cat /sys/dev/block/259:1/partition
1
[root@ip-10-116-2-190 ec2-user]# cat /sys/dev/block/259:2/partition
2
[root@ip-10-116-2-190 ec2-user]# cat /sys/dev/block/259:3/partition
3
[root@ip-10-116-2-190 ec2-user]# ls -l /dev/nvme0n1p1
brw-rw----. 1 root disk 259, 1 Feb  9 02:44 /dev/nvme0n1p1
[root@ip-10-116-2-190 ec2-user]# ls -l /dev/nvme0n1p2
brw-rw----. 1 root disk 259, 2 Feb  9 02:44 /dev/nvme0n1p2
[root@ip-10-116-2-190 ec2-user]# ls -l /dev/nvme0n1p3
brw-rw----. 1 root disk 259, 3 Feb  9 02:44 /dev/nvme0n1p3

Comment 4 David Teigland 2021-02-09 16:11:13 UTC
Thanks for the info; it was a bug in the fix for bug 1859659.
Fix in the main branch:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=f74f94c2ddb1d33d75d325c959344a566a621fd5

Comment 5 David Teigland 2021-02-09 17:36:56 UTC
I managed to reproduce these errors (without the fix) by running pvcreate on an md partition.
- fdisk /dev/md0
- create partitions
- pvcreate /dev/md0p1
- readlink errors printed

Existing lvm tests exercise this path, but the readlink errors are non-critical, so they don't cause the command to fail or do anything wrong.

Comment 10 Corey Marthaler 2021-02-16 22:59:39 UTC
Fix verified in the latest rpms.

kernel-4.18.0-287.el8    BUILT: Thu Feb 11 03:15:20 CST 2021
lvm2-2.03.11-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
lvm2-libs-2.03.11-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021


[root@hayes-01 ~]# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdb
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@hayes-01 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] 
md0 : active raid0 sdb[1] sdc[0]
      3905681408 blocks super 1.2 512k chunks
      
unused devices: <none>

[root@hayes-01 ~]# fdisk /dev/md0
[...]
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Using default response p.
Partition number (1-4, default 1): 
First sector (2048-4294967295, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-4294967294, default 4294967294): 

Created a new partition 1 of type 'Linux' and of size 2 TiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@hayes-01 ~]# cat /proc/partitions | grep md
   9        0 3905681408 md0
 259        1 2147482623 md0p1

[root@hayes-01 ~]# pvcreate /dev/md0p1 
  Physical volume "/dev/md0p1" successfully created.

Comment 12 errata-xmlrpc 2021-05-18 15:02:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659

Comment 13 David Teigland 2021-08-24 15:09:32 UTC
*** Bug 1928298 has been marked as a duplicate of this bug. ***

