Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
Description - Corey Marthaler, 2019-08-13 23:20:58 UTC
Description of problem:
Filing this issue to decouple bug 1672336 comment 14 from that bug.
We still see the "WARNING: Not updating lvmetad because cache update failed." messages at boot from time to time on a variety of test machines running the latest 7.7 rpms. However, we do see the proper pvs/lvs output after boot, and the quick check listed in comment #9 produced no messages.
We (QA) will attempt to narrow down which device states in our testing environments may be causing warnings like this, but we know they were still present in:
3.10.0-1058.el7.x86_64
lvm2-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7 BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7 BUILT: Mon Jun 10 03:58:20 CDT 2019
Repeating bug 1672336 comment 14:
A couple test machines just last week:
[root@hayes-01 ~]# grep "cache update failed" /var/log/message*
/var/log/messages-20190630:Jun 26 14:21:42 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 14:21:42 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
[root@hayes-03 ~]# grep "cache update failed" /var/log/message*
/var/log/messages-20190630:Jun 24 12:02:06 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 24 12:02:06 hayes-03 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 24 14:01:54 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 13:47:33 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.
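The greps above can be wrapped in a small helper to tally the warnings across rotated messages files. This is a hypothetical convenience function, not something from the bug itself:

```shell
#!/bin/sh
# count_lvmetad_warnings FILE
# Print the number of "cache update failed" lvmetad warnings in FILE.
# (Hypothetical helper for surveying test machines; not part of the bug.)
count_lvmetad_warnings() {
    if [ -r "$1" ]; then
        # grep -c prints the match count; a non-zero exit status
        # just means zero matches, so it is not an error here
        grep -c "cache update failed" "$1" || true
    else
        echo 0
    fi
}
```

For example: for f in /var/log/messages-*; do echo "$f: $(count_lvmetad_warnings "$f")"; done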
Jun 26 14:21:41 hayes-01 systemd: Starting LVM2 PV scan on device 8:113...
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 multipathd: sdg: add path (uevent)
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 multipathd: sdg: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: cryptd: max_cpu_qlen set to 1000
Jun 26 14:21:42 hayes-01 multipathd: sdf: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdf: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdi: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdi: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdk: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdk: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdc: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdc: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdb: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdb: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 26 14:21:42 hayes-01 kernel: AES CTR mode by8 optimization enabled
Jun 26 14:21:42 hayes-01 kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
Jun 26 14:21:42 hayes-01 kernel: alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
Jun 26 14:21:42 hayes-01 multipathd: sda: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sda: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.3)
Jun 26 14:21:42 hayes-01 systemd: Found device PERC_H330_Adp 2.
Jun 26 14:21:42 hayes-01 systemd: Activating swap /dev/disk/by-uuid/a65a2321-6664-44b9-9786-cdabc080c63a...
Jun 26 14:21:42 hayes-01 systemd: Found device PERC_H330_Adp 1.
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
Jun 26 14:21:42 hayes-01 kernel: Adding 4194300k swap on /dev/sda2. Priority:-2 extents:1 across:4194300k FS
Jun 26 14:21:42 hayes-01 systemd: Activated swap /dev/disk/by-uuid/a65a2321-6664-44b9-9786-cdabc080c63a.
Jun 26 14:21:42 hayes-01 systemd: Reached target Swap.
Jun 26 14:21:42 hayes-01 kernel: iTCO_vendor_support: vendor-support=0
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: Found a Wellsburg TCO device (Version=2, TCOBASE=0x0460)
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jun 26 14:21:42 hayes-01 lvm: Command failed with status code 5.
Comment 2 - Jonathan Earl Brassow, 2019-08-19 19:24:59 UTC
Thank you David for the update!
Monte,
since there is a commit, can RH change the status to POST or MODIFIED?
Can you check whether the commit makes it into the next RHEL 7.8 snapshot 1 release?
I checked dist-git and it is not included AFAICT. I'm not sure how these kinds of things get flagged to get the attention of people who can approve them.
Comment 14 - Jonathan Earl Brassow, 2020-02-17 17:53:47 UTC
We have missed this one for rhel7.8. The warning should not affect operation.
This will be fixed for 7.9. We also have the option of fixing it in a 7.8 zstream update, which would make the fix available before the 7.9 release. Please advise if this is desired.
I'm not seeing the commit from comment 7 in rhel7, but we don't really have anything specific linking that commit to the errors seen here, so it may be worthwhile to collect the messages from the failing boot again, including systemctl status -f -n 1000 lvm2-* (the new version should include some extra info).
Thanks, that output was helpful. Were all the VGs properly autoactivated in spite of the two lvm2-pvscan services that report failures? Could you try a test package if I built one that includes the missing patch?
Can someone check if all the VGs were activated? It appears so, in which case this is just an issue of changing the exit code to avoid reporting an error in the service when there was none.
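If the remaining issue really is only the exit status, one conceivable stopgap (a sketch under that assumption, not the fix that actually shipped) would be a systemd drop-in telling the unit to treat lvm's status code 5, seen in the boot log above, as success:

```ini
# /etc/systemd/system/lvm2-pvscan@.service.d/exit-code.conf
# Hypothetical drop-in: treat lvm exit status 5 as a successful exit
# so the pvscan service is not reported as failed at boot.
[Service]
SuccessExitStatus=5
```

A systemctl daemon-reload would be needed afterwards; the proper fix is of course to correct the exit code in lvm itself, as the comment suggests.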
Marking this verified in the latest rhel7.9 rpms. We didn't see this error during rhel7.9 regression testing on the machines where we had seen it in rhel7.8.
[root@mckinley-01 ~]# grep "cache update failed" /var/log/message*
[root@mckinley-01 ~]#
lvm2-2.02.187-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
lvm2-libs-2.02.187-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
lvm2-cluster-2.02.187-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
lvm2-lockd-2.02.187-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
device-mapper-1.02.170-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
device-mapper-libs-1.02.170-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
device-mapper-event-1.02.170-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
device-mapper-event-libs-1.02.170-5.el7 BUILT: Sun Jun 7 08:13:11 CDT 2020
device-mapper-persistent-data-0.8.5-3.el7 BUILT: Mon Apr 20 09:49:16 CDT 2020
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:3927