Bug 1740944 - WARNING: Not using lvmetad because cache update failed
Summary: WARNING: Not using lvmetad because cache update failed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1712125
Depends On:
Blocks: 1689420 1704833 1715931 1821862
 
Reported: 2019-08-13 23:20 UTC by Corey Marthaler
Modified: 2023-10-19 02:36 UTC
CC List: 16 users

Fixed In Version: lvm2-2.02.187-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-29 19:55:48 UTC
Target Upstream Version:
Embargoed:


Attachments
lvmetad (8.53 KB, application/octet-stream), attached 2020-04-15 19:44 UTC by Patrick


Links
Red Hat Product Errata RHBA-2020:3927, last updated 2020-09-29 19:56:28 UTC

Description Corey Marthaler 2019-08-13 23:20:58 UTC
Description of problem:
Filing this issue to decouple bug 1672336 comment 14 from that bug.

We still see the "WARNING: Not updating lvmetad because cache update failed." messages at boot from time to time on a variety of test machines running the latest 7.7 rpms. We do see the proper pvs/lvs listed after boot, however, and we tried the quick check listed in comment #9 and saw no messages.

We (QA) will attempt to narrow down what device state in our testing environments may be causing warnings like this, but we know they still existed in:

3.10.0-1058.el7.x86_64

lvm2-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7    BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019


Repeating bug 1672336 comment 14:

A couple of test machines just last week:
[root@hayes-01 ~]#  grep "cache update failed" /var/log/message*
/var/log/messages-20190630:Jun 26 14:21:42 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 14:21:42 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 16:18:17 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 17:16:45 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.

[root@hayes-03 ~]#  grep "cache update failed" /var/log/message*
/var/log/messages-20190630:Jun 24 12:02:06 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 24 12:02:06 hayes-03 lvm: WARNING: Not updating lvmetad because cache update failed.
/var/log/messages-20190630:Jun 24 14:01:54 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.
/var/log/messages-20190630:Jun 26 13:47:33 hayes-03 lvm: WARNING: Not using lvmetad because cache update failed.


Corresponding /var/log/messages excerpt from hayes-01 around one of the warnings:

Jun 26 14:21:41 hayes-01 systemd: Starting LVM2 PV scan on device 8:113...
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 multipathd: sdg: add path (uevent)
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 lvm: WARNING: lvmetad is being updated, retrying (setup) for 10 more seconds.
Jun 26 14:21:41 hayes-01 multipathd: sdg: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: cryptd: max_cpu_qlen set to 1000
Jun 26 14:21:42 hayes-01 multipathd: sdf: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdf: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdi: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdi: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdk: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdk: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdc: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdc: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 multipathd: sdb: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sdb: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 26 14:21:42 hayes-01 kernel: AES CTR mode by8 optimization enabled
Jun 26 14:21:42 hayes-01 kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
Jun 26 14:21:42 hayes-01 kernel: alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
Jun 26 14:21:42 hayes-01 multipathd: sda: add path (uevent)
Jun 26 14:21:42 hayes-01 multipathd: sda: spurious uevent, path already in pathvec
Jun 26 14:21:42 hayes-01 kernel: dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.3)
Jun 26 14:21:42 hayes-01 systemd: Found device PERC_H330_Adp 2.
Jun 26 14:21:42 hayes-01 systemd: Activating swap /dev/disk/by-uuid/a65a2321-6664-44b9-9786-cdabc080c63a...
Jun 26 14:21:42 hayes-01 systemd: Found device PERC_H330_Adp 1.
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: Not using lvmetad because cache update failed.
Jun 26 14:21:42 hayes-01 kernel: Adding 4194300k swap on /dev/sda2.  Priority:-2 extents:1 across:4194300k FS
Jun 26 14:21:42 hayes-01 systemd: Activated swap /dev/disk/by-uuid/a65a2321-6664-44b9-9786-cdabc080c63a.
Jun 26 14:21:42 hayes-01 systemd: Reached target Swap.
Jun 26 14:21:42 hayes-01 kernel: iTCO_vendor_support: vendor-support=0
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: Found a Wellsburg TCO device (Version=2, TCOBASE=0x0460)
Jun 26 14:21:42 hayes-01 lvm: WARNING: lvmetad is being updated by another command (pid 1190).
Jun 26 14:21:42 hayes-01 lvm: WARNING: Not updating lvmetad because cache update failed.
Jun 26 14:21:42 hayes-01 kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jun 26 14:21:42 hayes-01 lvm: Command failed with status code 5.

Comment 2 Jonathan Earl Brassow 2019-08-19 19:24:59 UTC
related to bug 1672336

Comment 3 Peter Rajnoha 2019-08-20 15:52:06 UTC
*** Bug 1712125 has been marked as a duplicate of this bug. ***

Comment 7 David Teigland 2019-09-27 14:59:44 UTC
Here is a new commit that may resolve some more of the lvmetad problems:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=5d6bf1efb225b964bfff398277e68345acdac1d0

Comment 8 Trinh Dao 2019-11-04 21:10:35 UTC
David, 
will this commit be included in the next RHEL 7.8 snapshot 1 build?

thanks,
trinh

Comment 9 David Teigland 2019-11-12 16:38:56 UTC
I doubt that the commit in comment 7 is included in 7.8, since that commit is quite recent.

Comment 10 Trinh Dao 2019-11-19 14:48:03 UTC
Thank you David for the update!

Monte, 
since there is a commit, can RH change the status to POST or MODIFIED?
Can you check whether the commit makes it into the next RHEL 7.8 snapshot 1 release?

Comment 11 Trinh Dao 2020-02-12 20:51:14 UTC
David, will the fix be included in RHEL7.8 snapshot-5 release on 2/18?

thanks,
trinh

Comment 12 David Teigland 2020-02-12 21:04:43 UTC
I checked dist-git and it is not included AFAICT.  I'm not sure how these kinds of things get flagged to get the attention of people who can approve them.
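
For reference, a rough way to check whether a given lvm2 build carries the fix (a sketch; the exact changelog wording is an assumption, so the grep patterns may need adjusting):

# check the installed package changelog for a reference to this bug or to lvmetad fixes
rpm -q --changelog lvm2 | grep -iE '1740944|lvmetad'
# or, in a clone of the lvm2 source the build was made from, look for the upstream commit
git log --oneline | grep -i 5d6bf1ef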

Comment 14 Jonathan Earl Brassow 2020-02-17 17:53:47 UTC
We have missed this one for rhel7.8.  The warning should not affect operation.

This will be fixed for 7.9. We also have the option of fixing it in a 7.8 zstream update, which would make the fix available before the 7.9 release. Please advise if this is desired.

Comment 18 Trinh Dao 2020-04-08 20:25:56 UTC
RH, is this bug fixed in RHEL7.8 GA?

thanks,
trinh

Comment 19 David Teigland 2020-04-08 20:44:30 UTC
I'm not sure, but I suspect it might be, since it was added to the errata in a previous comment.

Comment 20 Trinh Dao 2020-04-14 13:26:09 UTC
Hi David, 
HPE engineer PatrickV retested with RHEL 7.8 GA and still sees the same error message.

thanks,
trinh

Comment 21 David Teigland 2020-04-14 14:23:43 UTC
I'm not seeing the commit from comment 7 in rhel7, but we don't really have anything specific linking that commit to the errors seen here, so it may be worthwhile to collect the messages from the failing boot again, including systemctl status -f -n 1000 lvm2-* (which should include some extra info in the new version).
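
A minimal collection sketch for the data requested here (the output paths are illustrative; the systemctl invocation is the one quoted above):

# boot-time warnings already logged to /var/log/messages
grep "cache update failed" /var/log/messages* > /tmp/lvm-cache-warnings.txt
# status and recent journal lines for the lvm2 units, as requested
systemctl status -f -n 1000 lvm2-* > /tmp/lvm2-service-status.txt 2>&1
# full LVM-related journal for the current boot, for context
journalctl -b | grep -Ei 'lvm|lvmetad|pvscan' > /tmp/lvm-boot-journal.txt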

Comment 22 Trinh Dao 2020-04-15 19:26:03 UTC
Patrick,
can you please collect the info requested in comment 21, which is needed to debug this issue?

thanks,
trinh

Comment 23 Patrick 2020-04-15 19:44:37 UTC
Created attachment 1679159 [details]
lvmetad

Comment 24 David Teigland 2020-04-15 20:33:39 UTC
Thanks, that output was helpful. Were all the VGs properly autoactivated in spite of the two lvm2-pvscan services that report failures? Could you try a test package if I built one that includes the missing patch?
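
One way to spot the failing pvscan units (a sketch, assuming the standard lvm2-pvscan@ template unit names):

# list any lvm2-pvscan units that ended up in a failed state
systemctl list-units --state=failed 'lvm2-pvscan@*'
# show their recent output for the failure reason
systemctl status 'lvm2-pvscan@*' --no-pager -l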

Comment 25 Patrick 2020-04-15 21:16:30 UTC
I'm not sure I can test it, since the issue I'm seeing is on boot to install (BZ 1712125).

Comment 26 David Teigland 2020-04-16 17:03:11 UTC
Can someone check if all the VGs were activated?  It appears so, in which case this is just an issue of changing the exit code to avoid reporting an error in the service when there was none.
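
A quick activation check along those lines (a sketch; an 'a' in the fifth lv_attr character, or a non-empty lv_active field, means the LV is active):

# per-LV activation state for every VG
lvs -a -o vg_name,lv_name,lv_attr,lv_active
# VG summary to confirm all PVs and LVs are accounted for
vgs -o vg_name,pv_count,lv_count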

Comment 27 Corey Marthaler 2020-06-09 02:05:56 UTC
Marking this verified in the latest rhel7.9 rpms. We didn't see this error on the machines running rhel7.9 regression testing, where we had seen it with rhel7.8.

[root@mckinley-01 ~]# grep "cache update failed" /var/log/message*
[root@mckinley-01 ~]# 


lvm2-2.02.187-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
lvm2-libs-2.02.187-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
lvm2-cluster-2.02.187-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
lvm2-lockd-2.02.187-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
device-mapper-1.02.170-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
device-mapper-libs-1.02.170-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
device-mapper-event-1.02.170-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
device-mapper-event-libs-1.02.170-5.el7    BUILT: Sun Jun  7 08:13:11 CDT 2020
device-mapper-persistent-data-0.8.5-3.el7    BUILT: Mon Apr 20 09:49:16 CDT 2020

Comment 29 errata-xmlrpc 2020-09-29 19:55:48 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3927

