Bug 1598179
| Summary: | [GSS] [Tracker:RHEL] CNS: Number of devices per brick leads to long LVM Scan time on Server Reboot | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Matthew Robson <mrobson> |
| Component: | rhgs-server-container | Assignee: | Raghavendra Talur <rtalur> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Prasanth <pprakash> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | cns-3.9 | CC: | agk, akamra, bkunal, coughlan, ekuric, hchiramm, hongkliu, jarrpa, jmulligan, jstrunk, kramdoss, Lee.McClintock, madam, moagrawa, mpillai, mrobson, nberry, pprakash, prajnoha, rhs-bugs, rtalur, sarumuga, sheggodu, teigland, vbellur, zkabelac |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-02-19 14:30:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1613141 | | |
| Bug Blocks: | 1641915, 1642792 | | |
Description
Matthew Robson
2018-07-04 14:31:59 UTC
Are all VGs active? (Should they be?) Do there happen to be LVs within LVs? Is lvmetad up and running? (I don't think there is a need for it.)

Can you try just filtering out the brick LVs in lvm.conf: `global_filter = [ "r|brick|" ]`. Filtering them out means that lvm will not waste time scanning those brick LVs for other PVs. You do not appear to be "stacking" PVs on top of LVs, in which case there is no reason to scan the LVs. I said the same up in comment 13. If you filter them out, then the pvscan commands that are run to scan each brick LV will do nothing; nothing needs to be added to the lvmetad cache from these LVs.

Moving the component to CNS Ansible, mainly to point out that we need to get this added as an install step that sets up the LVM global filter while the node is being set up.

Probably, though I'm not the right person to determine that nor to make that change. I also don't know who would be.

David, isn't this a deja-vu with VDSM (see https://bugzilla.redhat.com/show_bug.cgi?id=1374545)? We need to disable LVs we do not need/use, we need to disable lvmetad, and we need a correct lvm.conf, no? (I don't remember if we need to re-run dracut.)

This bug may be the same as the recently fixed bug 1613141. I'd suggest trying the fix from that bug.

(In reply to Yaniv Kaul from comment #39)
> David, isn't this a deja-vu with VDSM (see
> https://bugzilla.redhat.com/show_bug.cgi?id=1374545)? We need to disable
> LVs we do not need/use, we need to disable lvmetad and we need a correct
> lvm.conf, no?
> (I don't remember if we need to re-run dracut?)

In the vdsm case there is shared storage, but here I don't think there is, so lvmetad can still legitimately be used. Also, I don't think there are any PVs layered on the LVs (from guests or otherwise), which means there should be no foreign LVs (e.g. from guests) that need to be excluded. Adding the LVs to the filter will not cause them to disappear; it will just prevent lvm from scanning them for layered (guest) PVs. When there are hundreds or thousands of LVs, scanning them can waste a lot of time and cause contention with lvmetad. Updating the initramfs to include the lvm.conf filter change would probably be best, although it's probably not necessary (there should be no pvscans or autoactivation happening in the initramfs).
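For reference, a minimal sketch of the filter change discussed in the comments above. It assumes a RHEL 7 host whose gluster brick LVs have the string "brick" in their device path (as in the suggested pattern) and that lvmetad is in use; the exact steps for a given deployment are not spelled out in this bug, so treat this as illustrative rather than a verified procedure.

```sh
# Illustrative only: reject brick LVs in the LVM global filter so they are
# not scanned as candidate PVs for stacked volumes.
#
# In /etc/lvm/lvm.conf, devices { } section (pattern assumes brick device
# paths contain the string "brick"):
#
#   global_filter = [ "r|brick|" ]

# Re-check what LVM reports after the change; the brick LVs should still be
# listed as LVs, they are only excluded from PV scanning.
pvscan --cache
pvs
lvs

# Optionally rebuild the initramfs so the filter also applies during early
# boot (the last comment above notes this is likely unnecessary here, since
# no pvscan/autoactivation should run against the bricks in the initramfs).
dracut -f
```

The related scanning fix referenced in the comments was tracked separately in bug 1613141, listed above under Bug Depends On.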