Bug 156110
Summary: | activating restored volumes after hardware failure can fail | |
---|---|---|---
Product: | Red Hat Enterprise Linux 4 | Reporter: | Corey Marthaler <cmarthal>
Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Cluster QE <mspqa-list>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 4.0 | CC: | agk, ccaulfie, cfeist, dwysocha, jbrassow, mbroz
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2010-05-14 22:28:11 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Corey Marthaler
2005-04-27 16:46:07 UTC
Looks like a job for agk. Devel ACK.

Is this one a cluster problem or a core RHEL bug? If it is core RHEL, the product and component fields need to change.

Cluster-specific, but any fix would go into core lvm2.

This simply looks like another manifestation of the 'clvmd internal cache not getting updated' problem. I think this was fixed with various changes in the lvmcache code. If it is still reproducible, please reopen.