Bug 1236025
Summary: | [Backup]: Glusterfind fails after a snap restore with 'historical changelogs not available' | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> |
Component: | glusterfind | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
Status: | CLOSED WONTFIX | QA Contact: | Sweta Anandpara <sanandpa> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.1 | CC: | avishwan, mchangir, rhs-bugs |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: |
The time stamps of files and directories change when a snapshot restore is performed, so the appropriate changelogs can no longer be read. 'glusterfind pre' then fails with the error 'historical changelogs not available'.
Existing glusterfind sessions therefore stop working after a snapshot restore.
Workaround:
Gather the necessary information from the existing glusterfind sessions, delete those sessions, perform the snapshot restore, and then create new glusterfind sessions. (A command-level sketch of this workaround is appended at the end of this report.)
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2018-04-16 03:04:15 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1224102 | ||
Bug Blocks: | 1216951, 1223636 |
Description
Sweta Anandpara
2015-06-26 11:26:36 UTC
Sosreports updated at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1236025/

Doc text is edited. Please sign off so it can be included in Known Issues.

The edit made to what was already written has rendered it incorrect, Monti. I have edited it again.

Doc text looks good to me.

Feel free to reopen this bug if the issue still persists and you require a fix. Closing this as WONTFIX, as we are not working on this bug and are treating it as a 'TIMEOUT'.
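The documented workaround boils down to a short sequence of glusterfind and gluster CLI steps. The following is a minimal sketch of that sequence, assuming an illustrative volume `myvol`, an existing session `sess1`, a replacement session `sess2`, and a snapshot `snap1`; none of these names are taken from this bug report.

```
# Minimal sketch of the documented workaround. The names myvol, sess1,
# sess2 and snap1 are placeholders, not values from this bug.

# 1. Gather what is still needed from the existing session, e.g. run a
#    final 'pre' to capture the changes recorded before the restore.
glusterfind list
glusterfind pre sess1 myvol /tmp/sess1-final-changes.txt
glusterfind post sess1 myvol

# 2. Remove the existing session; it will not work after the restore.
glusterfind delete sess1 myvol

# 3. Restore the snapshot (the volume must be stopped for the restore).
gluster volume stop myvol
gluster snapshot restore snap1
gluster volume start myvol

# 4. Create a fresh glusterfind session against the restored volume.
glusterfind create sess2 myvol
```

Deleting the session before the restore matters because the restore changes the file and directory time stamps, after which the session's recorded changelog references can no longer be resolved.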