Bug 1766640
| Summary: | EC inservice upgrade fails from RHGS 3.3.1->3.5.0 | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | disperse | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.5 | CC: | amukherj, pkarampu, pprakash, rhs-bugs, saraut, sasundar, sheggodu, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.5.z Batch Update 1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0-23 | Doc Type: | Known Issue |
| Doc Text: | Special handling is sometimes required to ensure that I/O from clients running older versions works correctly during an in-service upgrade. Servers hosting dispersed volumes do not perform this handling for Red Hat Gluster Storage 3.3.1 clients when upgrading to version 3.5. Workaround: if you use dispersed volumes and have clients on Red Hat Gluster Storage 3.3.1, perform an offline upgrade when moving the servers and clients to version 3.5 (a shell sketch of this offline path follows the table). | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-30 06:42:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1696815 | | |
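The workaround in the Doc Text above is an offline upgrade when RHGS 3.3.1 clients mount dispersed volumes. Below is a minimal sketch of that path, assuming a single dispersed volume named ecvol mounted at /mnt/ecvol, RHEL 7 hosts with the RHGS 3.5 repositories already enabled, and a server reachable as server1; all of these names are illustrative assumptions, not details taken from this bug.

```bash
# --- On every client: unmount the dispersed volume before the server upgrade ---
umount /mnt/ecvol

# --- From any one server node: stop the volume (answer 'y' at the prompt) ---
gluster volume stop ecvol

# --- On every server node: stop remaining gluster processes, upgrade, restart glusterd ---
systemctl stop glusterd
pkill glusterfsd
pkill glusterfs
yum update glusterfs-server
systemctl start glusterd

# --- From any one server node: restart the volume and raise the cluster op-version ---
gluster volume start ecvol
gluster volume set all cluster.op-version 70000

# --- On every client: upgrade the client packages, then remount ---
yum update glusterfs-fuse
mount -t glusterfs server1:/ecvol /mnt/ecvol
```

The point of this path is that no client performs I/O while servers and clients are on mixed versions, which is exactly the situation in which this bug bites.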
Description
Nag Pavan Chilakam, 2019-10-29 14:47:32 UTC
Verified with the RHGS 3.5.1 interim build (glusterfs-6.0-24.el7rhgs) using the following steps:

1. Created a 6-node trusted storage pool (Gluster cluster) with RHGS 3.3.1 (glusterfs-3.8.4-54.15.el7rhgs).
2. Created 1x(4+2) and 2x(4+2) disperse volumes.
3. Set disperse.eager-lock and disperse.optimistic-change-log to off on both volumes.
4. Mounted the volumes from 2 clients.
5. Started a kernel untar workload.
6. Killed the glusterfsd (brick), glusterfs, and glusterd processes on node1 (# pkill glusterfsd; pkill glusterfs; systemctl stop glusterd).
7. Performed the upgrade to glusterfs-6.0-24.el7rhgs.
8. After the upgrade completed successfully, started glusterd.
9. Waited for self-heal to complete on both disperse volumes.
10. Repeated steps 6 to 9 on the other nodes, monitoring the progress of the kernel untar workload after each node's upgrade completed (a condensed shell sketch of this per-node flow appears below).

Observation: the kernel untar workload stayed in progress with no interruption.

With these steps, marking this bug as verified. After upgrading the servers, the op-version was also bumped to 70000, and the clients were unmounted, upgraded, and the disperse volumes remounted.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0288
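For reference, a condensed shell sketch of the per-node in-service flow exercised in the verification above, assuming the two dispersed volumes are named ecvol1 and ecvol2 and that the heal wait is a simple poll of `gluster volume heal <vol> info`; the volume names and the polling loop are illustrative assumptions, not details from the bug.

```bash
# --- One-time preparation, from any node: relax the disperse options (step 3) ---
for vol in ecvol1 ecvol2; do
    gluster volume set "$vol" disperse.eager-lock off
    gluster volume set "$vol" disperse.optimistic-change-log off
done

# --- On the node being upgraded (steps 6-8): stop gluster processes, upgrade, restart ---
pkill glusterfsd
pkill glusterfs
systemctl stop glusterd
yum update glusterfs-server        # brings the node to glusterfs-6.0-24.el7rhgs
systemctl start glusterd

# --- Step 9: wait until self-heal reports zero pending entries on both volumes ---
for vol in ecvol1 ecvol2; do
    while gluster volume heal "$vol" info | grep -q 'Number of entries: [1-9]'; do
        sleep 30
    done
done

# --- After all six nodes are upgraded, from any node: bump the cluster op-version ---
gluster volume set all cluster.op-version 70000
```

Upgrading one node at a time and waiting for heals to drain before moving on is what keeps the clients' kernel untar workload running throughout the procedure.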