Bug 1303045
| Summary: | NFS + attach tier: I/Os pause for some time during attach tier | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | Mohammed Rafi KC <rkavunga> |
| Status: | CLOSED WONTFIX | QA Contact: | krishnaram Karthick <kramdoss> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | hgowtham, jbyers, rcyriac, rhinduja, rhs-bugs, rkavunga, sankarshan |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | tier-fuse-nfs-samba | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | When a tier is attached while I/O is occurring on an NFS mount, I/O pauses temporarily, usually for 3 to 5 minutes. If I/O does not resume within 5 minutes, use the 'gluster volume start $VOLNAME force' command to resume I/O without interruption (see the workaround sketch below this table). | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1358586 (view as bug list) | Environment: | |
| Last Closed: | 2018-11-08 18:59:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1268895, 1358586 | | |
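The workaround in the Doc Text can be exercised roughly as follows. This is a minimal sketch, not taken from the report: the volume name `tiervol` and the client mount point `/mnt/nfs-tiervol` are placeholder assumptions.

```bash
# Check whether the volume's NFS server and brick processes are online
# (run on a storage node).
gluster volume status tiervol

# If I/O on the NFS mount has not resumed within ~5 minutes of attach-tier,
# force-start the volume; this starts any missing processes (such as gNFS)
# without disturbing the ones that are already running.
gluster volume start tiervol force

# Verify on an NFS client that I/O has resumed.
ls /mnt/nfs-tiervol
```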
Description (Nag Pavan Chilakam, 2016-01-29 12:15:25 UTC)
We have another issue (bug 1306194) related to the same area, so I think it is better to have one doc text instead of two. It would read something like: "When a tier is attached while I/O is occurring on an NFS mount, I/O pauses temporarily, usually for 3 to 5 minutes. If I/O does not resume within the expected time, running the command 'gluster volume start <volname> force' will resume the I/O without any interruption."

Looks good to me.

The suggested workaround of restarting the volume when I/O does not resume works well, except for a small glitch: bug bz#1309186 has been raised for file creates failing with "failed to open '<filename>': Too many levels of symbolic links" during file create/write when gNFS is restarted using 'volume start force'.

I will try the following steps (a rough shell sketch of these steps appears at the end of this report):
1) Reproduce the issue with 3.1.3.
2) Measure the difference in time between add-brick on a regular volume and attach-tier on a tiered volume.
3) Try to reproduce the issue by just restarting gNFS, without any attach-tier.

Test results for comment 9:
1) I reproduced this issue on the latest master; although it is not 100% consistent, it is very easy to reproduce.
2) For both attach-tier and add-brick, there is a delay of approximately 3 minutes before the operation resumes.
3) With a simple restart of the NFS server, I have so far not seen a large delay before the operation resumes.

As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.
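Below is a rough shell sketch of the reproduction and timing steps discussed above. None of it is taken from the original report: the volume names (`tiervol`, `regvol`), brick paths, and the NFS mount point are illustrative placeholders, and the timestamp log is only a crude way to estimate how long I/O stays paused.

```bash
#!/bin/bash
# Hypothetical setup: a tiered-capable volume 'tiervol' exported over gNFS and
# mounted on a client at /mnt/nfs-tiervol (NFSv3).

# 1) Start continuous I/O on the NFS mount (run on the client); log a timestamp
#    after each successful write so pauses show up as gaps in the log.
while true; do
    dd if=/dev/zero of=/mnt/nfs-tiervol/io-probe bs=1M count=10 oflag=direct 2>/dev/null \
        && date +%s >> /tmp/io-success.log
    sleep 1
done &

# 2) Attach a hot tier while the I/O above is running (run on a server node).
gluster volume tier tiervol attach replica 2 \
    server1:/bricks/hot1 server2:/bricks/hot2

# 3) For comparison, repeat the same I/O probe against a regular volume and
#    add a brick instead of attaching a tier:
# gluster volume add-brick regvol server1:/bricks/new1 server2:/bricks/new2

# 4) If I/O does not resume within ~5 minutes, apply the documented workaround:
# gluster volume start tiervol force
```

Comparing the largest gap between consecutive timestamps in /tmp/io-success.log for the attach-tier run against the add-brick run gives the kind of timing difference referred to in the test results above.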