Bug 1569336
Summary: | Volume status inode is broken with brickmux | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | hari gowtham <hgowtham> |
Component: | glusterd | Assignee: | hari gowtham <hgowtham> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 3.12 | CC: | amukherj, bugs, hgowtham, moagrawa, rhinduja, rhs-bugs, rmadaka, storage-qa-internal, vbellur |
Target Milestone: | --- | Keywords: | Reopened |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.12.15 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1566067 | Environment: | |
Last Closed: | 2018-10-23 14:21:35 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1566067, 1569346 | ||
Bug Blocks: | 1559452 |
Comment 1
hari gowtham
2018-04-19 05:22:39 UTC
Hari - is this patch backported to 3.12?

Hi Atin,

No, it wasn't backported. It has some dependent patches missing, so I will have to edit it to send it to 3.12. Shall I proceed with it?

Regards,
Hari.

A patch was sent back then to backport it and later got abandoned: https://review.gluster.org/#/c/glusterfs/+/19903/

Well, I haven't seen any users reporting problems around brick multiplexing in the 3.12 series, whereas there were plenty of them elsewhere. This points to the fact that we probably don't have many users who are using this feature with 3.12. If the backport effort is huge, I'd not go for it.

(In reply to Atin Mukherjee from comment #5)
> Well, I haven't seen any users reporting problems around brick multiplexing
> in the 3.12 series, whereas there were plenty of them elsewhere. This points
> to the fact that we probably don't have many users who are using this
> feature with 3.12. If the backport effort is huge, I'd not go for it.

Especially after we have released 4.0, 4.1, and now that branch 5 is in place.

(In reply to Atin Mukherjee from comment #6)
> (In reply to Atin Mukherjee from comment #5)
> > Well, I haven't seen any users reporting problems around brick
> > multiplexing in the 3.12 series, whereas there were plenty of them
> > elsewhere. This points to the fact that we probably don't have many users
> > who are using this feature with 3.12. If the backport effort is huge, I'd
> > not go for it.
>
> Especially after we have released 4.0, 4.1, and now that branch 5 is in place.

As the fix was already backported once, the changes were minimal, so I have backported it again.

Given that 3.12 is going to EOL, we have decided not to accept this backport, considering we don't have any users reporting issues against brick multiplexing on 3.12.

Hi Atin,

Do we have to backport this, considering that there is going to be another release on 3.12? Users might come across this in the future.

Regards,
Hari.
I'll leave it up to you to decide, but if no one has used the 3.12 series so far for trying brick multiplexing, they are unlikely to start with its last update; rather, they should be encouraged to upgrade to the latest versions.

I thought of providing the fix so that it would be there to avoid the issue if it pops up when someone gives it a try. But as they aren't using brick mux with 3.12, it makes sense for them to upgrade to the latest release. The patch was merged by Jiffin as it was available for review: https://review.gluster.org/#/c/glusterfs/+/19903/ I'm changing the status.

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.15, please open a new bug report.

glusterfs-3.12.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000114.html
[2] https://www.gluster.org/pipermail/gluster-users/