Bug 1552404 - [CIOT] : Gluster CLI says "io-threads : enabled" on existing volumes post upgrade.
Summary: [CIOT] : Gluster CLI says "io-threads : enabled" on existing volumes post upgrade.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1543068 1545056
Blocks:
 
Reported: 2018-03-07 04:59 UTC by Ravishankar N
Modified: 2018-03-26 12:32 UTC
CC: 6 users

Fixed In Version: glusterfs-4.0.1
Clone Of: 1545056
Environment:
Last Closed: 2018-03-26 12:32:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-03-07 04:59:03 UTC
+++ This bug was initially created as a clone of Bug #1545056 +++

+++ This bug was initially created as a clone of Bug #1543068 +++

Description of problem:

I have also reproduced it on old volumes: create a replicate volume, upgrade the gluster bits, bump up the op-version, and check client-io-threads (CIOT) via the CLI.

Freshly created volumes have io-threads as "off" post upgrade, which is expected.

I can confirm that the xlator is not loaded in the volfile in either case.
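The reproduction steps above can be sketched as the following CLI sequence. This is an illustrative transcript, not a runnable script: it needs a live Gluster cluster, and the volume name, brick paths, and op-version value are assumptions, not taken from the report.

```
# On the old bits (e.g. glusterfs-3.8), create and start a replica volume:
gluster volume create testvol replica 3 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
gluster volume start testvol

# ... upgrade the gluster packages on all nodes ...

# Bump the cluster op-version (value illustrative):
gluster volume set all cluster.op-version 40000

# Buggy behaviour: displays "on" even though the xlator is not
# loaded in the client volfile; expected display is "off":
gluster volume get testvol performance.client-io-threads
```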

--- Additional comment from Worker Ant on 2018-02-14 02:31:09 EST ---

REVIEW: https://review.gluster.org/19567 (glusterd: fix bug in volume get value for client-io-threads) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-03-06 23:48:39 EST ---

COMMIT: https://review.gluster.org/19567 committed in master by "Atin Mukherjee" <amukherj> with commit message: glusterd: volume get fixes for client-io-threads & quorum-type

1. If a replica volume created on glusterfs-3.8 was upgraded to
glusterfs-3.12, `gluster vol get volname client-io-threads` displayed
'on' even though it wasn't and the xlator wasn't loaded on
the client-graph. This was due to removing certain checks in
glusterd_get_default_val_for_volopt as a part of commit
47604fad4c2a3951077e41e0c007ceb979bb2c24. Fix it.

2. Also, as a part of op-version bump-up, client-io-threads was being
loaded on the clients during volfile regeneration. Prevent it.

3. AFR assumes quorum-type to be auto in newly created replica 3 (odd
replica in general) volumes but `gluster vol get quorum-type` displays
'none'. Fix it.

Change-Id: I19e586361ed1065c70fb378533d3b4dac1095df9
BUG: 1545056
Signed-off-by: Ravishankar N <ravishankar>
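The defaulting logic the commit message describes can be sketched as follows. This is a minimal illustrative model in Python, not glusterd's actual C code; the function and field names (`effective_client_io_threads`, `replica_count`, etc.) are hypothetical, and the actual fix lives in glusterd_get_default_val_for_volopt and the volgen code.

```python
def effective_client_io_threads(volinfo):
    """What `volume get <vol> client-io-threads` should display.

    Sketch of the intent of fix #1: if the option was never explicitly
    set on a replica volume, the io-threads xlator is not loaded in the
    client graph, so the CLI must report "off", not "on".
    """
    explicit = volinfo.get("options", {}).get("performance.client-io-threads")
    if explicit is not None:
        return explicit
    # Replicate volumes default to "off"; plain distribute defaults to "on".
    return "off" if volinfo.get("replica_count", 1) > 1 else "on"


def effective_quorum_type(volinfo):
    """What `volume get <vol> quorum-type` should display.

    Sketch of the intent of fix #3: AFR treats quorum-type as "auto"
    for replica 3 (odd replica counts in general) even when nothing is
    set explicitly, so the CLI should not display "none" there.
    """
    explicit = volinfo.get("options", {}).get("cluster.quorum-type")
    if explicit is not None:
        return explicit
    replicas = volinfo.get("replica_count", 1)
    return "auto" if replicas >= 3 and replicas % 2 == 1 else "none"
```

For example, a replica 3 volume with no options set would display client-io-threads "off" and quorum-type "auto" under this model, matching the post-fix behaviour described above.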

Comment 1 Worker Ant 2018-03-07 05:01:41 UTC
REVIEW: https://review.gluster.org/19683 (glusterd: volume get fixes for client-io-threads & quorum-type) posted (#1) for review on release-4.0 by Ravishankar N

Comment 2 Worker Ant 2018-03-16 13:39:45 UTC
COMMIT: https://review.gluster.org/19683 committed in release-4.0 by "Shyamsundar Ranganathan" <srangana> with commit message: glusterd: volume get fixes for client-io-threads & quorum-type

1. If a replica volume created on glusterfs-3.8 was upgraded to
glusterfs-3.12, `gluster vol get volname client-io-threads` displayed
'on' even though it wasn't and the xlator wasn't loaded on
the client-graph. This was due to removing certain checks in
glusterd_get_default_val_for_volopt as a part of commit
47604fad4c2a3951077e41e0c007ceb979bb2c24. Fix it.

2. Also, as a part of op-version bump-up, client-io-threads was being
loaded on the clients during volfile regeneration. Prevent it.

3. AFR assumes quorum-type to be auto in newly created replica 3 (odd
replica in general) volumes but `gluster vol get quorum-type` displays
'none'. Fix it.

Change-Id: I19e586361ed1065c70fb378533d3b4dac1095df9
BUG: 1552404
Signed-off-by: Ravishankar N <ravishankar>
(cherry picked from commit bd2c45fe3180fe36b042d5eabd348b6eaeb8d3e2)

Comment 3 Shyamsundar 2018-03-26 12:32:11 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.1, please open a new bug report.

glusterfs-4.0.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000093.html
[2] https://www.gluster.org/pipermail/gluster-users/
