Bug 1543068 - [CIOT] : Gluster CLI says "io-threads : enabled" on existing volumes post upgrade.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Ravishankar N
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks: 1503137 1545056 1552404
 
Reported: 2018-02-07 16:45 UTC by Ambarish
Modified: 2018-09-04 06:43 UTC
CC: 7 users

Fixed In Version: glusterfs-3.12.2-5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1545056 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:42:04 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:43:39 UTC)

Description Ambarish 2018-02-07 16:45:29 UTC
Description of problem:
------------------------

gluster v get reports client-io-threads as "on" by default on a freshly created replicate volume.

[root@gqas013 ~]# gluster v get drogon client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           on                                      


I checked the volfile (and the client mount logs); the io-threads xlator does not appear to be loaded though.
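
(For reference, one way to confirm whether the io-threads xlator is present in the generated client graph; the volfile path below assumes a default glusterd install and tcp transport, so adjust for your setup:)

# grep -A2 'performance/io-threads' /var/lib/glusterd/vols/drogon/trusted-drogon.tcp-fuse.vol
# no output here would mean the xlator was not written into the client volfile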


Version-Release number of selected component (if applicable):
---------------------------------------------------------------

3.12.2-3

How reproducible:
-----------------

100%


Additional info:
----------------

 
Volume Name: drogon
Type: Distributed-Replicate
Volume ID: 577867b7-8aec-46e5-8f9b-ab8d85517bc2
Status: Started
Snapshot Count: 0
Number of Bricks: 24 x 3 = 72
Transport-type: tcp
Bricks:
Brick1: gqas013.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick2: gqas016.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick4: gqas008.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick5: gqas003.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick6: gqas007.sbu.lab.eng.bos.redhat.com:/bricks1/A1
Brick7: gqas013.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick8: gqas016.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick9: gqas006.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick10: gqas008.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick11: gqas003.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick12: gqas007.sbu.lab.eng.bos.redhat.com:/bricks2/A1
Brick13: gqas013.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick14: gqas016.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick15: gqas006.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick16: gqas008.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick17: gqas003.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick18: gqas007.sbu.lab.eng.bos.redhat.com:/bricks3/A1
Brick19: gqas013.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick20: gqas016.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick21: gqas006.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick22: gqas008.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick23: gqas003.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick24: gqas007.sbu.lab.eng.bos.redhat.com:/bricks4/A1
Brick25: gqas013.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick26: gqas016.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick27: gqas006.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick28: gqas008.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick29: gqas003.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick30: gqas007.sbu.lab.eng.bos.redhat.com:/bricks5/A1
Brick31: gqas013.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick32: gqas016.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick33: gqas006.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick34: gqas008.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick35: gqas003.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick36: gqas007.sbu.lab.eng.bos.redhat.com:/bricks6/A1
Brick37: gqas013.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick38: gqas016.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick39: gqas006.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick40: gqas008.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick41: gqas003.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick42: gqas007.sbu.lab.eng.bos.redhat.com:/bricks7/A1
Brick43: gqas013.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick44: gqas016.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick45: gqas006.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick46: gqas008.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick47: gqas003.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick48: gqas007.sbu.lab.eng.bos.redhat.com:/bricks8/A1
Brick49: gqas013.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick50: gqas016.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick51: gqas006.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick52: gqas008.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick53: gqas003.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick54: gqas007.sbu.lab.eng.bos.redhat.com:/bricks9/A1
Brick55: gqas013.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick56: gqas016.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick57: gqas006.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick58: gqas008.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick59: gqas003.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick60: gqas007.sbu.lab.eng.bos.redhat.com:/bricks10/A1
Brick61: gqas013.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick62: gqas016.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick63: gqas006.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick64: gqas008.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick65: gqas003.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick66: gqas007.sbu.lab.eng.bos.redhat.com:/bricks11/A1
Brick67: gqas013.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Brick68: gqas016.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Brick69: gqas006.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Brick70: gqas008.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Brick71: gqas003.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Brick72: gqas007.sbu.lab.eng.bos.redhat.com:/bricks12/A1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@gqas013 ~]#

Comment 4 Ambarish 2018-02-08 06:58:06 UTC
Reproduced on a single brick volume as well.

Comment 5 Ravishankar N 2018-02-08 07:07:07 UTC
I was not able to re-create this on my dev VMs with the latest rhgs-3.4.0 source install (git HEAD has 3 extra patches on top of 3.12.2-3, namely:
* 4f5197f58 - (HEAD -> rhgs-3.4.0, origin/rhgs-3.4.0) cluster/dht: Add migration checks to dht_(f)xattrop (18 hours ago) <N Balachandran>
* ef0809430 - rpc: Showing some unusual timer error logs during brick stop (18 hours ago) <Mohit Agrawal>
* 14a8e4783 - md-cache: Add additional samba and macOS specific EAs to mdcache (19 hours ago) <Günther Deschner>)
----------------------------------------------------------------------
#gluster v create testvol replica 3 $VM1:/bricks/brick1 $VM2:/bricks/brick1 $VM1:/bricks/brick2 $VM2:/bricks/brick2 $VM1:/bricks/brick3 $VM2:/bricks/brick3 force

gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: c41ea54f-f900-455b-84b6-4abe8934dae0
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.43.122:/bricks/brick1
Brick2: 10.70.42.54:/bricks/brick1
Brick3: 10.70.43.122:/bricks/brick2
Brick4: 10.70.42.54:/bricks/brick2
Brick5: 10.70.43.122:/bricks/brick3
Brick6: 10.70.42.54:/bricks/brick3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

# gluster  v get testvol client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           off    
----------------------------------------------------------------------
Ambarish, could you check whether you can reproduce this on a single-node setup with fresh install bits (RHEL 7.x + rhgs-3.4)? That would help us identify whether this is related only to upgrades.

Comment 8 Ambarish 2018-02-09 04:44:42 UTC
Cannot reproduce the reported issue on a fresh install.

gluster v create testvol replica 2 gqas013:/bricks1/A1 gqas016:/bricks1/A1

Volume Name: testvol
Type: Replicate
Volume ID: 5d2b436e-2fd1-4201-8119-7f8d1eb25928
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gqas013:/bricks1/A1
Brick2: gqas016:/bricks1/A1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Comment 11 Ambarish 2018-02-09 06:10:02 UTC
STEPS TO REPRODUCE:

1. Create a replica 3 volume on 3.3.1 async.

[root@gqas013 /]# gluster v get drogon client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           off                                     
[root@gqas013 /]# 

No mounts necessary.

2. Stop glusterd and the volume.

3. Upgrade glusterfs to the 3.4 bits (offline upgrade).

4. Restart glusterd and start the volume with force.

5. Check io-threads on the old volume. It becomes "on" post upgrade for some reason.

[root@gqas013 ~]# gluster v get drogon client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           on                                      


6. After this, create a new volume, rep3. Observe that CIOT is "on" there as well.

[root@gqas013 ~]# gluster v create rep3 gqas013:/bricks1/aa gqas006:/bricks/Aa gqas003:/bricks1/Aaa 
volume create: rep3: success: please start the volume to access data

[root@gqas013 ~]# gluster v start rep3
volume start: rep3: success
[root@gqas013 ~]# 

[root@gqas013 ~]# gluster v get rep3 client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           on

Comment 12 Ravishankar N 2018-02-09 06:43:30 UTC
I think the problem is that the cluster op-version stays at 31101 before and after the upgrade, while a fresh install of rhgs-3.4.0 would have it at 31301.

Ambarish, could you confirm that /var/lib/glusterd/glusterd.info is still at 31101 post upgrade?

@Atin, wondering if we should bump up the cluster op-version to 31301 post upgrade?
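
(For reference, the persisted value can be read directly; glusterd stores it as operating-version in that file, which per the hypothesis above would still show 31101 before any bump:)

# grep operating-version /var/lib/glusterd/glusterd.info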

Comment 13 Atin Mukherjee 2018-02-09 07:55:24 UTC
If the cluster.op-version has not been bumped up as part of the upgrade process, then please close this bug, as that step is mandatory.
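
(Illustrative commands for that step, assuming all nodes are already upgraded; the target number matches the rhgs-3.4.0 op-version mentioned in comment 12:)

# gluster volume get all cluster.op-version
# gluster volume set all cluster.op-version 31301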

Comment 17 Ambarish 2018-02-12 09:31:13 UTC
Thanks Atin for https://bugzilla.redhat.com/show_bug.cgi?id=1543068#c14.

I have also reproduced it for old volumes: have a replicate volume, upgrade the gluster bits, bump up the op-version, and check CIOT via the CLI.

Freshly created volumes have IOT as "off" post upgrade, which is expected.

I can confirm that the xlator is not loaded in the volfile in either case.

Giving the bug an appropriate summary.

Comment 19 Ravishankar N 2018-02-14 07:32:33 UTC
Upstream patch: https://review.gluster.org/#/c/19567/1

Comment 24 Bala Konda Reddy M 2018-04-12 11:10:21 UTC
Build: 3.12.2-7

Upgraded from 3.8.4-54.4 to 3.12.2-7
Before upgrade: the client-io-threads option is turned off
gluster v get 2cross3_98 client-io-threads
Option                                  Value                                   
------                                  -----                                   
performance.client-io-threads           off    

After upgrade: the option is turned off as expected; earlier it was showing as on.

Hence marking it as verified.

Comment 26 errata-xmlrpc 2018-09-04 06:42:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

