Description of problem:
In test mode, when features.record-counters is switched on, a message needs to be displayed on the screen prompting the user to set the read/write counter values (if they are 0).

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume with a disperse cold tier and a distributed-replicate hot tier.
2. Verify that 'features.record-counters' is off and that 'cluster.write-freq-threshold' and 'cluster.read-freq-threshold' are both '0'.
3. Switch 'features.record-counters' on.

Actual results:
No message is displayed mentioning that the read/write counter thresholds are set to '0' and that the user can change them if required.

Expected results:
A message should be displayed on the screen, prompting the user to set the read/write counter values.

Additional info:
This is a follow-up of bug 1286346. The case described above was not fixed in 1286346, and this new bug is raised to track it.
[root@dhcp37-55 ~]# gluster v info nash

Volume Name: nash
Type: Tier
Volume ID: 66caac13-cb0a-4a5d-93e3-544ad19472c2
Status: Started
Number of Bricks: 10
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.37.203:/rhs/thinbrick2/nash2
Brick2: 10.70.37.55:/rhs/thinbrick2/nash2
Brick3: 10.70.37.203:/rhs/thinbrick2/nash
Brick4: 10.70.37.55:/rhs/thinbrick2/nash
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick5: 10.70.37.55:/rhs/thinbrick1/nash
Brick6: 10.70.37.203:/rhs/thinbrick1/nash
Brick7: 10.70.37.210:/rhs/thinbrick1/nash
Brick8: 10.70.37.141:/rhs/thinbrick1/nash
Brick9: 10.70.37.210:/rhs/thinbrick2/nash
Brick10: 10.70.37.141:/rhs/thinbrick2/nash
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
cluster.disperse-self-heal-daemon: enable
performance.readdir-ahead: on
nfs.disable: off
ganesha.enable: off
features.record-counters: off
cluster.write-freq-threshold: 0
cluster.read-freq-threshold: 0
nfs-ganesha: disable
cluster.enable-shared-storage: enable

[root@dhcp37-55 ~]# gluster v set nash features.record-counters on
volume set: success

[root@dhcp37-55 ~]# gluster v info nash

Volume Name: nash
Type: Tier
Volume ID: 66caac13-cb0a-4a5d-93e3-544ad19472c2
Status: Started
Number of Bricks: 10
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.37.203:/rhs/thinbrick2/nash2
Brick2: 10.70.37.55:/rhs/thinbrick2/nash2
Brick3: 10.70.37.203:/rhs/thinbrick2/nash
Brick4: 10.70.37.55:/rhs/thinbrick2/nash
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick5: 10.70.37.55:/rhs/thinbrick1/nash
Brick6: 10.70.37.203:/rhs/thinbrick1/nash
Brick7: 10.70.37.210:/rhs/thinbrick1/nash
Brick8: 10.70.37.141:/rhs/thinbrick1/nash
Brick9: 10.70.37.210:/rhs/thinbrick2/nash
Brick10: 10.70.37.141:/rhs/thinbrick2/nash
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
cluster.disperse-self-heal-daemon: enable
performance.readdir-ahead: on
nfs.disable: off
ganesha.enable: off
features.record-counters: on
cluster.write-freq-threshold: 0
cluster.read-freq-threshold: 0
nfs-ganesha: disable
cluster.enable-shared-storage: enable
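The requested behavior amounts to a simple check at `volume set` time: if both frequency thresholds are still 0 when record-counters is enabled, warn the user. A minimal sketch of that check as a standalone shell helper follows; the function name and warning text are hypothetical, not taken from the actual gluster CLI:

```shell
#!/bin/sh
# Hypothetical sketch: emit a warning when features.record-counters is
# being enabled while both frequency thresholds are still 0.
# (Function name and message wording are illustrative only.)
warn_if_thresholds_zero() {
    read_thr="$1"    # value of cluster.read-freq-threshold
    write_thr="$2"   # value of cluster.write-freq-threshold
    if [ "$read_thr" -eq 0 ] && [ "$write_thr" -eq 0 ]; then
        echo "Warning: cluster.read-freq-threshold and" \
             "cluster.write-freq-threshold are both 0;" \
             "consider setting non-zero values."
    fi
}

# In a real deployment the current values could be fetched with, e.g.:
#   gluster volume get <volname> cluster.read-freq-threshold
warn_if_thresholds_zero 0 0
```

With both thresholds at 0 (as in the transcript above) the helper prints the warning; with non-zero values it stays silent.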
Thank you for your bug report. We are no longer working on any improvements for Tier, so this bug will be set to CLOSED WONTFIX to reflect that. Please reopen if the RFE is deemed critical.