Description of problem:
=======================
After peer probing another RHGS node (node-2) from a node (node-1) that has a quota-enabled volume, the peer probe reported success, but the peer ended up in the "Rejected" state.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-8

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have an RHGS node (node-1) with a quota-enabled volume
2. Peer probe another node (node-2) from node-1
3. Check the peer status // it will be in the "Rejected" state

Actual results:
===============
Peer status is "Rejected"

Expected results:
=================
The probed peer should be in the "Peer in Cluster (Connected)" state.

Additional info:
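The steps above can be sketched as a minimal command sequence. This is an illustrative reproduction sketch, not taken from the report's transcript: the hostnames, volume name, and brick path are assumptions, and it requires glusterd running on both nodes.

```shell
# Reproduction sketch (hostnames, volume name, and brick path are
# illustrative assumptions; run on node-1 with glusterd up on both nodes).

# 1. Create and start a volume on node-1, then enable quota on it:
gluster volume create testvol node-1:/rhs/brick1/b1 force
gluster volume start testvol
gluster volume quota testvol enable

# 2. Probe a fresh node from node-1:
gluster peer probe node-2

# 3. Check the peer state; on the affected build (glusterfs-3.7.5-8),
#    node-2 shows "Peer Rejected" instead of "Peer in Cluster (Connected)":
gluster peer status
```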
Patch submitted upstream: http://review.gluster.org/#/c/12865/
upstream patch: http://review.gluster.org/#/c/12865/
downstream patch: https://code.engineering.redhat.com/gerrit/#/c/63352/
release-3.7 patch: http://review.gluster.org/#/c/12872/
[root@rhs001 ~]# gluster v create vol0 replica 2 10.70.47.143:/rhs/brick1/b1 10.70.47.145:/rhs/brick1/b2 10.70.47.143:/rhs/brick2/b3 10.70.47.145:/rhs/brick2/b4
volume create: vol0: success: please start the volume to access data

[root@rhs001 ~]# gluster v status
Volume vol0 is not started

[root@rhs001 ~]# gluster v start vol0
volume start: vol0: success

[root@rhs001 ~]# gluster v quota vol0 enable
volume quota : success

[root@rhs001 ~]# gluster v quota vol0 limit-usage / 20GB
volume quota : success

[root@rhs001 ~]# gluster v info

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: b0a1562f-0d57-4d85-a481-0f3f3e4eefcd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.143:/rhs/brick2/b3
Brick4: 10.70.47.145:/rhs/brick2/b4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

[root@rhs001 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.47.145
Uuid: af76d670-850c-4530-bf51-c8a7a149a16d
State: Peer in Cluster (Connected)

[root@rhs001 ~]# gluster peer probe 10.70.47.2
peer probe: success.

[root@rhs001 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.70.47.145
Uuid: af76d670-850c-4530-bf51-c8a7a149a16d
State: Peer in Cluster (Connected)

Hostname: 10.70.47.2
Uuid: 97d3ca0c-bd2d-443f-9b57-627920e1f026
State: Peer in Cluster (Connected)

[root@rhs001 ~]# gluster peer probe 10.70.47.3
peer probe: success.

[root@rhs001 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.47.145
Uuid: af76d670-850c-4530-bf51-c8a7a149a16d
State: Peer in Cluster (Connected)

Hostname: 10.70.47.2
Uuid: 97d3ca0c-bd2d-443f-9b57-627920e1f026
State: Peer in Cluster (Connected)

Hostname: 10.70.47.3
Uuid: 70201ffd-3bfe-4efa-82ca-f2be135e4a31
State: Peer in Cluster (Connected)

Bug verified on build glusterfs-3.7.5-11.el7rhgs.x86_64
Moving bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html