Created attachment 765474 [details]
Attaching engine and vdsm logs.

Description of problem:
After creating a distributed replicate volume, it becomes a replicate volume, with a message in the Events tab saying "Detected changes in properties of volume vol3 of cluster Cluster_anshi, and updated the same in engine DB."

Version-Release number of selected component (if applicable):
glusterfs-3.3.0.10rhs-1.el6rhs.x86_64
vdsm-4.9.6-24.el6rhs.x86_64
rhsc-2.1.0-0.bb4.el6rhs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Log in to the console.
2. Create a distributed replicate volume with a replica count of 2.

Actual results:
The distributed replicate volume changes to a replicate volume once creation completes, with an event message saying "Detected changes in properties of volume vol3 of cluster Cluster_anshi, and updated the same in engine DB."

Expected results:
A volume of type distributed replicate should be created.

Additional info:
If the volume type selected is "Distributed Replicate" with Replica Count = 2 and we add exactly two bricks to the volume, the final type for the volume will be Replicate only. This is expected behavior. If the number of bricks added is a multiple of the Replica Count (i.e. 4, 6, 8, ...), the volume type is set to "Distributed Replicate" correctly. Kindly check whether the number of bricks is exactly equal to the replica count or a multiple of it.
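The classification rule described above can be sketched in Python. This is a hypothetical illustration of the rule, not vdsm's or the engine's actual code; the function name is invented:

```python
def volume_type(brick_count, replica_count):
    """Classify a Gluster volume from its brick and replica counts.

    A sketch of the rule described above: with exactly replica_count
    bricks there is a single replica set, so the volume is plain
    REPLICATE; with a larger multiple there are several replica sets
    to distribute files across, so it is DISTRIBUTED_REPLICATE.
    """
    if replica_count <= 1:
        return "DISTRIBUTE"
    if brick_count == replica_count:
        return "REPLICATE"
    if brick_count % replica_count == 0:
        return "DISTRIBUTED_REPLICATE"
    raise ValueError("brick count must be a multiple of the replica count")
```

For example, 2 bricks with replica count 2 yields REPLICATE, while 4 bricks with replica count 2 yields DISTRIBUTED_REPLICATE.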
Hi Shubendu,

I am still able to reproduce the issue.

Thanks,
Kasturi.
Also, once the volume type is changed in the UI, check what details are shown for volume info in the CLI.
Hi Shubendu,

Once the volume type is changed in the UI, these are the details shown for volume info in the CLI:

Volume Name: vol1
Type: Distributed-Replicate
Volume ID: d41f97ab-6493-46f9-97bc-6dd65be0de89
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.156:/rhs/brick1/b1
Brick2: 10.70.37.48:/rhs/brick1/b1
Brick3: 10.70.37.156:/rhs/brick1/b2
Brick4: 10.70.37.48:/rhs/brick1/b2
Options Reconfigured:
auth.allow: *
user.cifs: on
nfs.disable: off

Thanks,
Kasturi.
The output from command "gluster volume info myVol --xml"
---------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>myVol</name>
        <id>4d0242ed-af97-4b38-8ba5-44e5b6e809d5</id>
        <type>2</type>
        <status>0</status>
        <brickCount>4</brickCount>
        <distCount>2</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>2</replicaCount>
        <transport>0</transport>
        <bricks>
          <brick>10.70.37.48:/rhs/brick1/mm11</brick>
          <brick>10.70.37.48:/rhs/brick1/mm22</brick>
          <brick>10.70.37.156:/rhs/nn11</brick>
          <brick>10.70.37.156:/rhs/nn22</brick>
        </bricks>
        <optCount>3</optCount>
        <options>
          <option>
            <name>auth.allow</name>
            <value>*</value>
          </option>
          <option>
            <name>user.cifs</name>
            <value>on</value>
          </option>
          <option>
            <name>nfs.disable</name>
            <value>off</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>

Output from the command "vdsClient -s localhost glusterVolumesList"
--------------------------------------------------------------------
{'status': {'code': 0, 'message': 'Done'},
 'volumes': {'myVol': {'brickCount': '4',
                       'bricks': ['10.70.37.48:/rhs/brick1/mm11',
                                  '10.70.37.48:/rhs/brick1/mm22',
                                  '10.70.37.156:/rhs/nn11',
                                  '10.70.37.156:/rhs/nn22'],
                       'distCount': '2',
                       'options': {'auth.allow': '*',
                                   'nfs.disable': 'off',
                                   'user.cifs': 'on'},
                       'replicaCount': '2',
                       'stripeCount': '1',
                       'transportType': ['TCP'],
                       'uuid': '4d0242ed-af97-4b38-8ba5-44e5b6e809d5',
                       'volumeName': 'myVol',
                       'volumeStatus': 'OFFLINE',
                       'volumeType': 'REPLICATE'}}}

Done
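Note that the vdsClient output reports volumeType REPLICATE even though brickCount (4) is twice replicaCount (2). A consumer of the CLI's XML output has to derive the distributed variant from the counts itself. The sketch below shows one way to do that; the function name and the numeric-type mapping are assumptions for illustration, not vdsm's actual implementation:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping of gluster's numeric <type> codes to base
# names; the exact table is an assumption, not taken from the
# gluster source.
BASE_TYPES = {0: "DISTRIBUTE", 1: "STRIPE", 2: "REPLICATE"}

def parse_volume_type(xml_text):
    """Parse `gluster volume info --xml` output and derive the volume
    type, promoting REPLICATE to DISTRIBUTED_REPLICATE when there is
    more than one replica set (brickCount > replicaCount)."""
    vol = ET.fromstring(xml_text).find(".//volume")
    base = BASE_TYPES[int(vol.findtext("type"))]
    bricks = int(vol.findtext("brickCount"))
    replica = int(vol.findtext("replicaCount"))
    if base == "REPLICATE" and bricks > replica:
        return "DISTRIBUTED_" + base
    return base
```

Applied to the XML above (type 2, 4 bricks, replica count 2), this would yield DISTRIBUTED_REPLICATE rather than the plain REPLICATE seen in the vdsClient output.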
Could you please attach the vdsm log (/var/log/vdsm/vdsm.log)?
Created attachment 773563 [details] Attaching engine and vdsm logs.
Created attachment 773564 [details] Attaching vdsm logs in node2
Created attachment 773565 [details] Attaching engine.log
Attached engine and vdsm logs.
This works in RHS 2.1. As there are no more updates to RHS 2.0, we're closing this as NEXTRELEASE.