Bug 978269 - [RHSC] - After creating a distributed replicate volume it becomes a replicate volume on the 2.0U5 node in a 3.1 cluster.
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Timothy Asir
QA Contact: RamaKasturi
Depends On:
Blocks:
 
Reported: 2013-06-26 04:46 EDT by RamaKasturi
Modified: 2013-08-05 06:28 EDT
CC: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-31 06:29:20 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Attaching engine and vdsm logs. (3.70 MB, text/x-log)
2013-06-26 04:46 EDT, RamaKasturi
Attaching engine and vdsm logs. (76.56 KB, text/x-log)
2013-07-15 02:51 EDT, RamaKasturi
Attaching vdsm logs in node2 (66.79 KB, text/x-log)
2013-07-15 02:52 EDT, RamaKasturi
Attaching engine.log (77.23 KB, text/x-log)
2013-07-15 02:53 EDT, RamaKasturi

Description RamaKasturi 2013-06-26 04:46:28 EDT
Created attachment 765474 [details]
Attaching engine and vdsm logs.

Description of problem:
After creating a distributed replicate volume, it becomes a replicate volume, with a message in the event tab saying "Detected changes in properties of volume vol3 of cluster Cluster_anshi, and updated the same in engine DB."

Version-Release number of selected component (if applicable):
glusterfs-3.3.0.10rhs-1.el6rhs.x86_64
vdsm-4.9.6-24.el6rhs.x86_64
rhsc-2.1.0-0.bb4.el6rhs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Log in to the console.
2. Create a distributed replicate volume with a replica count of 2.
3. 

Actual results:
The distributed replicate volume changes to a replicate volume once creation completes, and an event message is shown saying "Detected changes in properties of volume vol3 of cluster Cluster_anshi, and updated the same in engine DB."

Expected results:
A volume of type distributed replicate should be created.

Additional info:
Comment 2 Shubhendu Tripathi 2013-07-02 08:20:13 EDT
If the volume type selected is "Distributed Replicate" with Replica Count = 2 and exactly two bricks are added to the volume, the final type of the volume will be Replicate only. This is expected behavior.

If the number of bricks added is a larger multiple of the Replica Count (i.e. 4, 6, 8 ...), the volume type is set to "Distributed Replicate" properly.

Kindly check whether the number of bricks is exactly equal to the replica count or a multiple of it.
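
For illustration, the type-derivation rule described above can be sketched roughly as follows (illustrative Python only, not the actual vdsm/engine code; the function name is made up):

def derive_volume_type(brick_count, replica_count):
    # Exactly one replica set: brick count equals replica count -> plain Replicate
    if replica_count > 1 and brick_count == replica_count:
        return "REPLICATE"
    # More than one replica set: brick count is a larger multiple of the
    # replica count, so the sets are distributed -> Distributed Replicate
    if replica_count > 1 and brick_count % replica_count == 0:
        return "DISTRIBUTED_REPLICATE"
    return "DISTRIBUTE"

# replica 2 with 2 bricks -> REPLICATE (the expected-behavior case above)
# replica 2 with 4 bricks -> DISTRIBUTED_REPLICATE (the case reported in this bug)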
Comment 3 RamaKasturi 2013-07-03 02:53:17 EDT
Hi Shubhendu,

        I am still able to reproduce the issue.

Thanks,
Kasturi.
Comment 4 Shubhendu Tripathi 2013-07-04 03:12:20 EDT
Also, once the volume type has changed in the UI, please check what details are shown for the volume info in the CLI.
Comment 5 RamaKasturi 2013-07-04 03:36:20 EDT
Hi Shubhendu,

   Once the volume type is changed in the UI, these are the details shown for volume info in the CLI.

Volume Name: vol1
Type: Distributed-Replicate
Volume ID: d41f97ab-6493-46f9-97bc-6dd65be0de89
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.156:/rhs/brick1/b1
Brick2: 10.70.37.48:/rhs/brick1/b1
Brick3: 10.70.37.156:/rhs/brick1/b2
Brick4: 10.70.37.48:/rhs/brick1/b2
Options Reconfigured:
auth.allow: *
user.cifs: on
nfs.disable: off

Thanks 
kasturi.
Comment 6 Shubhendu Tripathi 2013-07-05 07:07:27 EDT
The output from command "gluster volume info myVol --xml"
---------------------------------------------------------

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>myVol</name>
        <id>4d0242ed-af97-4b38-8ba5-44e5b6e809d5</id>
        <type>2</type>
        <status>0</status>
        <brickCount>4</brickCount>
        <distCount>2</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>2</replicaCount>
        <transport>0</transport>
        <bricks>
          <brick>10.70.37.48:/rhs/brick1/mm11</brick>
          <brick>10.70.37.48:/rhs/brick1/mm22</brick>
          <brick>10.70.37.156:/rhs/nn11</brick>
          <brick>10.70.37.156:/rhs/nn22</brick>
        </bricks>
        <optCount>3</optCount>
        <options>
          <option>
            <name>auth.allow</name>
            <value>*</value>
          </option>
          <option>
            <name>user.cifs</name>
            <value>on</value>
          </option>
          <option>
            <name>nfs.disable</name>
            <value>off</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>


Output from the command "vdsClient -s localhost glusterVolumesList"
--------------------------------------------------------------------

{'status': {'code': 0, 'message': 'Done'},
 'volumes': {'myVol': {'brickCount': '4',
                       'bricks': ['10.70.37.48:/rhs/brick1/mm11',
                                  '10.70.37.48:/rhs/brick1/mm22',
                                  '10.70.37.156:/rhs/nn11',
                                  '10.70.37.156:/rhs/nn22'],
                       'distCount': '2',
                       'options': {'auth.allow': '*',
                                   'nfs.disable': 'off',
                                   'user.cifs': 'on'},
                       'replicaCount': '2',
                       'stripeCount': '1',
                       'transportType': ['TCP'],
                       'uuid': '4d0242ed-af97-4b38-8ba5-44e5b6e809d5',
                       'volumeName': 'myVol',
                       'volumeStatus': 'OFFLINE',
                       'volumeType': 'REPLICATE'}}}
Done
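
The XML above already carries enough information to tell the two types apart: brickCount is 4 while replicaCount is 2, i.e. two replica sets (the "2 x 2 = 4" layout shown by the CLI), yet the vdsClient output reports the volumeType as plain REPLICATE. For reference, the relevant fields can be pulled out of the XML like this (illustrative snippet only, not the vdsm parser itself):

import xml.etree.ElementTree as ET

def volume_layout(xml_text):
    # Parse the "gluster volume info --xml" output shown above
    root = ET.fromstring(xml_text)
    vol = root.find("volInfo/volumes/volume")
    brick_count = int(vol.findtext("brickCount"))
    replica_count = int(vol.findtext("replicaCount"))
    # 4 bricks with replica 2 -> 2 replica sets, which should map to
    # DISTRIBUTED_REPLICATE rather than REPLICATE
    replica_sets = brick_count // replica_count
    return brick_count, replica_count, replica_sets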
Comment 7 Timothy Asir 2013-07-15 02:33:05 EDT
Could you please attach the vdsm.log (/var/log/vdsm/vdsm.log)?
Comment 8 RamaKasturi 2013-07-15 02:51:58 EDT
Created attachment 773563 [details]
Attaching engine and vdsm logs.
Comment 9 RamaKasturi 2013-07-15 02:52:30 EDT
Created attachment 773564 [details]
Attaching vdsm logs in node2
Comment 10 RamaKasturi 2013-07-15 02:53:26 EDT
Created attachment 773565 [details]
Attaching engine.log
Comment 11 RamaKasturi 2013-07-15 02:56:07 EDT
Attached engine and vdsm logs.
Comment 12 Sahina Bose 2013-07-31 06:29:20 EDT
This works in RHS 2.1.
As there are no more updates on RHS 2.0, we're closing this as NEXTRELEASE.
