Bug 986321 - [RHS-RHOS] Cinder volume-create fails during rebalance
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Priority: high
Severity: high
Assigned To: shishir gowda
QA Contact: Sudhir D
Reported: 2013-07-19 08:08 EDT by Anush Shetty
Modified: 2013-12-08 20:36 EST
Fixed In Version: glusterfs-3.4.0.20rhs-1
Doc Type: Bug Fix
Environment: virt rhos cinder integration
Type: Bug
Last Closed: 2013-09-23 18:35:55 EDT

Description Anush Shetty 2013-07-19 08:08:25 EDT
Description of problem: Creating Cinder volumes fails while a rebalance is in progress on the backing volume; volume-create reports an error.

The Cinder volumes are hosted on an RHS (GlusterFS) volume.

Version-Release number of selected component (if applicable):

Cinder:
# rpm -qa | grep cinder
python-cinder-2013.1.2-3.el6ost.noarch
openstack-cinder-2013.1.2-3.el6ost.noarch
python-cinderclient-1.0.4-1.el6ost.noarch

RHS: glusterfs-3.3.0.11rhs-1.el6rhs.x86_64


How reproducible: Consistent


Steps to Reproduce:
1. Create RHS volume 

2. Configure cinder for glusterfs
   # openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
   # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
   # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

3. Create /etc/cinder/shares.conf 
   # cat /etc/cinder/shares.conf
      10.70.37.66:cinder-vol

4. Restart the openstack-cinder services, which mounts the RHS cinder volume

5. Add a brick to the RHS volume and start a rebalance (see the command sketch after these steps)

6. During the rebalance, try creating a cinder volume
   nova volume-create --display-name vol7 10
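
A minimal sketch of the commands behind steps 4 and 5, assuming the el6ost service name openstack-cinder-volume; the new brick paths are hypothetical examples. Because cinder-vol is a 2 x 2 distributed-replicate volume, bricks must be added in multiples of the replica count (pairs):

   Restart the Cinder volume service so it (re)mounts the shares listed in shares.conf:
   # service openstack-cinder-volume restart

   Expand the volume with a new replica pair, then rebalance:
   # gluster volume add-brick cinder-vol 10.70.37.66:/brick7/s1 10.70.37.173:/brick7/s1
   # gluster volume rebalance cinder-vol start
   # gluster volume rebalance cinder-vol status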

Actual results:

Creating the cinder volume fails; the new volume ends up in the error state (see the cinder list output under Additional info).

Expected results:

Cinder volume creation should succeed while a rebalance is in progress.

Additional info:

# df -h
Filesystem            Size  Used Avail Use% Mounted on

10.70.37.66:glance-vol
                      300G  6.3G  294G   3% /var/lib/glance/images
10.70.37.66:cinder-vol
                      100G   15G   86G  15% /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
10.70.37.66:cinder-vol
                      100G   15G   86G  15% /var/lib/nova/mnt/cf55327cba40506e44b37f45f55af5e7



[root@rhs ~]# gluster volume rebalance cinder-vol status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            5            0      completed
                             10.70.37.66                0            0            5            0      completed
                             10.70.37.71                0            0            5            0      completed
                            10.70.37.158                0            0            5            0      completed




[root@rhs ~]# gluster volume status cinder-vol
Status of volume: cinder-vol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.66:/brick6/s1				24023	Y	17578
Brick 10.70.37.173:/brick6/s1				24024	Y	23960
Brick 10.70.37.66:/brick5/s1				24024	Y	18716
Brick 10.70.37.173:/brick5/s1				24025	Y	25240
NFS Server on localhost					38467	Y	25246
Self-heal Daemon on localhost				N/A	Y	25252
NFS Server on 10.70.37.71				38467	Y	12847
Self-heal Daemon on 10.70.37.71				N/A	Y	12853
NFS Server on 10.70.37.158				38467	Y	2849
Self-heal Daemon on 10.70.37.158			N/A	Y	2855
NFS Server on 10.70.37.66				38467	Y	18722
Self-heal Daemon on 10.70.37.66				N/A	Y	18728



[root@rhs ~]# gluster volume info cinder-vol
 
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 19f5abf1-5739-417a-bcff-e56d0a5baa74
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.66:/brick6/s1
Brick2: 10.70.37.173:/brick6/s1
Brick3: 10.70.37.66:/brick5/s1
Brick4: 10.70.37.173:/brick5/s1
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on


# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 0eb5dc97-aa33-48e1-9d92-6b00aacfb5fa | available |     vol3     |  10  |     None    |  false   |                                      |
| 48d0df07-ec35-4842-abea-a7b95f04a62a |   error   |     vol7     |  10  |     None    |  false   |                                      |
| 5cdc0cf1-6ed3-4a3f-8428-a4605a23a183 | available |     vol4     |  10  |     None    |  false   |                                      |
| 6e2a039d-003e-4a0f-a35d-58eed8650d58 | available |     vol1     |  10  |     None    |  false   |                                      |
| a0d74942-4957-40be-b34f-d83712feb90b | available |     vol2     |  10  |     None    |  false   |                                      |
| bc28a417-cf0f-4f87-b619-7a22449c5167 |   error   |     vol5     |  10  |     None    |  false   |                                      |
| c894392a-1df9-4b46-a71c-9ddaf017db7b |   error   |     vol6     |  10  |     None    |  false   |                                      |
| ed4e8700-fde5-4324-8300-b17942abe06e |   error   |     vol5     |  10  |     None    |  false   |                                      |
| f1ea0af9-cef1-4024-8e47-1f17d7de35e4 |   in-use  |      1       |  15  |     None    |  false   | dad3f2eb-2219-4a3b-b065-89dc21cf59a6 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
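
To see why the failed attempts (vol5, vol6, vol7) went to the error state, the cinder volume log can be checked on the node running openstack-cinder-volume; the path below is the standard RHOS packaging location and is an assumption here:

   # grep -i 'error' /var/log/cinder/volume.log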
Comment 3 Anush Shetty 2013-08-19 08:23:30 EDT
Tried this case again with glusterfs-3.4.0.20rhs-2.el6rhs.x86_64. Didn't see this issue again.
Comment 4 Amar Tumballi 2013-08-19 08:33:16 EDT
Marking ON_QA as per comment #3
Comment 5 Anush Shetty 2013-08-19 08:42:59 EDT
Verified with glusterfs-3.4.0.20rhs-2.el6rhs.x86_64
Comment 6 Scott Haines 2013-09-23 18:35:55 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
