Bug 1023344 - [RHS-RHOS] Cinder volume files not available in the RHS bricks
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Bug Updates Notification Mailing List
QA Contact: Sudhir D
Depends On:
Blocks:
Reported: 2013-10-25 04:58 EDT by shilpa
Modified: 2013-10-25 07:05 EDT
CC: 1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-10-25 07:05:22 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None

Description shilpa 2013-10-25 04:58:10 EDT
Description of problem:
Cinder volumes are being created locally on the RHOS node instead of on the RHS bricks, even though RHOS is configured to place the Cinder volume files on the RHS volume.

Version-Release number of selected component (if applicable):
RHS-glusterfs-3.4.0.34.1u2rhs-1.el6rhs.x86_64
RHOS-Havana openstack-cinder-2013.2-1.el6.noarch

How reproducible:
Tested twice

Steps to Reproduce:
1. Create two 6x2 Distributed-Replicate volumes, one for Cinder and one for Glance
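   A sketch of the volume creation, based on the brick paths listed in the volume
   info in step 5 (not necessarily the exact commands used); glance-vol is created
   the same way on the /rhs/brick1/g* bricks:

   # gluster volume create cinder-vol replica 2 \
       10.70.37.168:/rhs/brick2/c1 10.70.37.214:/rhs/brick2/c2 \
       10.70.37.181:/rhs/brick2/c3 10.70.37.164:/rhs/brick2/c4 \
       10.70.37.168:/rhs/brick2/c5 10.70.37.214:/rhs/brick2/c6 \
       10.70.37.181:/rhs/brick2/c7 10.70.37.164:/rhs/brick2/c8 \
       10.70.37.168:/rhs/brick2/c9 10.70.37.214:/rhs/brick2/c10 \
       10.70.37.181:/rhs/brick2/c11 10.70.37.164:/rhs/brick2/c12
   # gluster volume start cinder-vol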

2. Tag the volumes with group virt
   (i.e.) gluster volume set cinder-vol group virt
         gluster volume set glance-vol group virt

3. Set the storage.owner-uid and storage.owner-gid of glance-vol to 161
         gluster volume set glance-vol storage.owner-uid 161
         gluster volume set glance-vol storage.owner-gid 161

4. Set the storage.owner-uid and storage.owner-gid of cinder-vol to 165
         gluster volume set cinder-vol storage.owner-uid 165
         gluster volume set cinder-vol storage.owner-gid 165

5. Volume info:

# gluster volume info

Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 8b20ce62-3606-4c52-b36e-567f97ebff7f
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.168:/rhs/brick2/c1
Brick2: 10.70.37.214:/rhs/brick2/c2
Brick3: 10.70.37.181:/rhs/brick2/c3
Brick4: 10.70.37.164:/rhs/brick2/c4
Brick5: 10.70.37.168:/rhs/brick2/c5
Brick6: 10.70.37.214:/rhs/brick2/c6
Brick7: 10.70.37.181:/rhs/brick2/c7
Brick8: 10.70.37.164:/rhs/brick2/c8
Brick9: 10.70.37.168:/rhs/brick2/c9
Brick10: 10.70.37.214:/rhs/brick2/c10
Brick11: 10.70.37.181:/rhs/brick2/c11
Brick12: 10.70.37.164:/rhs/brick2/c12
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 165
storage.owner-gid: 165
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
 
Volume Name: glance-vol
Type: Distributed-Replicate
Volume ID: b6fcdad4-b6d1-4dca-a3ab-0a6cfda677ee
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.168:/rhs/brick1/g1
Brick2: 10.70.37.214:/rhs/brick1/g2
Brick3: 10.70.37.181:/rhs/brick1/g3
Brick4: 10.70.37.164:/rhs/brick1/g4
Brick5: 10.70.37.168:/rhs/brick1/g5
Brick6: 10.70.37.214:/rhs/brick1/g6
Brick7: 10.70.37.181:/rhs/brick1/g7
Brick8: 10.70.37.164:/rhs/brick1/g8
Brick9: 10.70.37.168:/rhs/brick1/g9
Brick10: 10.70.37.214:/rhs/brick1/g10
Brick11: 10.70.37.181:/rhs/brick1/g11
Brick12: 10.70.37.164:/rhs/brick1/g12
Options Reconfigured:
server.allow-insecure: on
storage.owner-gid: 161
storage.owner-uid: 161

6. Configure cinder to use glusterfs volume

  a. # openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
      # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
      # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
  
  b. # cat /etc/cinder/shares.conf
     10.70.37.168:cinder-vol

  c. for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done 
After restarting the services, the cinder-vol share should be fuse-mounted automatically under glusterfs_mount_point_base (/var/lib/cinder/volumes).
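A quick way to verify the mount (illustrative; the driver mounts the share in a
hashed subdirectory under /var/lib/cinder/volumes):

# mount | grep cinder-vol
(should show a fuse.glusterfs mount of 10.70.37.168:cinder-vol below /var/lib/cinder/volumes)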

7. Mount the RHS glance volume on /var/lib/glance/images and upload an OS image to glance.
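   A sketch of this step, assuming the glance volume is mounted from the same
   server; the image name and file (rhel64.qcow2) are placeholders:

   # mount -t glusterfs 10.70.37.168:glance-vol /var/lib/glance/images
   # glance image-create --name rhel64 --disk-format qcow2 --container-format bare \
       --is-public True --file rhel64.qcow2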

8. Create cinder volumes of different sizes
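   For example (Havana cinder client syntax; display names and sizes match the
   listing below):

   # cinder create --display-name vol1 10
   # cinder create --display-name vol2 5
   # cinder create --display-name vol3 2
   (vol4, vol5 and vol6 created the same way with sizes 5, 2 and 2)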

# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 00c99f85-a483-4148-ae5f-e2ec72534ecd | available |     vol3     |  2   |     None    |  false   |                                      |
| 2e53d718-e3d9-4abd-907a-244e8d9396af | available |     vol4     |  5   |     None    |  false   |                                      |
| 6459f51e-2c30-4bb7-bda1-8c32930fc68d | available |     vol1     |  10  |     None    |  false   |                                      |
| bc2983ce-47b2-4a2c-89e4-4d48519278ef |   in-use  |     vol2     |  5   |     None    |  false   | d145425c-d6ef-478d-a3a0-5ed8da4dda1d |
| d7dbe5fc-dc76-4320-b850-f2aa1ab7becf | available |     vol6     |  2   |     None    |  false   |                                      |
| fb1f112f-1a3c-4b9d-ae08-2324c677e476 |   in-use  |     vol5     |  2   |     None    |  false   | 4bc5125a-6d92-4815-8a3c-37e8ec0bc1b1 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+



Actual results:

The volume files are created locally in /etc/cinder/volumes/ even though the configured path is /var/lib/cinder/volumes, and no volume files appear on the RHS bricks (/rhs/brick2):

# ls -l /etc/cinder/volumes/
total 28
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-00c99f85-a483-4148-ae5f-e2ec72534ecd
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-2e53d718-e3d9-4abd-907a-244e8d9396af
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-a94d60c5-885b-4326-a891-8376a04ab7d2
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-bc2983ce-47b2-4a2c-89e4-4d48519278ef
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-d7dbe5fc-dc76-4320-b850-f2aa1ab7becf
-rw-r--r--. 1 cinder cinder 232 Oct 25 12:50 volume-fb1f112f-1a3c-4b9d-ae08-2324c677e476

Expected results:

The created volumes should reside as files on the RHS bricks under /rhs/brick2, written through the glusterfs fuse mount at /var/lib/cinder/volumes.



Additional info:

The following messages appear in /var/log/cinder/volume.log:

2013-10-25 12:37:26.863 23333 WARNING cinder.volume.manager [req-4a603709-c3c5-463f-b3dd-34ac255b8fc9 None None] Unable to update stats, driver is uninitialized
2013-10-25 12:44:20.602 25333 WARNING cinder.volume.manager [req-d0e6d8ff-5491-456d-b35e-8f93518321e8 None None] Unable to update stats, driver is uninitialized
2013-10-25 12:47:03.540 26814 WARNING cinder.volume.manager [req-5ee28ba7-110a-4a2d-8414-9415be7fa81d None None] Unable to update stats, driver is uninitialized
2013-10-25 12:48:50.377 27566 WARNING cinder.volume.manager [req-183068be-c046-40c8-81b2-bf5f4ca63c52 None None] Unable to update stats, driver is uninitialized
2013-10-25 12:50:17.643 28236 WARNING cinder.volume.manager [req-e028c5b0-e6cf-4422-9d93-4fc101d815f5 None None] Unable to update stats, driver is uninitialized


# cat /etc/cinder/shares.conf 
10.70.37.168:cinder-vol
Comment 2 shilpa 2013-10-25 07:05:22 EDT
Found out it was due to the enabled_backends option in cinder.conf, which I had enabled without defining the settings for it. Configuration issue. Resolved.
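For reference, when enabled_backends is set, each listed backend needs its own
section in cinder.conf; a minimal sketch (the backend name "glusterfs" below is
only an example):

[DEFAULT]
enabled_backends = glusterfs

[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_backend_name = glusterfs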
