Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1141563

Summary: Error encountered during initialization of driver: RBDDriver
Product: Red Hat OpenStack
Reporter: bkopilov <bkopilov>
Component: openstack-cinder
Assignee: Eric Harney <eharney>
Status: CLOSED NOTABUG
QA Contact: nlevinki <nlevinki>
Severity: unspecified
Docs Contact:
Priority: high
Version: 5.0 (RHEL 7)
CC: eharney, yeylon
Target Milestone: ---
Keywords: ZStream
Target Release: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-09-15 09:25:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description Flags
logs - var/cinder/ none

Description bkopilov 2014-09-14 18:35:37 UTC
Description of problem:
Automation run, RHOS 5 with RHEL 7.0.
We are installing OpenStack with a Ceph backend (Glance and Cinder) with dedicated pools.
It looks like on the client side (OpenStack) we are not able to initialize the connection.

http://jenkins.rhev.lab.eng.brq.redhat.com/view/RHOS-STORAGE-QE/view/RHOS-5.0-RHEL7.0/job/rhos-5-rhel-7.0-cinder-ceph/

Please go over the logs and let me know if you see a root cause for this issue.
Please verify from the client side that nothing has changed.

2014-09-14 11:57:52.679 27250 DEBUG cinder.openstack.common.service [-] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1955
2014-09-14 12:02:52.564 27425 ERROR cinder.volume.drivers.rbd [req-0650d51d-b25f-400d-a310-940bbdf2f842 - - - - -] error connecting to ceph cluster
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd Traceback (most recent call last):
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 263, in check_for_setup_error
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd     with RADOSClient(self):
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 235, in __init__
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd     self.cluster, self.ioctx = driver._connect_to_rados(pool)
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 283, in _connect_to_rados
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd     client.connect()
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd   File "/usr/lib/python2.7/site-packages/rados.py", line 419, in connect
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd     raise make_ex(ret, "error calling connect")
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd TimedOut: error calling connect
2014-09-14 12:02:52.564 27425 TRACE cinder.volume.drivers.rbd 
2014-09-14 12:02:52.566 27425 ERROR cinder.volume.manager [req-0650d51d-b25f-400d-a310-940bbdf2f842 - - - - -] Error encountered during initialization of driver: RBDDriver
2014-09-14 12:02:52.566 27425 ERROR cinder.volume.manager [req-0650d51d-b25f-400d-a310-940bbdf2f842 - - - - -] Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager Traceback (most recent call last):
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 243, in init_host
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager     self.driver.check_for_setup_error()
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 268, in check_for_setup_error
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager     raise exception.VolumeBackendAPIException(data=msg)
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: error connecting to ceph cluster
2014-09-14 12:02:52.566 27425 TRACE cinder.volume.manager 
2014-09-14 12:02:52.568 27425 DEBUG cinder.openstack.common.lockutils [-] Got semaphore "dbapi_backend" for method "__get_backend"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:191
2014-09-14 12:02:52.806 27425 DEBUG cinder.service [-] Creating RPC server for service cinder-volume start /usr/lib/python2.7/site-packages/cinder/service.py:113
2014-09-14 12:02:52.807 27425 DEBUG stevedore.extension [-] found extension EntryPoint.parse('blocking = oslo.messaging._executors.impl_blocking:BlockingExecutor') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2014-09-14 12:02:52.808 27425 DEBUG stevedore.extension [-] found extension EntryPoint.parse('eventlet = oslo.messaging._executors.impl_eventlet:EventletExecutor') _load_plugins /usr/lib/python2.7/site-packages/stevedore/extension.py:156
2014-09-14 12:02:52.822 27425 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.2.31:5672


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Configure Ceph on the OpenStack side
2. Try to create a volume
3.

Actual results:


Expected results:


Additional info:

Comment 1 bkopilov 2014-09-14 18:38:19 UTC
Created attachment 937373 [details]
logs  - var/cinder/

Comment 3 bkopilov 2014-09-14 20:28:05 UTC
When trying to check Ceph health from the OpenStack node with the ceph command:

[root@test233 ~]# ceph -w
2014-09-14 16:26:48.393110 7f5f2014b700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f1c0242e0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f1c024550).fault
2014-09-14 16:26:51.392301 7f5f1affd700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f10000e70).fault
2014-09-14 16:26:54.392630 7f5f2014b700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10003010 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f10003280).fault
2014-09-14 16:26:57.392988 7f5f1affd700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10003890 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f10003b00).fault
2014-09-14 16:27:00.393256 7f5f2014b700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10005510 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f10005780).fault
2014-09-14 16:27:03.393587 7f5f1affd700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10007050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f100072c0).fault
2014-09-14 16:27:06.393884 7f5f2014b700  0 -- :/1025465 >> 10.35.161.127:6789/0 pipe(0x7f5f10009090 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f5f10009300).fault
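The repeated `.fault` lines above mean the client cannot establish a session with the monitor at 10.35.161.127:6789. Before digging into Cinder, a plain TCP reachability check from the OpenStack node narrows down whether this is a network/DNS problem or a Ceph-side problem. A minimal sketch (the host and port are taken from the log above; the function name is my own):

```python
import socket

def mon_reachable(host, port=6789, timeout=3.0):
    """Return True if a TCP connection to the Ceph monitor endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures.
        return False

# Example: mon_reachable("10.35.161.127")  # False in the state captured above
```

If this returns False, the failure is below the Ceph protocol layer (routing, firewall, DNS, or a dead monitor host), which matches the eventual resolution of this bug.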

Comment 5 bkopilov 2014-09-14 20:48:34 UTC
[root@vm-161-127 ceph]# ceph osd tree
# id	weight	type name	up/down	reweight
-1	0.6	root default
-2	0.2		host vm-161-127
0	0.2			osd.0	up	1	
-3	0.2		host vm-161-138
1	0.2			osd.1	down	0	
-4	0.2		host dhcp163-57
2	0.2			osd.2	down	0
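The tree above shows osd.1 and osd.2 down, which is enough on its own to explain the connect timeout. For automation runs like this one, a small parser over the plain-text `ceph osd tree` output (the pre-Luminous column layout shown above; the helper name is my own) can flag down OSDs before the Cinder tests even start:

```python
def down_osds(osd_tree_text):
    """Return the names of OSDs reported 'down' in plain-text `ceph osd tree` output."""
    down = []
    for line in osd_tree_text.splitlines():
        fields = line.split()
        # OSD rows in this format look like: <id> <weight> osd.N up|down <reweight>
        if len(fields) >= 4 and fields[2].startswith("osd.") and fields[3] == "down":
            down.append(fields[2])
    return down
```

Running this against the output above would report ["osd.1", "osd.2"]; newer Ceph releases can emit JSON (`ceph osd tree -f json`), which is more robust to parse than this column-based sketch.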

Comment 6 bkopilov 2014-09-15 09:26:57 UTC
Solved after fixing the DNS name resolution.
The OSDs were down.

Up and running now.

http://jenkins.rhev.lab.eng.brq.redhat.com/view/RHOS-STORAGE-QE/view/RHOS-5.0-RHEL7.0/job/rhos-5-rhel-7.0-cinder-ceph/56/
Benny