Bug 985871 - cinder: when creating a volume with --availability-zone internal the volume is created in status error with no error in the logs
Status: CLOSED UPSTREAM
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: x86_64 Linux
Priority: unspecified   Severity: medium
Target Milestone: ---
Target Release: 6.0 (Juno)
Assigned To: Sergey Gotliv
QA Contact: Dafna Ron
Whiteboard: storage
Depends On:
Blocks:
Reported: 2013-07-18 08:07 EDT by Dafna Ron
Modified: 2016-04-26 16:19 EDT (History)
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-11-26 06:22:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (6.65 MB, application/x-gzip)
2013-07-18 08:07 EDT, Dafna Ron
logs with debug (31.89 KB, application/x-gzip)
2013-08-28 06:20 EDT, Dafna Ron

Description Dafna Ron 2013-07-18 08:07:01 EDT
Created attachment 775295 [details]
logs

Description of problem:

I have a two-compute deployment (two availability zones: internal and nova).
When I try to create a volume with --availability-zone internal, the volume is created with status error, but there is nothing in the logs explaining why.

Version-Release number of selected component (if applicable):

openstack-cinder-2013.1.2-3.el6ost.noarch

How reproducible:

100%

Steps to Reproduce:
1. deploy a two computes setup (using packstack)
2. create a volume with --availability-zone internal 
3.

Actual results:

the volume is created with status error 

Expected results:

1. Not sure if this is the correct behaviour (I may want to create a volume in the internal zone).
2. There is no information in the logs on why the volume failed to be created.

Additional info: logs

** Please note that I tried the same with the nova zone and was able to create a volume, so it is not the --availability-zone option itself that is failing, only when the internal zone is passed as its value. **
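
(As a sanity check, it may help to confirm which availability zone cinder-volume is actually registered in. The commands below are only a sketch and assume an admin credentials file, a standard packstack deployment, and a client version that supports them; adjust to your environment.)

# Hypothetical check: list the zones cinder itself knows about
cinder availability-zone-list
# or list the cinder services and the zone each one is registered in
cinder service-list
# cinder-volume registers in the zone set by storage_availability_zone in
# /etc/cinder/cinder.conf (default: nova), so the internal zone may simply
# have no cinder-volume host at all

If cinder-volume only appears in the nova zone, a request with --availability-zone internal has no host it can be scheduled to.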

grep on the logs shows only one entry for the volume:

[root@opens-vdsb ~(keystone_admin)]# egrep b6f5250b-54de-4877-bb53-bcc189c1762a /var/log/*/*
/var/log/cinder/api.log:2013-07-18 14:39:38    AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'internal', 'terminated_at': None, 'updated_at': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': 'b6f5250b-54de-4877-bb53-bcc189c1762a', 'size': 10, 'user_id': u'4e8268c19c2143a0b3cf978afab45fea', 'attach_time': None, 'display_description': None, 'project_id': u'f372ca53f0484f589413148b6c9ad39c', 'launched_at': None, 'scheduled_at': None, 'status': 'creating', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'host': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'internal1', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 7, 18, 11, 39, 38, 392982), 'attach_status': 'detached', 'volume_type': None, 'metadata': {}}
[root@opens-vdsb ~(keystone_admin)]# 


[root@opens-vdsb ~(keystone_admin)]# cinder create --availability-zone internal --display-name internal1 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |               internal               |
|       bootable      |                false                 |
|      created_at     |      2013-07-18T11:39:38.392982      |
| display_description |                 None                 |
|     display_name    |              internal1               |
|          id         | b6f5250b-54de-4877-bb53-bcc189c1762a |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@opens-vdsb ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 21580a83-0853-45bb-a82f-404a4055b80a | available |     None     |  10  |     None    |  false   |             |
| 674ce4f6-907f-481d-bc17-c8f88db67e94 | available |     nova     |  10  |     None    |  false   |             |
| b6f5250b-54de-4877-bb53-bcc189c1762a |   error   |  internal1   |  10  |     None    |  false   |             |
| dc5861da-1c08-46df-bb23-f8ff282acb16 |   error   |   internal   |  10  |     None    |  false   |             |
| efff96a6-aaf9-4ed6-accb-4b2aa0b8407d |   error   |     None     |  10  |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


[root@opens-vdsb ~(keystone_admin)]# nova availability-zone-list 
+-------------------------------------+----------------------------------------+
| Name                                | Status                                 |
+-------------------------------------+----------------------------------------+
| internal                            | available                              |
| |- opens-vdsb.xx.xx.xx.xxx |                                        |
| | |- nova-cert                      | enabled :-) 2013-07-18T11:41:34.000000 |
| | |- nova-conductor                 | enabled :-) 2013-07-18T11:41:34.000000 |
| | |- nova-consoleauth               | enabled :-) 2013-07-18T11:41:34.000000 |
| | |- nova-network                   | enabled :-) 2013-07-18T11:41:34.000000 |
| | |- nova-scheduler                 | enabled :-) 2013-07-18T11:41:34.000000 |
| | |- nova-console                   | enabled XXX 2013-07-14T12:08:56.000000 |
| nova                                | available                              |
| |- nott-vdsa.xx.xx.xx.xxx          |                                        |
| | |- nova-compute                   | enabled :-) 2013-07-18T11:41:41.000000 |
| |- opens-vdsb.xx.xx.xx.xxx         |                                        |
| | |- nova-compute                   | enabled :-) 2013-07-18T11:41:41.000000 |
+-------------------------------------+----------------------------------------+
Comment 4 Dafna Ron 2013-08-28 06:20:30 EDT
Created attachment 791293 [details]
logs with debug

debug logs attached
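
(For reference, debug output like this is typically collected by turning on the debug flag in cinder.conf and restarting the scheduler; the exact commands below assume an el6 packstack host with openstack-utils installed and are only a sketch.)

# Hypothetical sketch: enable debug logging for cinder on an el6 packstack host
openstack-config --set /etc/cinder/cinder.conf DEFAULT debug True
openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True
service openstack-cinder-scheduler restart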
Comment 5 Flavio Percoco 2013-09-23 10:27:29 EDT
The issue here is that there is no valid host for cinder in the internal availability zone. The behavior is correct in terms of the status the volume ends up in; however, a more explicit warning could be propagated back to the user.

The information about this error can be found in the cinder-scheduler logs: a "No valid host" exception is raised by the AvailabilityZoneFilter.
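
(For anyone hitting this, a quick way to find that entry, assuming the default packstack log location and that debug/verbose logging is enabled; the path and exact message wording may differ per deployment:)

# Hypothetical: search the scheduler log for the filter failure
grep -i "valid host" /var/log/cinder/scheduler.log
grep -i "AvailabilityZoneFilter" /var/log/cinder/scheduler.log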
Comment 7 Sergey Gotliv 2014-11-26 06:22:07 EST
I agree that the AvailabilityZoneFilter has to be more descriptive (maybe add a log message when it rejects a host), however the other suggestion, to propagate a more specific error to the user in this case, is not easy to implement.

Imagine you have 10 hosts and none of them match the criteria. Some of them don't have enough capacity, others are located in a different availability zone, so there can be a different reason for each host. How do you suggest propagating 10 different reasons? What if we have 1000 hosts or even more?
