Bug 1118319 - failed to attach volumes to instances after configuration change & services restart
Summary: failed to attach volumes to instances after configuration change & services restart
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Assignee: Nikola Dipanov
QA Contact: Ami Jeain
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2014-07-10 12:44 UTC by Yogev Rabl
Modified: 2019-09-09 14:11 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-07-22 09:34:45 UTC


Attachments
nova-compute logs (1.37 MB, text/x-log)
2014-07-10 12:44 UTC, Yogev Rabl


Links
System ID Priority Status Summary Last Updated
Launchpad 1340169 None None None Never

Description Yogev Rabl 2014-07-10 12:44:00 UTC
Created attachment 917054 [details]
nova-compute logs

Description of problem:
The attachment of volumes failed with the errors shown in the attached log file. Before the error, I had 8 active instances running; I then made a configuration change (increased the number of workers for the Cinder, Nova & Glance services) and restarted the services.
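
For reference, the options involved are the per-service API worker settings (a sketch; the exact worker counts used are not recorded in this report):

/etc/nova/nova.conf:         [DEFAULT] osapi_compute_workers = <N>
/etc/cinder/cinder.conf:     [DEFAULT] osapi_volume_workers = <N>
/etc/glance/glance-api.conf: [DEFAULT] workers = <N>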

Ran the command:
# nova volume-attach 6aac6fb6-ef22-48b0-b6ac-99bc94787422 57edbc5c-8a1f-49f2-b8bf-280ab857222d auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
| serverId | 6aac6fb6-ef22-48b0-b6ac-99bc94787422 |
| volumeId | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
+----------+--------------------------------------+

cinder list output:
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 57edbc5c-8a1f-49f2-b8bf-280ab857222d | available |   dust-bowl   | 100  |     None    |  false   |             |
| 731a118d-7bd6-4538-a3b2-60543179281e | available | bowl-the-dust | 100  |     None    |  false   |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+


Version-Release number of selected component (if applicable):
python-cinder-2014.1-7.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-cinder-2014.1-7.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch instances
2. Increase the number of workers for the Cinder, Nova & Glance services, then restart them (a shell sketch of the full sequence follows this list)
3. Create a volume
4. Attach the volume to the instance.
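
A minimal shell sketch of the sequence, assuming the openstack-config helper from the openstack-utils package is installed; the flavor, worker count, volume size, and <...> placeholders are illustrative:

# nova boot --flavor m1.small --image <image-id> test-instance
# openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_workers 4
# openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_workers 4
# openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 4
# systemctl restart openstack-nova-api openstack-cinder-api openstack-glance-api
# cinder create --display-name test-vol 100
# nova volume-attach <instance-id> <volume-id> auto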

Actual results:
The attachment process fails.

Expected results:
The volume should be attached to the instance.

Additional info:

Comment 2 Nikola Dipanov 2014-07-14 16:59:11 UTC
I was not able to reproduce this in any way with the latest puddle.

[root@ndipanov-rhel7-test ~(keystone_demo)]# rpm -q openstack-nova-compute
openstack-nova-compute-2014.1.1-1.el7ost.noarch

I fired up an instance and attached one volume to it. After that I increased the number of API workers in /etc/{nova,cinder}/{nova,cinder}.conf using the osapi_compute_workers and osapi_volume_workers options, and also in /etc/glance/glance-api.conf using the workers option (none of these should even remotely affect anything related to attaching volumes, though), and followed it up with 

$ systemctl restart openstack-{nova,cinder,glance}-api   

After that I can attach the second volume to the instance without any problems.
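
(A hypothetical sanity check to confirm the new worker count took effect after the restart; the count should be one parent process plus one child per configured worker:)

$ pgrep -cf nova-api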

What makes me even more doubtful that this is a real bug is that nothing in the attached log suggests any attach failures. There are, however, several detach stack traces in the log.

Could you please try and reproduce this once again? Otherwise I will close the bug.

Comment 4 Russell Bryant 2014-07-15 13:44:35 UTC
Given Nikola's feedback, I think we need to drop the blocker flag for now.

Comment 5 Yogev Rabl 2014-07-17 06:11:21 UTC
Unfortunately, I wasn't able to reproduce it with the current version either.

