Bug 1340589
Summary: | rhel-osp-director: after rebooting the overcloud with a cinder node, unable to create a cinder volume; the target service is down on the cinder node | ||
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Alexander Chuzhoy <sasha> |
Component: | rhosp-director | Assignee: | Angus Thomas <athomas> |
Status: | CLOSED WONTFIX | QA Contact: | Arik Chernetsky <achernet> |
Severity: | low | Docs Contact: | |
Priority: | medium | ||
Version: | 8.0 (Liberty) | CC: | dbecker, jschluet, jslagle, mburns, morazi, ohochman, rhel-osp-director-maint |
Target Milestone: | async | ||
Target Release: | 8.0 (Liberty) | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2017-02-28 22:07:19 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Alexander Chuzhoy
2016-05-28 05:19:07 UTC
The issue also reproduced when deployed without a cinder node.

Deployment command:

```
openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 1 \
  --ceph-storage-scale 0 --swift-storage-scale 0 --block-storage-scale 0 \
  --neutron-tunnel-types vxlan,gre --neutron-network-type vxlan,gre \
  --neutron-network-vlan-ranges datacentre:118:143 \
  --neutron-bridge-mappings datacentre:br-ex \
  --ntp-server clock.redhat.com --timeout 90 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e network-environment.yaml \
  -e ~/ssl-heat-templates/environments/enable-tls.yaml \
  -e ~/ssl-heat-templates/environments/inject-trust-anchor.yaml
```

The target service failed on the controllers this time:

```
May 28 14:56:06 overcloud-controller-0 cinder-volume: 2016-05-28 14:56:06.076 19727 ERROR cinder.service [-] Manager for service cinder-volume hostgroup@tripleo_iscsi is reporting problems, not sending heartbeat. Service will appear "down
```

Do we need to include the storage-environment.yaml without ceph?
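If the storage environment file does need to be included, the deploy command would gain one more `-e` argument. A minimal sketch, assuming the standard tripleo-heat-templates path for this release (the path should be verified against the installed templates; the elided arguments stand for the full command above):

```
openstack overcloud deploy --templates \
  ... \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
```

This environment file selects the storage backends (e.g. enabling the rbd backend and disabling the default iSCSI/LVM one for ceph deployments); its exact parameter defaults vary by release and should be reviewed before deploying.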
Reproduced on a clean deployment of ospd-9 on bare metal (it was not an update/upgrade).

Environment:

```
openstack-cinder-8.0.0-4.el7ost.noarch
python-cinderclient-1.6.0-1.el7ost.noarch
python-cinder-8.0.0-4.el7ost.noarch
openstack-heat-engine-6.0.0-4.el7ost.noarch
openstack-heat-api-6.0.0-4.el7ost.noarch
openstack-tripleo-heat-templates-liberty-2.0.0-9.el7ost.noarch
openstack-tripleo-heat-templates-kilo-2.0.0-9.el7ost.noarch
heat-cfntools-1.3.0-2.el7ost.noarch
openstack-heat-common-6.0.0-4.el7ost.noarch
openstack-heat-templates-0-0.8.20150605git.el7ost.noarch
openstack-heat-api-cfn-6.0.0-4.el7ost.noarch
openstack-tripleo-heat-templates-2.0.0-9.el7ost.noarch
python-heatclient-1.2.0-1.el7ost.noarch
```

Scenario:

1. Deploy a setup with ceph nodes using ospd-9.
2. Reboot the undercloud and the overcloud.
3. Attempt to create a cinder volume and attach it to an instance.

Results: `cinder list` shows the volume in ERROR state.

/var/log/cinder/volume.log:

```
2016-06-16 02:34:39.511 15069 INFO cinder.volume.manager [req-63f7580d-434a-4843-a6c5-6069a68f638d - - - - -] Determined volume DB was empty at startup.
2016-06-16 02:34:39.835 15069 INFO cinder.volume.manager [req-63f7580d-434a-4843-a6c5-6069a68f638d - - - - -] Image-volume cache disabled for host hostgroup@tripleo_iscsi.
2016-06-16 02:34:39.838 15069 INFO oslo_service.service [req-63f7580d-434a-4843-a6c5-6069a68f638d - - - - -] Starting 1 workers
2016-06-16 02:34:39.844 15249 INFO cinder.service [-] Starting cinder-volume node (version 8.0.0)
2016-06-16 02:34:39.846 15249 INFO cinder.volume.manager [req-e00abecd-4556-456a-8d08-eddff08a3398 - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager [req-e00abecd-4556-456a-8d08-eddff08a3398 - - - - -] Failed to initialize driver.
```
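The failure mode above can be confirmed directly on the affected node after the reboot. A diagnostic sketch (commands require root; the service unit name is the usual one for this release and should be verified locally):

```
systemctl status openstack-cinder-volume   # is the volume service up?
losetup -a                                 # is any loopback device attached?
vgs --noheadings -o name                   # does the cinder-volumes VG exist?
```

If `losetup -a` shows nothing and `vgs` does not list cinder-volumes, the loopback-backed volume group was not restored by the reboot, matching the traceback below.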
```
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager Traceback (most recent call last):
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 426, in init_host
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     self.driver.check_for_setup_error()
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 283, in check_for_setup_error
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     lvm_conf=lvm_conf_file)
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 95, in __init__
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     if self._vg_exists() is False:
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 128, in _vg_exists
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     run_as_root=True)
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 148, in execute
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     return processutils.execute(*cmd, **kwargs)
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 371, in execute
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager     cmd=sanitized_cmd)
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
```
```
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager Exit code: 5
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager Stdout: u''
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager Stderr: u'File descriptor 10 (/dev/urandom) leaked on vgs invocation. Parent PID 15253: /usr/bin/python2\n  Volume group "cinder-volumes" not found\n  Cannot process volume group cinder-volumes\n'
2016-06-16 02:34:40.060 15249 ERROR cinder.volume.manager
2016-06-16 02:34:40.177 15249 INFO cinder.volume.manager [req-e00abecd-4556-456a-8d08-eddff08a3398 - - - - -] Initializing RPC dependent components of volume
```

Further investigation showed that the deployment command on my setup (with Ceph) was missing the storage-environment.yaml argument. Without it, the deployment creates an LVM volume group for cinder-volume backed by a loopback device, which is known not to be re-attached after a reboot (AFAIK this configuration is intended more for POCs). Removing the blocker flags and lowering the bz priority.

The cinder-volumes LVM group is not persisted across reboot. That is not planned to be fixed, as no one should be using cinder with the LVM driver backed by a loopback device anyway, nor is it supported.
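Given that analysis, a per-reboot workaround would be to re-attach the loopback device and reactivate the volume group by hand. A minimal sketch, assuming the backing-file path and service unit name commonly used by these deployments (both are assumptions; check /etc/cinder/cinder.conf and the deploy templates for the real values):

```shell
# Assumed defaults; override via environment if the deployment differs.
BACKING_FILE=${BACKING_FILE:-/var/lib/cinder/cinder-volumes-backing-file}
VG_NAME=${VG_NAME:-cinder-volumes}

restore_cinder_vg() {
  # Nothing to do if the VG is already visible (the same check the
  # LVM driver performs at startup via vgs).
  if vgs --noheadings -o name "$VG_NAME" >/dev/null 2>&1; then
    echo "VG $VG_NAME already present"
    return 0
  fi
  # Attach the backing file to the first free loop device, rescan for
  # VGs, activate the group, then restart the cinder volume service.
  losetup --find --show "$BACKING_FILE" &&
    vgscan &&
    vgchange -ay "$VG_NAME" &&
    systemctl restart openstack-cinder-volume
}
```

Running `restore_cinder_vg` as root after each reboot should bring the VG back; this is a stopgap only, since the loopback-backed LVM configuration is unsupported as noted above.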