Bug 1067718 - Foreman deployed environment times out creating cinder snapshot using gluster fuse
Summary: Foreman deployed environment times out creating cinder snapshot using gluster fuse
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: z5
Target Release: 4.0
Assignee: Jiri Stransky
QA Contact: Ami Jeain
URL:
Whiteboard:
Depends On:
Blocks: 1040649
 
Reported: 2014-02-20 22:51 UTC by Steve Reichard
Modified: 2016-04-26 17:42 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-08 17:00:53 UTC
Target Upstream Version:
Embargoed:



Description Steve Reichard 2014-02-20 22:51:21 UTC
Description of problem:

I deployed a nova-network configuration using Foreman.
The Cinder backend is RHS (gluster).

David Kranz ran tempest tests against it.

One failure David recreated by hand was the following:
The tempest code for this test is here: https://github.com/openstack/tempest/blob/stable/havana/tempest/scenario/test_volume_boot_pattern.py
The test also does a few checks not shown below.
To make it easier, I have reproduced this issue from the CLI. Here is what the test is doing:

cinder create --display-name volfromimage --image-id c116131d-8e2a-4a81-9458-bf38d66febb9 1
(returns volume id of 70b4bb79-be4e-473e-8db9-87096cefe42f)


nova boot --flavor 42 --image  c116131d-8e2a-4a81-9458-bf38d66febb9 --block-device-mapping vda=70b4bb79-be4e-473e-8db9-87096cefe42f:::0 volume-boot

nova delete volume-boot

nova boot --flavor 42 --image  c116131d-8e2a-4a81-9458-bf38d66febb9 --block-device-mapping vda=70b4bb79-be4e-473e-8db9-87096cefe42f:::0 volume-boot-2

cinder snapshot-create --force True --display-name snapshot 70b4bb79-be4e-473e-8db9-87096cefe42f

At this point the test waits for the snapshot to become available, but it never does; it stays in-use.
I don't fully understand what this test is doing since I am not very familiar with this API, so I don't know whether this is a bug or a config problem.

 -David 
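
To watch the snapshot by hand while the test is stuck, the standard cinder CLI commands can be used (a quick check only; <snapshot-id> below stands for whatever ID snapshot-create returned):

cinder snapshot-list
cinder snapshot-show <snapshot-id>    # status never reaches "available"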



Eric Harney looked at the config:


Oh, I think I know what this is.

in cinder/volume.log:

2014-02-13 18:23:56.042 20093 ERROR cinder.volume.drivers.glusterfs [req-db9fa55a-6178-4e38-b63a-fd3df4abbc4e 7ac6d58f6cfd4d5e811e716181a6246f 005d728e807543a59240975ba75b31d6] Call to Nova to create snapshot failed
2014-02-13 18:23:56.042 20093 ERROR cinder.volume.drivers.glusterfs [req-db9fa55a-6178-4e38-b63a-fd3df4abbc4e 7ac6d58f6cfd4d5e811e716181a6246f 005d728e807543a59240975ba75b31d6] Policy doesn't allow compute_extension:os-assisted-volume-snapshots:create to be performed. (HTTP 403) (Request-ID: req-f919a907-6c4a-4189-9f09-be5d56f999cb)

This is because Nova's policy.json requires the assisted-snapshots
extension to be accessed over the Nova admin API:

"compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",


You have to either change policy.json to allow it via the regular
compute API, or deploy a Nova admin API, which requires a nova.conf
setting and a keystone endpoint.  (I'll have to look up how to do this
again, I usually change policy.json on my dev configurations.)






I assume rhos-pm would need to determine whether changing the policy or configuring the admin API is the better approach.
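
For reference, the policy.json route Eric describes would look roughly like this in /etc/nova/policy.json on the compute API nodes (a sketch only; it assumes the default rule name quoted above and that the matching delete rule should be relaxed the same way):

"compute_extension:os-assisted-volume-snapshots:create": "",
"compute_extension:os-assisted-volume-snapshots:delete": "",

An empty rule lets any authenticated user call the extension, which is broader than strictly necessary; a rule scoped to the cinder service user would be tighter.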



Version-Release number of selected component (if applicable):

[root@rhos-foreman ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@rhos-foreman ~]# uname -a
Linux rhos-foreman.cloud.lab.eng.bos.redhat.com 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Dec 13 06:58:20 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@rhos-foreman ~]# yum list installed | grep -i -e foreman -e puppet
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
foreman.noarch                    1.3.0.2-1.el6sat   @rhel-x86_64-server-6-ost-4
foreman-installer.noarch          1:1.3.0-1.el6sat   @rhel-x86_64-server-6-ost-4
foreman-mysql.noarch              1.3.0.2-1.el6sat   @rhel-x86_64-server-6-ost-4
foreman-mysql2.noarch             1.3.0.2-1.el6sat   @rhel-x86_64-server-6-ost-4
foreman-proxy.noarch              1.3.0-3.el6sat     @rhel-x86_64-server-6-ost-4
foreman-selinux.noarch            1.3.0-1.el6sat     @rhel-x86_64-server-6-ost-4
openstack-foreman-installer.noarch
packstack-modules-puppet.noarch   2013.2.1-0.22.dev956.el6ost
puppet.noarch                     3.2.4-3.el6_5      @rhel-x86_64-server-6-ost-4
puppet-server.noarch              3.2.4-3.el6_5      @rhel-x86_64-server-6-ost-4
ruby193-rubygem-foreman_openstack_simplify.noarch
rubygem-foreman_api.noarch        0.1.6-1.el6sat     @rhel-x86_64-server-6-ost-4
[root@rhos-foreman ~]# 




How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Hugh Brock 2014-03-05 18:01:38 UTC
We do not have enough information to fix this bug yet. We need to determine whether the test is broken and should be using the admin API, or whether Nova's policy.json is broken. Pushing to A4.

