Bug 1237150

Summary: Need support for NFS for glance
Product: Red Hat OpenStack      Reporter: Mike Burns <mburns>
Component: rhosp-director       Assignee: Jiri Stransky <jstransk>
Status: CLOSED ERRATA           QA Contact: nlevinki <nlevinki>
Severity: urgent                Docs Contact:
Priority: high
Version: Director               CC: cwolfe, dmacpher, dnavale, kbasil, mburns, rhel-osp-director-maint, sasha, sclewis
Target Milestone: async         Keywords: Triaged, ZStream
Target Release: Director
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-tripleo-heat-templates-0.8.6-31.el7ost    Doc Type: Enhancement
Doc Text:
The glance backend was hard-coded to use Swift, which meant you could not use other file system types, such as NFS, for glance. This release adds an enhancement that provides an NFS backend for glance. You can now configure the Overcloud's glance service with an NFS backend using the following parameters: * GlanceBackend * GlanceFilePcmkManage * GlanceFilePcmkDevice * GlanceFilePcmkOptions
Last Closed: 2015-08-05 13:57:51 UTC Type: Bug
Bug Depends On: 1247585    
Bug Blocks:    

Description Mike Burns 2015-06-30 13:32:16 UTC
Description of problem:
Currently director only deploys glance with local file, Ceph, or Swift backends.

We need NFS support for glance as well.

Additional info:

* We essentially lock out the larger storage vendors who support NFS but are not already integrated.
* We hurt POC cases where setting up a basic NFS server is easy, but a Ceph deployment is more complicated, resource-intensive, and time-consuming.
* We essentially require Swift or Ceph for the HA use case.

Comment 3 Jiri Stransky 2015-07-08 08:14:14 UTC
On review upstream: https://review.openstack.org/#/c/199152/

How I tested:

Create an NFS export reachable from the overcloud. I'm using a virtual
setup, so I'm exporting from the bare metal host.

yum -y install nfs-utils
systemctl start rpcbind
systemctl start nfs-server
setsebool -P nfs_export_all_rw 1

mkdir -p /export/glance
chown -R nfsnobody:nfsnobody /export
echo "/export/glance *(rw,sync,no_root_squash)" > /etc/exports
exportfs -rav

showmount -e


Set these parameters on the overcloud heat stack (I used an extra
environment file, but the workflow with Tuskar might be different):

parameters:
  GlanceBackend: file
  GlanceFilePcmkDevice: 192.168.122.1:/export/glance
  GlanceFilePcmkManage: true
  GlanceFilePcmkOptions: retry=1
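
Put together as a standalone heat environment file (the filename below is just an example, not from this bug), the override might look like:

```yaml
# nfs-glance.yaml -- example environment file (name assumed)
parameters:
  GlanceBackend: file
  GlanceFilePcmkDevice: 192.168.122.1:/export/glance
  GlanceFilePcmkManage: true
  GlanceFilePcmkOptions: retry=1
```

Assuming the CLI workflow rather than Tuskar, it would then be passed at deploy time with `-e nfs-glance.yaml`.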


You can verify that everything works as expected by uploading a glance
image and launching an instance in the overcloud, then confirming on
the bare metal host that the image file appeared in the exported
directory.

You can see the mount on the controllers:

mount | grep glance


And any image files present on the NFS host:

ls /export/glance
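
The mount check can also be scripted; a minimal sketch, where the sample line below stands in for real `mount` output and the mountpoint /var/lib/glance/images is an assumption based on glance's default image store path:

```shell
# Hypothetical check: does the controller have the expected glance NFS mount?
# The sample line is illustrative, not captured from a real deployment.
sample_mount_line="192.168.122.1:/export/glance on /var/lib/glance/images type nfs4 (rw,relatime,retry=1)"
if echo "$sample_mount_line" | grep -q ':/export/glance on /var/lib/glance/images'; then
  echo "glance NFS mount present"
fi
```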

Comment 4 Jiri Stransky 2015-07-09 11:42:31 UTC
Typo in the testing steps above -- use all_squash instead of no_root_squash when exporting the directory on the NFS host:

echo "/export/glance *(rw,sync,all_squash)" > /etc/exports

Comment 6 Alexander Chuzhoy 2015-07-31 16:33:36 UTC
Verified: FailedQA

Environment:
instack-undercloud-2.1.2-22.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-45.el7ost.noarch


Wasn't able to create an image with the glance command.

To work around this, I had to:
1. Apply this patch prior to the overcloud deployment: https://gist.github.com/jistr/08d3d6ae82f1e99773d1
2. Re-mount the NFS share with these args: "mount -t nfs -o context=system_u:object_r:glance_var_lib_t:s0"

* This can be achieved by placing the following lines in the parameters section of the yaml file:
  GlanceBackend: file
  GlanceFilePcmkDevice: [IP:/share]
  GlanceFilePcmkManage: true
  GlanceFilePcmkOptions: context=system_u:object_r:glance_var_lib_t:s0
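
Since GlanceFilePcmkOptions holds ordinary mount options, the retry option from the original test steps and the SELinux context should be combinable as a comma-separated list (my assumption; this exact combination is not shown in the bug). The device value below is a placeholder:

```yaml
parameters:
  GlanceBackend: file
  GlanceFilePcmkDevice: 192.168.122.1:/export/glance  # placeholder; use your NFS export
  GlanceFilePcmkManage: true
  GlanceFilePcmkOptions: retry=1,context=system_u:object_r:glance_var_lib_t:s0
```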

Comment 7 Mike Burns 2015-07-31 16:37:18 UTC
This is resolved with the fix for bug 1247585

Comment 10 errata-xmlrpc 2015-08-05 13:57:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2015:1549