Bug 1433404
| Field | Value |
|---|---|
| Summary | Creating a Cinder Volume using NFS Backend fails |
| Product | Red Hat OpenStack |
| Component | openstack-tripleo-heat-templates |
| Status | CLOSED ERRATA |
| Severity | unspecified |
| Priority | unspecified |
| Version | 10.0 (Newton) |
| Target Milestone | z4 |
| Target Release | 10.0 (Newton) |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | David Peacock <dpeacock> |
| Assignee | Alan Bishop <abishop> |
| QA Contact | Tzach Shefi <tshefi> |
| CC | abishop, akarlsso, aschultz, dpeacock, jjoyce, jzaher, mburns, molasaga, pbandark, pgrist, rhel-osp-director-maint, samccann, srevivo |
| Keywords | Triaged, ZStream |
| Fixed In Version | openstack-tripleo-heat-templates-5.2.0-24.el7ost, puppet-tripleo-5.6.0-4.el7ost, puppet-cinder-9.5.0-2.el7ost |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2017-09-06 17:09:30 UTC |
| Bug Blocks | 1381612 |

Doc Text:

- Cause: The NFS backend driver for Cinder implements enhanced NAS security features that are enabled by default. However, the features require non-standard configuration changes to Nova's libvirt, and without those changes some Cinder volume operations fail.
- Consequence: Some Cinder volume operations fail when using the NFS backend.
- Fix: TripleO settings were added to control the NFS driver's NAS secure features, and the features are now disabled by default.
- Result: Cinder volume operations no longer fail when using the NFS backend.
Description
David Peacock
2017-03-17 15:07:13 UTC
Ping - Any traction on this? Customer is Verizon, and I believe this problem is starting to become a blocker for them. Please let me know if there's anything I can do to answer questions or provide more data. Thank you, David Peacock

Hi engineering, is there anything you need here? Thanks, David

Please set the following settings in the NFS backend section of cinder.conf:

nas_secure_file_permissions=False
nas_secure_file_operations=False

and restart the cinder volume service. This should get the NFS driver working.

Thanks Eric, I'll give that a go. :-) I'm afraid this wasn't the special sauce. Please let me know what you need from me; are we looking at a legitimate bug here?
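The section placement in Eric's advice matters because cinder (via oslo.config) treats `[DEFAULT]` as its own group: a backend driver only reads its options from the section named in `enabled_backends`. The small sketch below illustrates the difference; the `backend_options` helper and the two sample configs are hypothetical illustrations, not the customer's actual file. Note that Python's `configparser` normally inherits `[DEFAULT]` into every section, which is the opposite of oslo.config's behavior, so the sketch disables that inheritance.

```python
import configparser

def backend_options(conf_text, backend):
    """Return only the options a cinder backend driver would see.

    oslo.config does NOT inherit [DEFAULT] into backend sections, so we
    point configparser's default_section at a name that never occurs,
    turning [DEFAULT] into an ordinary, non-inherited section.
    """
    cfg = configparser.ConfigParser(default_section='__no_defaults__')
    cfg.read_string(conf_text)
    return dict(cfg.items(backend)) if cfg.has_section(backend) else {}

# Shaped like the broken setup: nas_secure overrides placed in [DEFAULT].
BROKEN = """\
[DEFAULT]
enabled_backends = tripleo_nfs
nas_secure_file_operations = False
nas_secure_file_permissions = False

[tripleo_nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
"""

# Shaped like the working setup: overrides in the backend section.
FIXED = """\
[DEFAULT]
enabled_backends = tripleo_nfs

[tripleo_nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_secure_file_operations = False
nas_secure_file_permissions = False
"""

print('nas_secure_file_operations' in backend_options(BROKEN, 'tripleo_nfs'))  # False
print('nas_secure_file_operations' in backend_options(FIXED, 'tripleo_nfs'))   # True
```

With the overrides in `[DEFAULT]`, the NFS driver never sees them and falls back to its own defaults, which matches the failure described in this report.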
[stack@btrhlagrnce-h-ucld-01 ~]$ . overcloudrc
[stack@btrhlagrnce-h-ucld-01 ~]$ openstack volume type create NFS
+---------------------------------+--------------------------------------+
| Field | Value |
+---------------------------------+--------------------------------------+
| description | None |
| id | 0e2568a2-453f-45a3-ad00-efcf85b08bbc |
| is_public | True |
| name | NFS |
| os-volume-type-access:is_public | True |
+---------------------------------+--------------------------------------+
[stack@btrhlagrnce-h-ucld-01 ~]$ openstack volume type set NFS --property volume_backend_name=tripleo_nfs
[stack@btrhlagrnce-h-ucld-01 ~]$ openstack volume create --size 1 --type NFS nfs_workaround
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-27T18:12:21.580279 |
| description | None |
| encrypted | False |
| id | 4cc6102b-c79a-4b78-935a-a69cbea99704 |
| migration_status | None |
| multiattach | False |
| name | nfs_workaround |
| properties | |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | NFS |
| updated_at | None |
| user_id | f2383e92683a4615a165641b3c4ca69f |
+---------------------+--------------------------------------+
[stack@btrhlagrnce-h-ucld-01 ~]$ openstack volume list
+--------------------------------------+----------------+--------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+----------------+--------+------+-------------+
| 4cc6102b-c79a-4b78-935a-a69cbea99704 | nfs_workaround | error | 1 | |
+--------------------------------------+----------------+--------+------+-------------+
[stack@btrhlagrnce-h-ucld-01 ~]$
[root@btrhlagrnce-h-pe4dloc-003 ~]# cat /etc/cinder/cinder.conf | grep 'nas_secure_file'
#nas_secure_file_operations = auto
#nas_secure_file_permissions = auto
nas_secure_file_permissions=False
nas_secure_file_operations=False
[root@btrhlagrnce-h-pe4dloc-003 ~]#
[root@btrhlagrnce-h-pe4dloc-003 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: btrhlagrnce-h-pe4dloc-003 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Mon Mar 27 18:11:11 2017 Last change: Mon Mar 27 18:10:24 2017 by root via crm_resource on btrhlagrnce-h-pe4dloc-003
3 nodes and 19 resources configured
Online: [ btrhlagrnce-h-pe4dloc-001 btrhlagrnce-h-pe4dloc-002 btrhlagrnce-h-pe4dloc-003 ]
Full list of resources:
ip-192.168.11.254 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-001
Clone Set: haproxy-clone [haproxy]
Started: [ btrhlagrnce-h-pe4dloc-001 btrhlagrnce-h-pe4dloc-002 btrhlagrnce-h-pe4dloc-003 ]
Master/Slave Set: galera-master [galera]
Masters: [ btrhlagrnce-h-pe4dloc-001 btrhlagrnce-h-pe4dloc-002 btrhlagrnce-h-pe4dloc-003 ]
ip-192.168.12.9 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-002
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ btrhlagrnce-h-pe4dloc-001 btrhlagrnce-h-pe4dloc-002 btrhlagrnce-h-pe4dloc-003 ]
ip-192.168.8.5 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-003
ip-192.168.12.7 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-001
Master/Slave Set: redis-master [redis]
Masters: [ btrhlagrnce-h-pe4dloc-003 ]
Slaves: [ btrhlagrnce-h-pe4dloc-001 btrhlagrnce-h-pe4dloc-002 ]
ip-2001.4888.a42.3101.420.fe0.0.2000 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-002
openstack-cinder-volume (systemd:openstack-cinder-volume): Started btrhlagrnce-h-pe4dloc-003
ip-10.217.162.138 (ocf::heartbeat:IPaddr2): Started btrhlagrnce-h-pe4dloc-001
(In reply to David Peacock from comment #5)
> [root@btrhlagrnce-h-pe4dloc-003 ~]# cat /etc/cinder/cinder.conf | grep 'nas_secure_file'
> #nas_secure_file_operations = auto
> #nas_secure_file_permissions = auto
> nas_secure_file_permissions=False
> nas_secure_file_operations=False

Can you confirm that these settings are in the driver backend section of cinder.conf and not the default section? I can't tell from this output. (Just grabbing cinder.conf may be easiest.) We should also get a cinder volume log to see what's really going on.

I'll check that, Eric. Thanks.

Working through some logistical issues with the customer in getting these details. Can you confirm the specific section of cinder.conf you're talking about, Eric? `[BACKEND]`? Or does it have another name? Thank you, David

The customer advises me that the lines were added in the [DEFAULT] section of cinder.conf. Please find attached the log files from the last test. Thank you, David

Created attachment 1267667 [details]
cinder volume log and others

Created attachment 1267711 [details]
Attached example cinder.conf

I've attached an example cinder.conf (nfs) file from my RFE mentioned below. Look at Eric's comment #6 on this bug, then review a recent NFS RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1161413#c16. Notice that initially I also changed stuff under the default section, which is not good :) The correct method comes later, in comment #19.

Under cinder.conf's [DEFAULT] section you should only change this:

enabled_backends=nfs (nfs was my chosen name; use anything you like)

Then at the bottom of cinder.conf create a new section [nfs] (same name as above):

volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/ ..

NFS mount tips I used suggest you try:
- If you run the mount command on the controller, does the cinder mount show up?
- If it doesn't show up, can you successfully mount the share manually?
- Can you read/write a temp file on that mount?

Another NFS issue hit on my RFE; not sure it's relevant to your case, probably isn't, but just in case. If your NFS clients (controller/compute nodes) are behind a NAT before they reach the NFS server, you need to allow the insecure NFS share option:

/export/ins_cinder *(rw,insecure,no_root_squash)

@Tzach, thank you for this information; I'm working with our customer to see how this works out for them. I'll be back in touch. David

Hi guys, I have confirmation back that with the settings in the correct section of cinder.conf, the mountpoints do work correctly. What I need from engineering next is twofold:

1) An understanding of why, when the customer configures the templates as attached (which looks at face value to be the idiomatic way, as recommended by comments), these crucial cinder settings aren't put in the right place in the resulting cinder.conf, and
2) Advice on how best to templatize this so that it works out of the box on a fresh deployment without any post-deployment workaround.

Thank you very much indeed for your help; I (and our customer) really appreciate it. David

Created attachment 1271537 [details]
Custom templates from customer with Cinder / NFS issues

These are the customer's current templates; they look correct to me but result in our known issue as worked in this BZ. I'd like to know the best practice for modifying them so that the crucial settings end up in the correct (non-default) section of the resulting cinder.conf.
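For reference, one pre-fix way to push such options into a specific (non-default) section of cinder.conf from TripleO templates was puppet-cinder's `cinder::config` hieradata hook. The following is a sketch only: it assumes the backend section is named `tripleo_nfs` (as in the verified config later in this report) and that the `cinder::config` class is wired up in this release.

```yaml
parameter_defaults:
  ExtraConfig:
    # section/key hieradata keys; values land in [tripleo_nfs], not [DEFAULT]
    cinder::config::cinder_config:
      tripleo_nfs/nas_secure_file_operations:
        value: 'False'
      tripleo_nfs/nas_secure_file_permissions:
        value: 'False'
```

The actual fix tracked by this bug made equivalent TripleO settings first-class and flipped the defaults, so this workaround should not be needed on fixed versions.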
Hi David, we hope to fully resolve this soon. See bug #1393924. It's now a priority for OSP-11.

Thank you, Alan. What's the timeline for OSP-11 at this point? I have advised Verizon to continue with their workaround for now, pending a bugfix. Thank you, David

The current OSP-11 schedule is RC on Apr-27 and GA on May-18. However, I do not yet know the timeline for the bug fix; that is, I cannot say whether it will be pre or post GA.

That's good enough as a guesstimate. Thanks a lot, Alan; much appreciated. David

*** Bug 1327616 has been marked as a duplicate of this bug. ***

Verified on:
openstack-tripleo-heat-templates-5.3.0-2.el7ost.noarch
puppet-tripleo-5.6.1-1.el7ost.noarch
puppet-cinder-9.5.0-2.el7ost.noarch
Configured Cinder with NFS backend via THT template
Cinder create worked
[stack@undercloud-0 ~]$ cinder show 11209f55-5cbf-41f5-b0c7-af059d43f000
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-08-16T19:50:02.000000 |
.. |
| encrypted | False |
| id | 11209f55-5cbf-41f5-b0c7-af059d43f000 |
.. |
| multiattach | False |
| name | one |
| os-vol-host-attr:host          | hostgroup@tripleo_nfs#tripleo_nfs    |
...
| status                         | available                            |
+--------------------------------+--------------------------------------+
storage yaml
_______________________
## Whether to enable iscsi backend for Cinder.
CinderEnableIscsiBackend: false
## Whether to enable rbd (Ceph) backend for Cinder.
CinderEnableRbdBackend: false
## Cinder Backup backend can be either 'ceph' or 'swift'.
CinderBackupBackend: false
## Whether to enable NFS backend for Cinder.
CinderEnableNfsBackend: true
## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
NovaEnableRbdBackend: false
## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
GlanceBackend: swift
## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
GnocchiBackend: swift
#### CINDER NFS SETTINGS ####
## NFS mount options
CinderNfsMountOptions: ''
## NFS mount point, e.g. '192.168.122.1:/export/cinder'
CinderNfsServers: '10.35.160.111:/export/ins_cinder'
_____________________
Cinder nfs config section
enabled_backends = tripleo_nfs
[tripleo_nfs]
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nas_secure_file_permissions=False
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=
nas_secure_file_operations=False
volume_backend_name=tripleo_nfs
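For context on the `auto` default mentioned in the Doc Text: in `auto` mode the driver probes the NFS share to decide whether the NAS secure features apply, so behavior differs between fresh and pre-existing shares. The sketch below is a rough, hypothetical approximation of that decision logic (the marker filename and the exact rules are from memory of the upstream driver, not a verbatim copy):

```python
import os

def resolve_nas_secure(mount_point: str, configured: str) -> bool:
    """Approximate how a nas_secure_* option resolves.

    Explicit 'True'/'False' win; 'auto' enables secure mode only when a
    marker file is present, or when the share looks like a fresh install
    (empty), in which case the marker is dropped so later runs agree.
    """
    if configured.lower() in ('true', 'false'):
        return configured.lower() == 'true'
    marker = os.path.join(mount_point, '.cinderSecureEnvIndicator')
    if os.path.exists(marker):
        return True
    if not os.listdir(mount_point):
        # Fresh share: opt in to secure mode and record the decision.
        open(marker, 'w').close()
        return True
    # Existing data but no marker: assume a legacy deployment, stay insecure.
    return False
```

This is why existing deployments could end up with secure mode on (or off) without any explicit setting, and why the fix makes the TripleO default an explicit `False`.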
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2654 |