Bug 1371911 - Cinder create volume from snapshot chmod permission error
Summary: Cinder create volume from snapshot chmod permission error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: z3
Target Release: 11.0 (Ocata)
Assignee: Alan Bishop
QA Contact: Amit Ugol
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-31 12:20 UTC by Ondrej
Modified: 2022-08-16 13:58 UTC (History)
19 users

Fixed In Version: openstack-tripleo-heat-templates-6.1.0-1.el7ost, puppet-tripleo-6.5.0-1.el7ost, puppet-cinder-10.3.1-1.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, some cinder volume operations would fail when using the NFS backend. This was because cinder's NFS backend driver implements enhanced NAS security features that are enabled by default. These features require non-standard configuration changes in nova's libvirt, and without these changes, some cinder volume operations would fail. This update introduces TripleO settings to control the NFS driver's NAS secure features, and disables the features by default. As a result, cinder volume operations no longer fail when using the NFS backend.
Clone Of:
Environment:
Last Closed: 2017-10-31 17:37:35 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1688332 0 None None None 2017-05-16 14:54:24 UTC
OpenStack gerrit 462663 0 None MERGED Add support for Cinder "NAS secure" driver params 2020-11-09 14:15:16 UTC
OpenStack gerrit 462665 0 None MERGED Add support for Cinder "NAS secure" driver params 2020-11-09 14:15:16 UTC
OpenStack gerrit 462667 0 None MERGED Add support for Cinder "NAS secure" driver params 2020-11-09 14:15:16 UTC
Red Hat Bugzilla 1327616 0 unspecified CLOSED backup-create fails if I mount the NFS volume on instance 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1360424 0 high CLOSED Cinder backup raises DeviceUnavailable for NFS volumes 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1433404 0 unspecified CLOSED Creating a Cinder Volume using NFS Backend fails 2022-08-05 14:55:18 UTC
Red Hat Issue Tracker OSP-4567 0 None None None 2022-08-16 13:58:08 UTC
Red Hat Knowledge Base (Article) 2594651 0 None None None 2016-08-31 12:58:52 UTC
Red Hat Knowledge Base (Solution) 2594651 0 None None None 2016-09-23 06:37:17 UTC
Red Hat Product Errata RHBA-2017:3098 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 11.0 director Bug Fix Advisory 2017-10-31 21:33:28 UTC

Internal Links: 1433404

Description Ondrej 2016-08-31 12:20:01 UTC
Description of problem:
Cinder with a NetApp volume backend.
Creating a volume from a snapshot fails with a chmod permission error because
cinder does not run the command as root via the rootwrap wrapper.

/var/log/cinder/volume.log
2016-08-26 09:54:06.488 17049 ERROR oslo_messaging.rpc.dispatcher [req-af9a6f2b-46f1-4cf2-b4d7-c31b9dad5879 0bb110d62bc14529b98c99c43ecff1b5 8c9e0ad42a5c4deb90c69608604558cf - - -] Exception during message handling: Unexpected error while running command.
Command: None
Exit code: -
Stdout: u"Unexpected error while running command.\nCommand: chmod 660 /var/lib/cinder/mnt/afc4b156a118a2e0550d529e141763dc/volume-c9ddafc5-9215-457e-a846-0bcf8ed215ac\nExit code: 1\nStdout: u''\nStderr: 'chmod: changing permissions of \\xe2\\x80\\x98/var/lib/cinder/mnt/afc4b156a118a2e0550d529e141763dc/volume-c9ddafc5-9215-457e-a846-0bcf8ed215ac\\xe2\\x80\\x99: Permission denied\\n'"
Stderr: None
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 471, in create_volume
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     _run_flow_locked()
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 461, in _run_flow_locked
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     _run_flow()
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 457, in _run_flow
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     flow_engine.run()
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 96, in run
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     for _state in self.run_iter():
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 153, in run_iter
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     failure.Failure.reraise_if_any(failures.values())
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 244, in reraise_if_any
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     failures[0].reraise()
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/types/failure.py", line 251, in reraise
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 67, in _execute_task
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     result = task.execute(**arguments)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 643, in execute
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     **volume_spec)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 421, in _create_from_snapshot
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     snapshot)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 103, in create_volume_from_snapshot
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     self._set_rw_permissions(path)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 309, in _set_rw_permissions
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     run_as_root=self._execute_as_root)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 143, in execute
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     return processutils.execute(*cmd, **kwargs)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 266, in execute
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher     cmd=sanitized_cmd)
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher ProcessExecutionError: Unexpected error while running command.
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher Command: chmod 660 /var/lib/cinder/mnt/afc4b156a118a2e0550d529e141763dc/volume-c9ddafc5-9215-457e-a846-0bcf8ed215ac
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher Exit code: 1
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher Stdout: u''
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher Stderr: 'chmod: changing permissions of \xe2\x80\x98/var/lib/cinder/mnt/afc4b156a118a2e0550d529e141763dc/volume-c9ddafc5-9215-457e-a846-0bcf8ed215ac\xe2\x80\x99: Permission denied\n'
2016-08-26 09:54:06.488 17049 TRACE oslo_messaging.rpc.dispatcher


The workaround is to set nas_secure_file_operations to false in cinder.conf, under the driver's own back end section, for example:

[NetApp_backend]
nas_secure_file_operations=false
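For context, the failing call comes from the remotefs driver's _set_rw_permissions (see the traceback above). The following is a minimal, hypothetical Python sketch of the relevant behavior, not the actual cinder code, showing how the NAS secure setting decides whether the chmod runs through the root wrapper:

```python
import os

def set_rw_permissions(path, nas_secure_file_operations=True):
    """Hypothetical sketch of cinder's remotefs _set_rw_permissions logic.

    When NAS secure file operations are enabled, the command is NOT run
    through the root wrapper, so it executes as the unprivileged cinder
    user and fails with 'Permission denied' on a root-squashed NFS export
    that the cinder user does not own.
    """
    # In the real driver this flag is passed as run_as_root= to the
    # command executor; here we only compute and return it.
    run_as_root = not nas_secure_file_operations
    os.chmod(path, 0o660)  # the "chmod 660 <volume file>" from the log
    return run_as_root
```

With nas_secure_file_operations=false (the workaround above), run_as_root becomes true and the chmod is executed via rootwrap, so it succeeds.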


Version-Release number of selected component (if applicable):


How reproducible:
Every time.

Steps to Reproduce:
1. Create a volume.
2. Attach it to an instance.
3. Detach it from the instance.
4. Create a snapshot of the volume.
5. Create a volume from the snapshot.

Actual results:
Volume creation in step 5 fails with a "chmod 660" permission error.

Expected results:
The volume is created successfully.

Additional info:
openstack-cinder-2015.1.3-5.el7ost.noarch
python-cinder-2015.1.3-5.el7ost.noarch
python-cinderclient-1.2.1-1.el7ost.noarch

Comment 12 Alan Bishop 2017-03-24 20:22:30 UTC
Fix has been merged into stable/ocata.

Comment 13 Alan Bishop 2017-03-24 20:23:03 UTC
Whoops, sorry, wrong bug.

Comment 16 Alan Bishop 2017-03-28 15:50:50 UTC
The most straightforward way of resolving this for OSP-11 is to enhance the
OSP deployment documentation. Section 3 of the NetApp Block Storage Back End
Guide [1] describes how to deploy a NetApp back end using a user-customized
copy of the cinder-netapp-config.yaml environment file. The guide can be
enhanced to direct the user to add the following additional lines to the file:

  ControllerExtraConfig:
    cinder::config::cinder_config:
      tripleo_netapp/nas_secure_file_operations:
        value: false

The lines should be appended at the bottom, under the existing
parameter_defaults stanza. Leading whitespace is critical. This will trigger
the director to add a "nas_secure_file_operations=False" setting in the
appropriate section of the /etc/cinder/cinder.conf file on each controller.

Note: The "tripleo_netapp" substring needs to match the
CinderNetappBackendName specified by the user ("tripleo_netapp" is the default
name). See the first entry in Table 1. of the guide [1].

[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/netapp_block_storage_back_end_guide/
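Putting the instructions above together, the tail of a customized cinder-netapp-config.yaml would look roughly like this. This is a sketch: the surrounding parameter_defaults entries are abbreviated, and the "tripleo_netapp" substring must match the deployment's CinderNetappBackendName:

```yaml
parameter_defaults:
  # ... existing NetApp back end parameters from the guide ...
  CinderNetappBackendName: 'tripleo_netapp'
  ControllerExtraConfig:
    cinder::config::cinder_config:
      tripleo_netapp/nas_secure_file_operations:
        value: false
```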

Comment 17 Alan Bishop 2017-05-16 14:53:10 UTC
Assigning back to myself ON_DEV. The Director's heat templates are being updated to support controlling the NAS secure settings without resorting to the workaround described in comment #16. This is targeted for an early OSP-11z release, so there's no need to document the workaround.

To be clear, the resolution will be an updated heat template that supports disabling the NAS secure feature. Customers that require the NAS secure feature be enabled should track bug #1393924, but that work won't be ready any time soon.

Comment 18 Lucy Bopf 2017-05-17 04:32:09 UTC
Hi Alan,

This bug is still against the 'documentation' component. Can we move it back to 'rhosp-director' (or something else) now that there's no docs requirement?

Comment 19 Alan Bishop 2017-05-17 11:55:16 UTC
Thanks, Lucy, I meant to do that.

Comment 20 Lon Hohberger 2017-09-06 19:57:33 UTC
According to our records, this should be resolved by openstack-tripleo-heat-templates-6.1.0-2.el7ost.  This build is available now.

Comment 21 Lon Hohberger 2017-09-06 19:57:39 UTC
According to our records, this should be resolved by puppet-tripleo-6.5.0-5.el7ost.  This build is available now.

Comment 22 Lon Hohberger 2017-09-06 19:57:45 UTC
According to our records, this should be resolved by puppet-cinder-10.3.1-1.el7ost.  This build is available now.

Comment 23 Tzach Shefi 2017-09-11 13:38:41 UTC
I don't have access to a netapp system to actually test this. 
Gave fake details, cinder conf netapp section

[tripleo_netapp]
volume_backend_name=tripleo_netapp
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_login=admin
netapp_password=admin
netapp_server_hostname=10.35.160.11
netapp_server_port=80
netapp_size_multiplier=1.2
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
netapp_transport_type=http
netapp_vfiler=
netapp_vserver=
netapp_partner_backend_name=
expiry_thres_minutes=720
thres_avl_size_perc_start=20
thres_avl_size_perc_stop=60
nfs_shares_config=/etc/cinder/shares.conf
netapp_copyoffload_tool_path=
netapp_controller_ips=
netapp_sa_password=
netapp_pool_name_search_pattern=()
netapp_webservice_path=/devmgr/v2
nas_secure_file_operations=False
nas_secure_file_permissions=False

(The last two options are the ones that were added.)

Let me know if this is sufficient to verify; if not, all I can do is mark it OtherQA and move on.
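One quick way to sanity-check the resulting section is a self-contained sketch using Python's configparser; the sample below mirrors the relevant lines of the cinder.conf above, with the default tripleo_netapp back end name assumed:

```python
import configparser

# Stand-in for the relevant part of /etc/cinder/cinder.conf on a controller.
sample = """
[tripleo_netapp]
volume_backend_name=tripleo_netapp
nas_secure_file_operations=False
nas_secure_file_permissions=False
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Both NAS secure options should be present and disabled.
assert cfg.get("tripleo_netapp", "nas_secure_file_operations") == "False"
assert cfg.get("tripleo_netapp", "nas_secure_file_permissions") == "False"
```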

Comment 24 Tzach Shefi 2017-09-12 06:32:46 UTC
Verified based on comment #23.
I noticed the two options below were added with default values when I used the Cinder NetApp yaml template.

nas_secure_file_operations=False      
nas_secure_file_permissions=False

As I don't have access to a NetApp system, I can't reproduce the actual steps.
This is the best I can do to verify.

Comment 27 errata-xmlrpc 2017-10-31 17:37:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3098

