Bug 1161413 - [RFE][cinder]: NFS driver snapshot support
Summary: [RFE][cinder]: NFS driver snapshot support
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: Upstream M3
Target Release: 11.0 (Ocata)
Assignee: Eric Harney
QA Contact: Tzach Shefi
Docs Contact: Don Domingo
URL: https://blueprints.launchpad.net/cind...
Whiteboard: upstream_milestone_none upstream_defi...
Duplicates: 1155504 (view as bug list)
Depends On:
Blocks: 1515672
 
Reported: 2014-11-07 05:06 UTC by RHOS Integration
Modified: 2023-02-22 23:02 UTC
CC List: 16 users

Fixed In Version: openstack-cinder-10.0.0-0.20170131044149.25a3765.el7ost
Doc Type: Enhancement
Doc Text:
The NFS back end driver for the Block Storage service now supports snapshots.
Clone Of:
Clones: 1515672 (view as bug list)
Environment:
Last Closed: 2017-05-17 19:22:10 UTC
Target Upstream Version:
Embargoed:


Attachments
Packstack answer file ocata.conf, cinder.conf, cinder logs (261.01 KB, application/x-gzip)
2017-03-07 12:21 UTC, Tzach Shefi


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 133074 0 None MERGED NFS driver snapshot support 2020-10-02 16:58:32 UTC
OpenStack gerrit 147186 0 None MERGED NFS snapshots 2020-10-02 16:58:32 UTC
Red Hat Product Errata RHEA-2017:1245 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 11.0 Bug Fix and Enhancement Advisory 2017-05-17 23:01:50 UTC

Description RHOS Integration 2014-11-07 05:06:58 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/nfs-snapshots.

Description:

Add support for snapshots to the NFS volume driver
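For context, a minimal sketch of the workflow this enables, using the cinder CLI (the 'nfs' volume type name and the IDs are placeholders):

  # create a 1 GB volume on the NFS back end
  cinder create --volume-type nfs 1
  # snapshot it
  cinder snapshot-create <volume-id>
  # create a new volume from the snapshot
  cinder create --snapshot-id <snapshot-id> 1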

Specification URL (additional information):

None

Comment 1 Sean Cohen 2014-11-26 03:33:03 UTC
The feature is under review for Kilo milestone 2 
https://review.openstack.org/#/c/133074/
Sean

Comment 2 Sean Cohen 2014-11-26 03:35:03 UTC
*** Bug 1155504 has been marked as a duplicate of this bug. ***

Comment 3 Sean Cohen 2015-08-24 20:06:53 UTC
Full Spec available at: http://specs.openstack.org/openstack/cinder-specs/specs/liberty/nfs-snapshots.html

Comment 4 Sergey Gotliv 2015-08-27 13:52:04 UTC
Eric is leading this effort upstream. The spec is already merged, but it will probably only be implemented in the M (Mitaka) cycle.

Comment 5 Eric Harney 2015-08-27 14:26:41 UTC
This is currently blocking on a required fix in Nova:
  https://bugs.launchpad.net/nova/+bug/1416132

Comment 10 Christian Schwede (cschwede) 2017-01-26 15:03:54 UTC
Upstream patches merged, moving from ON_DEV to POST.

Adding a needinfo: we need to document the required config change and the limitations so that anyone can actually use this feature.

Comment 11 Eric Harney 2017-01-26 16:17:51 UTC
Deployments must set "nfs_snapshot_support = True" in the volume driver backend section of cinder.conf for this feature to be enabled.
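For example, assuming the back end section is named [nfs] (the section name and share config path are deployment-specific):

[nfs]
volume_backend_name = nfs
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares.conf
nfs_snapshot_support = True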

Comment 13 Tzach Shefi 2017-03-07 10:30:39 UTC
Eric, I need your insight on this.
There is a temporary issue with an undercloud bug, so I can't install with Infra; I started testing this with Packstack instead.

Simple all-in-one (AIO) server:
openstack-cinder-10.0.1-0.20170222155052.ea70d55.el7ost.noarch
puppet-cinder-10.3.0-0.20170220170133.0192e52.el7ost.noarch
python-cinderclient-1.11.0-0.20170208174522.7d140d0.el7ost.noarch
python-cinder-10.0.1-0.20170222155052.ea70d55.el7ost.noarch


Relevant settings in the Packstack answer file:
CONFIG_CINDER_BACKEND=nfs
CONFIG_CINDER_NFS_MOUNTS=XXXXXXXX:/export/cinder

Also enabled in cinder.conf:
# grep nfs_snapshot /etc/cinder/cinder.conf 
nfs_snapshot_support = True

Cinder create works fine; the volume is created on the NFS back end:
# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| e84c299d-fafd-4c4f-9a23-5ec3f1337057 | available | -    | 1    | nfs         | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+


Cinder snapshot create fails

[root@panther13 ~(keystone_admin)]# cinder snapshot-list
+--------------------------------------+--------------------------------------+--------+-----------+------+
| ID                                   | Volume ID                            | Status | Name      | Size |
+--------------------------------------+--------------------------------------+--------+-----------+------+
| e1c82a7c-3729-4098-998f-aeb620b27bce | e84c299d-fafd-4c4f-9a23-5ec3f1337057 | error  | snapshot1 | 1    |
| f3da162b-5c51-425b-afa8-874a14928ec6 | e84c299d-fafd-4c4f-9a23-5ec3f1337057 | error  | snap2     | 1    |
+--------------------------------------+--------------------------------------+--------+-----------+------+


I've since added another volume and more snapshots; none work.
Shouldn't I see some errors in the volume log (debug is set to true)?

Comment 15 Tzach Shefi 2017-03-07 12:21:56 UTC
Created attachment 1260780 [details]
Packstack answer file ocata.conf, cinder.conf, cinder logs

Forgot to attach these earlier: log files from my system.

Comment 16 Eric Harney 2017-03-07 15:32:03 UTC
You need to set nfs_snapshot_support=True in the [nfs] backend section of cinder.conf instead of in [DEFAULT].
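That is, roughly (a sketch; the [nfs] section name must match the entry in enabled_backends):

# Not picked up by the per-backend driver:
[DEFAULT]
nfs_snapshot_support = True

# Correct placement:
[nfs]
nfs_snapshot_support = True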

Comment 17 Tzach Shefi 2017-03-07 16:34:58 UTC
You do have a good point, sorry, my bad.
So I fixed it and restarted the services.

[nfs]
volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares.conf
nfs_snapshot_support=True     (commented out the one I had set in the [DEFAULT] section)


Now I'm hitting something else; I can't even create a volume:



lib/python2.7/site-packages/cinder/volume/drivers/nfs.py:431
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.drivers.nfs [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Snapshots are not supported with nas_secure_file_operations enabled ('true' or 'auto'). Please set it to 'false' if you intend to  have it enabled.
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Failed to initialize driver.
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager Traceback (most recent call last):
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 431, in init_host
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager     self.driver.do_setup(ctxt)
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 199, in do_setup
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager     self._check_snapshot_support(setup_checking=True)
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 551, in _check_snapshot_support
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager     raise exception.VolumeDriverException(message=msg)
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager VolumeDriverException: Volume driver reported an error: Snapshots are not supported with nas_secure_file_operations enabled ('true' or 'auto'). Please set it to 'false' if you intend to  have it enabled.
2017-03-07 18:25:16.772 17511 ERROR cinder.volume.manager 
2017-03-07 18:25:16.773 17511 DEBUG cinder.service [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Creating RPC server for service cinder-volume start /usr/lib/python2.7/site-packages/cinder/service.py:243
2017-03-07 18:25:16.776 17511 DEBUG oslo_db.sqlalchemy.engines [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261
2017-03-07 18:25:16.780 17511 DEBUG cinder.service [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Pinning object versions for RPC server serializer to 1.21 start /usr/lib/python2.7/site-packages/cinder/service.py:250
2017-03-07 18:25:16.780 17511 WARNING py.warnings [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] /usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:200: FutureWarning: The access_policy argument is changing its default value to <class 'oslo_messaging.rpc.dispatcher.DefaultRPCAccessPolicy'> in version '?', please update the code to explicitly set None as the value: access_policy defaults to LegacyRPCAccessPolicy which exposes private methods. Explicitly set access_policy to DefaultRPCAccessPolicy or ExplicitRPCAccessPolicy.
  access_policy)

2017-03-07 18:25:16.810 17511 INFO cinder.volume.manager [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Initializing RPC dependent components of volume driver NfsDriver (1.4.0)
2017-03-07 18:25:16.811 17511 ERROR cinder.utils [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Volume driver NfsDriver not initialized
2017-03-07 18:25:16.811 17511 ERROR cinder.volume.manager [req-44f4716d-9400-413f-997a-b26c737b1da6 - - - - -] Cannot complete RPC initialization because driver isn't initialized properly.
2017-03-07 18:25:26.812 17511 ERROR cinder.service [-] Manager for service cinder-volume panther13.qa.lab.tlv.redhat.com@nfs is reporting problems, not sending heartbeat. Service will appear "down".


I'm guessing this isn't good:
 Snapshots are not supported with nas_secure_file_operations enabled ('true' or 'auto'). Please set it to 'false' if you intend to have it enabled.


While looking at cinder.conf I saw nas_secure_file_operations, and also
noticed this one: #nas_secure_file_permissions = auto

Guess it needs a bit more fiddling around.
Thanks, Eric.

Comment 18 Eric Harney 2017-03-07 16:39:49 UTC
(In reply to Tzach Shefi from comment #17)

> 
> While looking at cinder.conf I saw nas_secure_file_operations, and also
> noticed this one: #nas_secure_file_permissions = auto
> 
> Guess it needs a bit more fiddling around.
> Thanks, Eric.

Correct, you'll have to set "nas_secure_file_permissions = False" for this feature to work.
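After changing cinder.conf, the volume service needs a restart so the driver re-initializes; on these RHEL 7 packages that would be something like:

  # systemctl restart openstack-cinder-volume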

Comment 19 Tzach Shefi 2017-03-15 08:36:23 UTC
Verified on version:
openstack-cinder-10.0.1-0.20170222155052.ea70d55.el7ost.noarch

This is what I had set in cinder.conf:
enabled_backends=nfs

[nfs]
volume_backend_name=nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares.conf
nfs_snapshot_support=True
nas_secure_file_operations=False  (without which the driver fails to initialize; the default is 'auto', which counts as enabled)
nas_secure_file_permissions=False 

Ran this twice, on a Packstack as well as an OSPD deployment.
There is an open OSPD NFS bug (bz1416356); I had to configure NFS manually post-deploy.

Tempest filtered out 60 snapshot-related tests; 47 of them passed.
The failing tests are the same ones for v1 and v2; they failed because I had no connectivity to the instance, which was expected.

Tempest summary:
Ran 60 (+47) tests in 639.747s (+598.497s)
FAILED (id=2, failures=7 (+7), skips=3)

FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_create_offline_delete_online[compute,id-5210a1de-85a0-11e6-bb21-641c676a5d61]
FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_offline_delete_online[compute,id-5210a1de-85a0-11e6-bb21-641c676a5d61]

FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_create_with_volume_in_use[compute,id-b467b54c-07a4-446d-a1cf-651dedcc3ff1]
FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_with_volume_in_use[compute,id-b467b54c-07a4-446d-a1cf-651dedcc3ff1]

FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_delete_with_volume_in_use[compute,id-8567b54c-4455-446d-a1cf-651ddeaa3ff2]
FAIL: tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_delete_with_volume_in_use[compute,id-8567b54c-4455-446d-a1cf-651ddeaa3ff2]

FAIL: tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern[compute,id-608e604b-1d63-4a82-8e3e-91bc665c90b4,image,network]

Comment 22 errata-xmlrpc 2017-05-17 19:22:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1245

