Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1637030

Summary: Cinder c-vol ignores NFS mount issue and ends up creating volumes locally when it should fail.
Product: Red Hat OpenStack Reporter: Tzach Shefi <tshefi>
Component: python-os-brick Assignee: Eric Harney <eharney>
Status: CLOSED ERRATA QA Contact: Tzach Shefi <tshefi>
Severity: high Docs Contact: Kim Nylander <knylande>
Priority: high    
Version: 14.0 (Rocky) CC: abishop, apevec, eharney, jschluet, lhh, pgrist
Target Milestone: z3 Keywords: Triaged, ZStream
Target Release: 14.0 (Rocky)   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: python-os-brick-2.5.5-1.el7ost Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-07-02 20:08:26 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Cinder logs (Flags: none)

Description Tzach Shefi 2018-10-08 13:30:54 UTC
Created attachment 1491664 [details]
Cinder logs

Description of problem: Hit while working on the related SELinux Glance+Cinder same-NFS-server issue: https://bugzilla.redhat.com/show_bug.cgi?id=1637014

c-vol ignores the fact that Cinder's NFS mount failed, reports the NFS back end as up, and even creates an "available" volume.
The volume is created locally rather than on the NFS share.
The fact that this happens is bad enough; it's also misleading. I was confused about it when I asked Alan, who then raised this second issue.
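
For anyone reproducing this, a quick way to confirm the volume data really landed on local disk instead of the share is something like the following (paths assume the NFS driver's default nfs_mount_point_base of /var/lib/cinder/mnt; on a containerized deployment these may need to run inside the cinder_volume container):

sudo findmnt -t nfs,nfs4                    # no Cinder share mounted, even though the back end reports up
sudo ls -lh /var/lib/cinder/mnt/*/volume-*  # volume file(s) present under the unmounted mount point
df -h /var/lib/cinder/mnt                   # backed by the local root filesystem, not 10.35.160.111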

Version-Release number of selected component (if applicable):
python-cinder-13.0.1-0.20180917193045.c56591a.el7ost.noarch
puppet-cinder-13.3.1-0.20180917145846.550e793.el7ost.noarch
openstack-cinder-13.0.1-0.20180917193045.c56591a.el7ost.noarch
python2-cinderclient-4.0.1-0.20180809133302.460229c.el7ost.noarch
openstack-selinux-0.8.15-0.20180823061238.b63283a.el7ost.noarch
RHEL 7.5 

How reproducible:
Every time

Steps to Reproduce:
1. I used OSPd to deploy NFS as the back end for both Glance and Cinder by adding these to overcloud_deploy.sh:
-e /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-nfs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml \
-e /home/stack/virt/extra_templates.yaml \


[stack@undercloud-0 ~]$ cat /home/stack/virt/extra_templates.yaml
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  CinderNfsMountOptions: 'retry=1'
  CinderNfsServers: '10.35.160.111:/export/ins_cinder'

  GlanceBackend: 'file'
  GlanceNfsEnabled: true
  GlanceNfsShare: '10.35.160.111:/export/ins_glance'
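
For reference, these parameter_defaults should end up rendered as a standard Cinder NFS back end inside the cinder_volume container, roughly like the sketch below (the [tripleo_nfs] section name matches the host column in service-list; the config path under /var/lib/config-data and the shares file name are assumptions, so adjust as needed):

sudo grep -F -A 4 '[tripleo_nfs]' /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
[tripleo_nfs]
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=retry=1

sudo cat /var/lib/config-data/puppet-generated/cinder/etc/cinder/shares-nfs.conf
10.35.160.111:/export/ins_cinder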


Deployment completed successfully and "looks" fine.

2. Create a volume and check the service status. Despite the Cinder NFS mount failing, the volume is successfully created and goes "available".
The service status is also up. See below.


Actual results:
Service and volume are reported up/available despite the fact that the Cinder NFS mount failed; both should be down/in error.

cinder service-list
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0          | nova | enabled | up    | 2018-10-08T12:46:47.000000 | -               |
| cinder-volume    | hostgroup@tripleo_nfs | nova | enabled | up    | 2018-10-08T12:46:55.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 6e909b3f-96b3-4777-8092-28867dbb6f16 | available | -            | 1    | tripleo     | false    |                                      |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
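
The mount failure itself should be visible in the attached c-vol log; on a live system something like this can be used to spot it (containerized log path assumed):

sudo grep -iE 'nfs|mount' /var/log/containers/cinder/volume.log | tail -n 20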


Expected results:
Service state should be down and volume should be in error state.

Comment 1 Alan Bishop 2018-10-09 19:58:25 UTC
Targeting 13z as 14 hasn't been released yet.

Comment 6 Tzach Shefi 2019-06-02 11:10:56 UTC
Verified on:
python-os-brick-2.5.5-1.el7ost

Again the deployment completed without errors; however, as opposed to my original comment #1,
this time around the NFS back end's state is correctly reported as down (which is what this BZ fixes).

Do note that the down state is expected,
as the root cause (bz1637014) for NFS being down has not been resolved yet.
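
Since the underlying failure is the SELinux issue tracked in bz1637014, the denial that blocks the mount should show up on the controller with something like:

sudo ausearch -m avc -ts recent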


(overcloud) [stack@undercloud-0 ~]$ cinder service-list
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0          | nova | enabled | up    | 2019-06-02T10:23:52.000000 | -               |
| cinder-volume    | hostgroup@tripleo_nfs | nova | enabled | down  | 2019-06-02T10:19:32.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+


Cinder volume create also fails, with the volume going to error state due to the NFS back end being down; this is also expected.

Comment 10 errata-xmlrpc 2019-07-02 20:08:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1672