Created attachment 1491664 [details]
Cinder logs

Description of problem:
Hit while working on the SELinux Glance+Cinder same-NFS-server issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1637014

c-vol ignores the fact that Cinder's NFS mount failed: it reports the NFS back end as up and even creates an "available" volume. The volume is created on local disk rather than on the NFS share. That this happens at all is bad enough; it is also misleading. I was confused about it myself when I asked Alan, who raised this second issue.

Version-Release number of selected component (if applicable):
python-cinder-13.0.1-0.20180917193045.c56591a.el7ost.noarch
puppet-cinder-13.3.1-0.20180917145846.550e793.el7ost.noarch
openstack-cinder-13.0.1-0.20180917193045.c56591a.el7ost.noarch
python2-cinderclient-4.0.1-0.20180809133302.460229c.el7ost.noarch
openstack-selinux-0.8.15-0.20180823061238.b63283a.el7ost.noarch
RHEL 7.5

How reproducible:
Every time

Steps to Reproduce:
1. I used OSPD to deploy NFS as the back end for both Glance and Cinder by adding these to overcloud_deploy.sh:

-e /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-nfs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml \
-e /home/stack/virt/extra_templates.yaml \

[stack@undercloud-0 ~]$ cat /home/stack/virt/extra_templates.yaml
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  CinderNfsMountOptions: 'retry=1'
  CinderNfsServers: '10.35.160.111:/export/ins_cinder'
  GlanceBackend: 'file'
  GlanceNfsEnabled: true
  GlanceNfsShare: '10.35.160.111:/export/ins_glance'

The deployment completed successfully and "looks" fine.

2. Create a volume and check the service status. Despite Cinder's NFS mount failing, the volume is created successfully and reaches "available", and the service state is reported as up. See below.

Actual results:
Service and volume report healthy status even though Cinder's NFS mount failed; both should be down/error.

cinder service-list
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0          | nova | enabled | up    | 2018-10-08T12:46:47.000000 | -               |
| cinder-volume    | hostgroup@tripleo_nfs | nova | enabled | up    | 2018-10-08T12:46:55.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 6e909b3f-96b3-4777-8092-28867dbb6f16 | available | -    | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Expected results:
The cinder-volume service state should be down and the volume should be in error state.
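A quick way to confirm where the volume data actually landed when reproducing this; the paths below are an assumption based on the NFS driver's default nfs_mount_point_base (/var/lib/cinder/mnt) and may differ in your deployment:

# On the controller running cinder-volume: is the Cinder NFS export actually mounted?
mount | grep '10.35.160.111:/export/ins_cinder'

# With the NFS mount missing, the volume-<id> file sits on the controller's
# local disk under the mount point base instead of on the NFS share:
ls -l /var/lib/cinder/mnt/*/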
Targeting 13z, as 14 hasn't been released yet.
Verified on:
python-os-brick-2.5.5-1.el7ost

Again the deployment completed without errors. However, as opposed to my original comment #1, this time around the NFS backend's state is correctly reported as down (which is what this BZ fixes). Do note the down state is expected, as the root cause for NFS being down (bz1637014) has not been resolved yet.

(overcloud) [stack@undercloud-0 ~]$ cinder service-list
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0          | nova | enabled | up    | 2019-06-02T10:23:52.000000 | -               |
| cinder-volume    | hostgroup@tripleo_nfs | nova | enabled | down  | 2019-06-02T10:19:32.000000 | -               |
+------------------+-----------------------+------+---------+-------+----------------------------+-----------------+

Cinder create also fails and the volume ends up in error state because the NFS backend is down, which is also expected.
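For reference, the verification check amounted to roughly the following (the volume name here is illustrative, not the one actually used):

(overcloud) [stack@undercloud-0 ~]$ cinder create --name nfs_verify 1
(overcloud) [stack@undercloud-0 ~]$ cinder show nfs_verify
# status goes to 'error' because the scheduler finds no 'up' NFS backend to place the volume on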
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1672