Description of problem:
When using the manila-provisioner, the NFS access rule is created for 0.0.0.0/0, making the share data accessible from outside the OCP cluster.

Version-Release number of selected component (if applicable):
OCP 3.11 with the latest image registry.redhat.io/openshift3/manila-provisioner:latest

How reproducible:
Always

Steps to Reproduce:
1. Create a PVC using the manila-provisioner with CephFS+NFS
2. Wait until the PV is ready
3. Check the manila access-list for the share

Actual results:
(overcloud) [stack@undercloud ~]$ manila access-list pvc-88ce671a-d2bb-11e8-83ca-fa163eaa3d72
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| id                                   | access_type | access_to | access_level | state  | access_key | created_at                 | updated_at |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| fb81e653-f058-4e08-ad81-d538c7d91753 | ip          | 0.0.0.0/0 | rw           | active | None       | 2018-10-18T09:52:39.000000 | None       |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+

Expected results:
Only the OCP nodes should be allowed access, or at least it should be possible to specify the allowed clients when the PVC is defined.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
Here is the code where 0.0.0.0/0 is hardcoded (note that 0.0.0.0/0 does not actually work; only 0.0.0.0 does!): https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/share/manila/sharebackends/nfs.go#L52
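For illustration, a minimal sketch of how the hardcoded grant could be made configurable. This is not the provisioner's actual code, and the `osShareAccessCIDR` StorageClass parameter name is hypothetical; the point is only that the access-to value could be read from parameters and validated instead of being fixed to 0.0.0.0/0:

```go
package main

import (
	"fmt"
	"net"
)

// accessCIDR returns the CIDR to grant NFS access to. Instead of the
// hardcoded "0.0.0.0/0" in nfs.go, it reads a hypothetical
// "osShareAccessCIDR" StorageClass parameter and validates it.
func accessCIDR(params map[string]string) (string, error) {
	cidr, ok := params["osShareAccessCIDR"]
	if !ok || cidr == "" {
		// Fall back to world-open access, mirroring the current behavior.
		return "0.0.0.0/0", nil
	}
	if _, _, err := net.ParseCIDR(cidr); err != nil {
		return "", fmt.Errorf("invalid osShareAccessCIDR %q: %v", cidr, err)
	}
	return cidr, nil
}

func main() {
	c, err := accessCIDR(map[string]string{"osShareAccessCIDR": "10.0.0.0/16"})
	fmt.Println(c, err)
}
```

A restricted CIDR would then be passed to the manila access-allow call in place of the world-open default.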
The code referenced in comment #1 is not the Manila provisioner we ship in 3.11 -- that is still the old version from the external-storage repo. The old one has the same bug, though in a different place. I will have to fix this by adding a patch to the dist-git with the external-storage rpm.
Upstream PR: https://github.com/kubernetes/cloud-provider-openstack/pull/370
This bug https://bugzilla.redhat.com/show_bug.cgi?id=1616343 is used to track the above issue.