Bug 1929136
| Summary: | openshift isn't able to mount nfs manila shares to pods | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Martin Simka <msimka> |
| Component: | Storage | Assignee: | Eric Duen <eduen> |
| Storage sub component: | OpenStack CSI Drivers | QA Contact: | rlobillo |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | ||
| Priority: | medium | CC: | aos-bugs, gouthamr, ksedmako, mbooth, mbridges, mfedosin, ppitonak, pprinett, rlobillo |
| Version: | 4.6 | Keywords: | Triaged |
| Target Milestone: | --- | ||
| Target Release: | 4.8.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2021-07-27 22:44:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Hello! I think your worker nodes are not connected to the Manila network (172.16.32.0/24); there is no route, so the mounting fails. To connect your workers at installation time, you can use the `additionalNetworkIDs` parameter in the install config:
https://github.com/openshift/installer/blob/master/docs/user/openstack/customization.md#additional-networks

As a day-2 operation, you need to add the new network to the `networks` section of your MachineSet's provider spec:
https://github.com/openshift/installer/blob/master/docs/user/openstack/README.md#defining-a-machineset-that-uses-multiple-networks

Hi Mike, thanks for looking into this and for the documentation PR. I can confirm that it works fine when the Manila network is included in `additionalNetworkIDs`.

*** Bug 1972294 has been marked as a duplicate of this bug. ***

Verified on
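The day-2 MachineSet change suggested above can be sketched as the fragment below. This is illustrative only: the network UUID is the StorageNFS example from this report, the other fields are placeholders, and the exact schema should be taken from the linked installer documentation and your cluster's existing MachineSet.

```yaml
# Hypothetical fragment of an OpenStack MachineSet providerSpec (day-2).
# Only the `networks` list is the point here; other fields are elided.
providerSpec:
  value:
    networks:
    # primary machine network (already present on the workers)
    - subnets:
      - uuid: <machine-subnet-uuid>
    # additional Manila/StorageNFS network, so NFS exports become routable
    - uuid: 07f03886-aba1-4b27-b90d-7f8c00f5f10d
```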
The OSP D/S CI Reference Architecture job adds the StorageNFS network to the workers in the install-config, which enables connectivity towards the OSP Manila endpoint:
$ openstack network show StorageNFS
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2021-06-27T16:29:17Z |
| description | |
| dns_domain | |
| id | 07f03886-aba1-4b27-b90d-7f8c00f5f10d |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='6681626c47db4b039e3cd66cfd11197e', project.name='admin', region_name='regionOne', zone= |
| mtu | 1500 |
| name | StorageNFS |
| port_security_enabled | True |
| project_id | 6681626c47db4b039e3cd66cfd11197e |
| provider:network_type | vlan |
| provider:physical_network | tenant |
| provider:segmentation_id | 205 |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | 8b323c28-220c-4920-8641-b08f464189fd |
| tags | |
| updated_at | 2021-06-27T16:29:21Z |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
$ cat install-config.yaml
# This file is autogenerated by infrared openshift plugin
apiVersion: v1
baseDomain: "shiftstack.com"
clusterID: "27d75fbb-4398-5256-a9cc-e57726a5d520"
compute:
- name: worker
  platform:
    openstack:
      zones: ['AZ-2', 'AZ-1', 'AZ-0']
      additionalNetworkIDs: ['07f03886-aba1-4b27-b90d-7f8c00f5f10d']
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      zones: ['AZ-2', 'AZ-1', 'AZ-0']
  replicas: 3
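A quick way to confirm that the workers actually received a port on the StorageNFS network is to query Neutron directly. These commands are a sketch, not from the report: they assume an OpenStack client with access to the project, and `<worker-server-name>` is a placeholder for one of your worker instances.

```shell
# List ports on the StorageNFS network; one per worker is expected.
openstack port list --network StorageNFS -c ID -c "Fixed IP Addresses"

# Or inspect a single worker server; its addresses should include an IP
# from 172.16.32.0/24 in addition to the machine network address.
openstack server show <worker-server-name> -c addresses
```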
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2021:2438
Description of problem:
OpenShift isn't able to mount NFS Manila shares to pods.

Version-Release number of selected component (if applicable):
Client Version: 4.6.9
Server Version: 4.6.16
Kubernetes Version: v1.19.0+e49167a

How reproducible:

Steps to Reproduce:

1. Install a clean OpenShift 4.6.

2. Change the default storage class to Manila:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ oc patch storageclass csi-manila-ceph -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

3. Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  storageClassName:
  resources:
    requests:
      storage: 5Gi

$ oc get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
test-pvc   Bound    pvc-6b632043-2580-4ada-a634-ae696db4b96c   5Gi        RWX            csi-manila-ceph   20m

$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS      REASON   AGE
pvc-6b632043-2580-4ada-a634-ae696db4b96c   5Gi        RWX            Delete           Bound    default/test-pvc   csi-manila-ceph            20m

4. Create a pod using that PVC:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: [ "tail -f /dev/null" ]
    volumeMounts:
    - name: data
      mountPath: /tmp/test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc

Actual results:
The pod is stuck in ContainerCreating:

$ oc get pods
NAME       READY   STATUS              RESTARTS   AGE
test-pod   0/1     ContainerCreating   0          17m

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    17m                  default-scheduler  Successfully assigned default/test-pod to msimka-b3nb-hh84n-worker-0-gd7cw
  Warning  FailedMount  4m28s (x3 over 15m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-svpfp data]: timed out waiting for the condition
  Warning  FailedMount  2m10s (x4 over 11m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data default-token-svpfp]: timed out waiting for the condition
  Warning  FailedMount  44s (x8 over 15m)    kubelet            MountVolume.SetUp failed for volume "pvc-6b632043-2580-4ada-a634-ae696db4b96c" : rpc error: code = DeadlineExceeded desc = context deadline exceeded

Expected results:
Pod is created and volume mounted.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

$ oc logs pod/csi-nodeplugin-nfsplugin-j6sx9 -n openshift-manila-csi-driver
I0212 16:02:19.694344 1 nfs.go:49] Driver: nfs.csi.k8s.io version: 2.0.0
I0212 16:02:19.694454 1 nfs.go:99] Enabling volume access mode: SINGLE_NODE_WRITER
I0212 16:02:19.694458 1 nfs.go:99] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0212 16:02:19.694461 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0212 16:02:19.694464 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0212 16:02:19.694466 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0212 16:02:19.694470 1 nfs.go:110]
Enabling controller service capability: UNKNOWN
I0212 16:02:19.740611 1 server.go:92] Listening for connections on address: &net.UnixAddr{Name:"/plugin/csi.sock", Net:"unix"}
I0212 16:26:27.069108 1 nfs.go:49] Driver: nfs.csi.k8s.io version: 2.0.0
I0212 16:26:27.069238 1 nfs.go:99] Enabling volume access mode: SINGLE_NODE_WRITER
I0212 16:26:27.069266 1 nfs.go:99] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0212 16:26:27.069270 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0212 16:26:27.069283 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0212 16:26:27.069286 1 nfs.go:99] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0212 16:26:27.069293 1 nfs.go:110] Enabling controller service capability: UNKNOWN
I0212 16:26:27.114982 1 server.go:92] Listening for connections on address: &net.UnixAddr{Name:"/plugin/csi.sock", Net:"unix"}
E0216 09:27:41.696572 1 mount_linux.go:139] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 172.16.32.1:/volumes/_nogroup/891cb5d9-a417-43a5-9d1c-5b160a16e7be /var/lib/kubelet/pods/c36e7573-853a-44f9-99bd-0de630edb3b9/volumes/kubernetes.io~csi/pvc-6b632043-2580-4ada-a634-ae696db4b96c/mount
Output: mount.nfs: Connection timed out

$ manila list
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| ID                                   | Name                                     | Size | Share Proto | Status    | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| dc26767b-414b-4018-9295-4e55117a7815 | pvc-6b632043-2580-4ada-a634-ae696db4b96c | 5    | NFS         | available | False     | ceph            |      | nova              |
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+

$ manila share-export-location-list dc26767b-414b-4018-9295-4e55117a7815
+--------------------------------------+--------------------------------------------------------------------+-----------+
| ID                                   | Path                                                               | Preferred |
+--------------------------------------+--------------------------------------------------------------------+-----------+
| bbde2801-3923-4090-ab13-5c8e51217265 | 172.16.32.1:/volumes/_nogroup/891cb5d9-a417-43a5-9d1c-5b160a16e7be | False     |
+--------------------------------------+--------------------------------------------------------------------+-----------+
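The `mount.nfs: Connection timed out` failure above is consistent with the export IP (172.16.32.1) being unreachable from the worker: the node has no address on the Manila network. As a sanity check, a tiny shell helper (hypothetical, not part of the report) can tell whether a given node address even sits inside the Manila network's CIDR:

```shell
#!/usr/bin/env bash
# Hypothetical helper, not part of the bug report: check whether an IPv4
# address falls inside a CIDR, e.g. whether a node address sits on the
# Manila network (172.16.32.0/24). Pure bash arithmetic, no dependencies.

ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Usage: cidr_contains IP CIDR  -> prints "yes" or "no"
cidr_contains() {
  local ip=$1 net=${2%/*} bits=${2#*/} mask
  mask=$(( (0xffffffff << (32 - bits)) & 0xffffffff ))
  if [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
    echo yes
  else
    echo no
  fi
}

cidr_contains 172.16.32.1 172.16.32.0/24   # yes: the NFS export IP is on the Manila network
cidr_contains 10.0.128.15 172.16.32.0/24   # no: an address on the machine network only
```

If none of a worker's addresses fall inside 172.16.32.0/24, attaching the StorageNFS network via `additionalNetworkIDs` (or the MachineSet `networks` list) is the fix described in the comments above.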