Bug 1987036 - Manilacsi becomes degraded even though it is not available with the underlying Openstack [NEEDINFO]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: egarcia
QA Contact: Jon Uriarte
URL:
Whiteboard:
Depends On: 1987020
Blocks: 2000097
 
Reported: 2021-07-28 18:48 UTC by OpenShift BugZilla Robot
Modified: 2021-09-15 09:17 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2000097
Environment:
Last Closed: 2021-09-15 09:16:49 UTC
Target Upstream Version:
pbertera: needinfo? (egarcia)




Links
System ID Private Priority Status Summary Last Updated
Github openshift csi-driver-manila-operator pull 110 0 None None None 2021-08-30 01:01:11 UTC
Red Hat Product Errata RHBA-2021:3422 0 None None None 2021-09-15 09:17:06 UTC

Comment 9 rlobillo 2021-09-02 15:31:40 UTC
Verified on 4.7.0-0.nightly-2021-09-02-010750 on top of OSP16.1 (RHOS-16.1-RHEL-8-20210818.n.0)

After IPI installation on a restricted network using a proxy, the storage clusteroperator is available and operational:

(shiftstack) [cloud-user@installer-host ~]$ oc get clusteroperator storage -o yaml 
[...]
status:
  conditions:
  - lastTransitionTime: "2021-09-02T08:18:29Z"
    message: |-
      ManilaCSIDriverOperatorCRDegraded: All is well
      OpenStackCinderCSIDriverOperatorCRDegraded: All is well
    reason: AsExpected
    status: "False"
    type: Degraded
  - lastTransitionTime: "2021-09-02T08:22:06Z"
    message: |-
      ManilaCSIDriverOperatorCRProgressing: All is well
      OpenStackCinderCSIDriverOperatorCRProgressing: All is well
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2021-09-02T08:22:06Z"
    message: |-
      ManilaCSIDriverOperatorCRAvailable: All is well
      OpenStackCinderCSIDriverOperatorCRAvailable: All is well
    reason: AsExpected
    status: "True"
    type: Available
  - lastTransitionTime: "2021-09-02T08:19:07Z"
    message: All is well
    reason: AsExpected
    status: "True"
    type: Upgradeable
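
The condition block above can also be evaluated non-interactively. A minimal POSIX-shell sketch; the `oc get ... -o jsonpath` form in the comment is an assumed way of fetching the data, and the sample values are the ones observed above, so the sketch runs without a cluster:

```shell
# Hypothetical one-liner to fetch conditions as Type=Status pairs:
#   oc get clusteroperator storage -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
# Sample data taken from the output above:
conditions='Degraded=False
Progressing=False
Available=True
Upgradeable=True'

# The operator is considered unhealthy if it is Degraded or not Available.
healthy=yes
case "$conditions" in
  *'Degraded=True'*|*'Available=False'*) healthy=no ;;
esac
echo "storage operator healthy: $healthy"
```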

$ oc get proxy cluster -o json | jq .status
{
  "httpProxy": "http://dummy:dummy@172.16.0.3:3128/",
  "httpsProxy": "https://dummy:dummy@172.16.0.3:3130/",
  "noProxy": ".cluster.local,.svc,10.128.0.0/14,127.0.0.1,169.254.169.254,172.16.0.0/24,172.30.0.0/16,api-int.ostest.shiftstack.com,localhost"
}

$ oc rsh -n openshift-cluster-csi-drivers manila-csi-driver-operator-77458b96f8-sdwh7
sh-4.4$ env | grep HTTP       
HTTP_PROXY=http://dummy:dummy@172.16.0.3:3128/
KUBERNETES_SERVICE_PORT_HTTPS=443
HTTPS_PROXY=https://dummy:dummy@172.16.0.3:3130/
sh-4.4$ 
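
The proxy check above can be reduced to a single grep. The `oc exec` form and deployment name in the comment below are assumptions; the sample env lines are copied from the rsh session above so the sketch runs standalone:

```shell
# Live form (assumed deployment name):
#   oc exec -n openshift-cluster-csi-drivers deploy/manila-csi-driver-operator -- env
# Sample output from the rsh session above:
env_out='HTTP_PROXY=http://dummy:dummy@172.16.0.3:3128/
KUBERNETES_SERVICE_PORT_HTTPS=443
HTTPS_PROXY=https://dummy:dummy@172.16.0.3:3130/'

# Count the proxy variables; both HTTP_PROXY and HTTPS_PROXY should be set.
proxy_count=$(printf '%s\n' "$env_out" | grep -c '^HTTPS\{0,1\}_PROXY=')
echo "proxy variables set: $proxy_count"
```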

A PVC using Cinder is created successfully:

$ cat standard-csi-immediate.yaml 
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi-immediate
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate
$ cat pvc_immediate.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim-immediate
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard-csi-immediate
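
The PVC manifest above can also be generated from a small template before applying it; a sketch where the parameterization and the /tmp path are illustrative:

```shell
# Generate the PVC manifest shown above, with size and class parameterized.
size=1Gi
sc=standard-csi-immediate
cat > /tmp/pvc_immediate.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim-immediate
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: ${size}
  storageClassName: ${sc}
EOF
# Then, on a live cluster:
#   oc apply -f standard-csi-immediate.yaml -f /tmp/pvc_immediate.yaml
grep "storageClassName" /tmp/pvc_immediate.yaml
```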

$ oc get pvc/cinder-claim-immediate
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
cinder-claim-immediate   Bound    pvc-d30f6274-e8be-4aab-8451-069292b98706   1Gi        RWO            standard-csi-immediate   91s

$ openstack volume list
+--------------------------------------+------------------------------------------+-----------+------+-------------+
| ID                                   | Name                                     | Status    | Size | Attached to |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
| 4da73e37-3f03-4474-83ce-2e8986a30dd4 | pvc-d30f6274-e8be-4aab-8451-069292b98706 | available |    1 |             |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
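
The Cinder volume is named after the bound PV, so the two listings above can be cross-checked mechanically. A sketch using the values captured above; the jsonpath and `-f value` forms in the comments are assumptions about how the live data would be fetched:

```shell
# Live forms (assumed):
#   pv=$(oc get pvc cinder-claim-immediate -o jsonpath='{.spec.volumeName}')
#   vol=$(openstack volume list -f value -c Name)
# Sample values from the listings above:
pv=pvc-d30f6274-e8be-4aab-8451-069292b98706
vol=pvc-d30f6274-e8be-4aab-8451-069292b98706

if [ "$pv" = "$vol" ]; then
  echo "Cinder volume matches PV: $pv"
else
  echo "mismatch: PV=$pv volume=$vol"
fi
```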

A PVC using Manila is created successfully:

$ cat pvc_immediate_manila.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim-immediate-manila
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-manila-default

$ oc get pvc/cinder-claim-immediate-manila
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
cinder-claim-immediate-manila   Bound    pvc-45847824-45ad-4096-88fe-417b51600570   1Gi        RWO            csi-manila-default   5m42s

$ manila list
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| ID                                   | Name                                     | Size | Share Proto | Status    | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| 38e04abf-c0f5-4f1a-9e3b-a0c4df5c8e87 | pvc-45847824-45ad-4096-88fe-417b51600570 | 1    | NFS         | available | False     | default         |      | nova              |
+--------------------------------------+------------------------------------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
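
The same cross-check applies to the Manila share, which is also named after the bound PV. Sample values are taken from the listing above; the live-query forms in the comments are assumptions:

```shell
# Live forms (assumed):
#   pv=$(oc get pvc cinder-claim-immediate-manila -o jsonpath='{.spec.volumeName}')
#   manila list --columns Name,Status
# Sample values from the listing above:
pv=pvc-45847824-45ad-4096-88fe-417b51600570
share=pvc-45847824-45ad-4096-88fe-417b51600570
share_status=available

[ "$pv" = "$share" ] && [ "$share_status" = "available" ] \
  && echo "Manila share $share is $share_status"
```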


However, the IPI installation does not finish successfully due to https://bugzilla.redhat.com/show_bug.cgi?id=2000551. No workers are created, and the machine controller reports the error below:

$ oc logs -n openshift-machine-api machine-api-controllers-586cfb9578-4zljz -c machine-controller | tail -2
Error getting a new instance service from the machine (machine/actuator.go 467): Failed to authenticate provider client: Get "https://10.46.43.75:13000/": dial tcp 10.46.43.75:13000: connect: no route to host
E0902 15:25:58.675963       1 controller.go:237] controller "msg"="Reconciler error" "error"="Error checking if instance exists (machine/actuator.go 346): \nError getting a new instance service from the machine (machine/actuator.go 467): Failed to authenticate provider client: Get \"https://10.46.43.75:13000/\": dial tcp 10.46.43.75:13000: connect: no route to host" "controller"="machine_controller" "name"="ostest-qrvrl-master-2" "namespace"="openshift-machine-api"

Comment 13 errata-xmlrpc 2021-09-15 09:16:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.7.30 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3422

