Bug 1784095 - Operator fails in noobaa installation
Summary: Operator fails in noobaa installation
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Migration Toolkit for Containers
Classification: Red Hat
Component: General
Version: 1.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 1.4.z
Assignee: Jason Montleon
QA Contact: Xin jiang
Docs Contact: Avital Pinnick
URL:
Whiteboard:
Depends On:
Blocks: 1784063
 
Reported: 2019-12-16 17:24 UTC by Sergio
Modified: 2021-04-07 20:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1784063
Environment:
Last Closed: 2021-04-07 20:50:29 UTC
Target Upstream Version:
Embargoed:



Description Sergio 2019-12-16 17:24:38 UTC
+++ This bug was initially created as a clone of Bug #1784063 +++

Description of problem:
When "noobaa: true" is configured in the migration controller, the operator fails if "s3" service has no "hostname" field present in its status because of the LoadBalancer.



Version-Release number of selected component (if applicable):
CLUSTER (Azure) OCP4:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-12-15-230238   True        False         3h4m    Cluster version is 4.2.0-0.nightly-2019-12-15-230238

Operator: 1.0.1 osbs images
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-rhel7-operator@sha256:f41484cbe7dbc4e4522fbcd63adc0dc926d463e3516c82ec4360a91441f84fd4

Controller:  1.0.1 osbs images
    image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-controller-rhel8@sha256:e2c3cbb61157605d8246496f77c76b9b2950eb951bd0a63d4f8e3ae6f1884c2c


How reproducible:
Always

Steps to Reproduce:
1. Install an OCP 4.2 cluster in Azure
2. Install the CAM operator (and the OCS 4 operator) from 1.0.1 osbs.
3. Create a MigrationController with noobaa: true (see the sketch after these steps)
4. Verify that an external load balancer is being used, with no "hostname" attribute in the "s3" service status:
 
  oc get svc s3 -o yaml -n openshift-migration
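A minimal MigrationController sketch for step 3 (spec fields other than "noobaa" are assumptions based on typical mig-operator examples, not taken from this bug):

  apiVersion: migration.openshift.io/v1alpha1
  kind: MigrationController
  metadata:
    name: migration-controller
    namespace: openshift-migration
  spec:
    migration_controller: true
    migration_ui: true
    noobaa: true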


Actual results:
There is an error in the CAM operator:

 oc get pods -l app=migration-operator -o name  | xargs oc logs -c ansible

task path: /opt/ansible/roles/migrationcontroller/tasks/mcg.yml:85
ok: [localhost] => {"attempts": 1, "changed": false, "resources": [{"apiVersion": "v1", "kind": "Service", "metadata": {"annotations": {"service.alpha.openshift.io/serving-cert-signed-by": "openshift-service-serving-signer@1576494604", "service.beta.openshift.io/serving-cert-secret-name": "noobaa-s3-serving-cert", "service.beta.openshift.io/serving-cert-signed-by": "openshift-service-serving-signer@1576494604"}, "creationTimestamp": "2019-12-16T13:52:42Z", "labels": {"app": "noobaa"}, "name": "s3", "namespace": "openshift-migration", "ownerReferences": [{"apiVersion": "noobaa.io/v1alpha1", "blockOwnerDeletion": true, "controller": true, "kind": "NooBaa", "name": "noobaa", "uid": "542f7c51-200b-11ea-b8c9-000d3aa44358"}], "resourceVersion": "90036", "selfLink": "/api/v1/namespaces/openshift-migration/services/s3", "uid": "546a1f38-200b-11ea-abf2-000d3a9fda97"}, "spec": {"clusterIP": "172.30.177.161", "externalTrafficPolicy": "Cluster", "ports": [{"name": "s3", "nodePort": 31142, "port": 80, "protocol": "TCP", "targetPort": 6001}, {"name": "s3-https", "nodePort": 32539, "port": 443, "protocol": "TCP", "targetPort": 6443}], "selector": {"noobaa-s3": "noobaa"}, "sessionAffinity": "None", "type": "LoadBalancer"}, "status": {"loadBalancer": {"ingress": [{"ip": "52.238.254.172"}]}}}]}

TASK [migrationcontroller : set_fact] ******************************************
task path: /opt/ansible/roles/migrationcontroller/tasks/mcg.yml:99
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'hostname'\n\nThe error appears to be in '/opt/ansible/roles/migrationcontroller/tasks/mcg.yml': line 99, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - set_fact:\n      ^ here\n"}

PLAY RECAP *********************************************************************



Expected results:
There should be no errors in the operator's logs, and the NooBaa bucket should be added without problems.

Additional info:

Ansible is looking for a "hostname" field in the service's loadBalancer status, but on Azure the ingress entry contains only an "ip" field:

oc get svc s3 -o yaml -n openshift-migration

"status": {"loadBalancer": {"ingress": [{"ip": "52.238.254.172"}]}}}]}

Comment 1 Pranav Gaikwad 2020-04-01 18:36:06 UTC
Fixed with : https://github.com/konveyor/mig-operator/pull/194

Comment 6 John Matthews 2020-05-18 12:54:19 UTC
Aligning to CAM 1.3.0 when we plan to integrate with deploying MCG

Comment 11 Erik Nelson 2021-04-07 20:50:29 UTC
Closing as stale, please re-open if issue persists.

