Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1784063

Summary: Operator fails in noobaa installation
Product: OpenShift Container Platform
Reporter: Sergio <sregidor>
Component: Migration Tooling
Assignee: John Matthews <jmatthew>
Status: CLOSED WONTFIX
QA Contact: Sergio <sregidor>
Severity: high
Docs Contact:
Priority: high
Version: 4.2.0
CC: chezhang, rpattath, xjiang
Target Milestone: ---
Target Release: 4.2.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1784095 (view as bug list)
Environment:
Last Closed: 2019-12-17 12:57:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1784095
Bug Blocks:

Description Sergio 2019-12-16 15:33:23 UTC
Description of problem:
When "noobaa: true" is configured in the MigrationController, the operator fails if the "s3" LoadBalancer service has no "hostname" field in its status (on Azure the load balancer ingress reports only an "ip").



Version-Release number of selected component (if applicable):
CLUSTER (Azure) OCP4:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-12-15-230238   True        False         3h4m    Cluster version is 4.2.0-0.nightly-2019-12-15-230238

Operator: 1.0.1 osbs images
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-rhel7-operator@sha256:f41484cbe7dbc4e4522fbcd63adc0dc926d463e3516c82ec4360a91441f84fd4

Controller:  1.0.1 osbs images
    image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-controller-rhel8@sha256:e2c3cbb61157605d8246496f77c76b9b2950eb951bd0a63d4f8e3ae6f1884c2c


How reproducible:
Always

Steps to Reproduce:
1. Install an OCP 4.2 cluster on Azure
2. Install the CAM operator (and the OCS 4 operator) from the 1.0.1 osbs images
3. Create a MigrationController with noobaa: true
4. Verify that an external load balancer is being used and that the "s3" service status has no "hostname" attribute
 
  oc get svc s3 -o yaml -n openshift-migration


Actual results:
There is an error in the CAM operator logs:

 oc get pods -l app=migration-operator -o name  | xargs oc logs -c ansible

task path: /opt/ansible/roles/migrationcontroller/tasks/mcg.yml:85
ok: [localhost] => {"attempts": 1, "changed": false, "resources": [{"apiVersion": "v1", "kind": "Service", "metadata": {"annotations": {"service.alpha.openshift.io/serving-cert-signed-by": "openshift-service-serving-signer@1576494604", "service.beta.openshift.io/serving-cert-secret-name": "noobaa-s3-serving-cert", "service.beta.openshift.io/serving-cert-signed-by": "openshift-service-serving-signer@1576494604"}, "creationTimestamp": "2019-12-16T13:52:42Z", "labels": {"app": "noobaa"}, "name": "s3", "namespace": "openshift-migration", "ownerReferences": [{"apiVersion": "noobaa.io/v1alpha1", "blockOwnerDeletion": true, "controller": true, "kind": "NooBaa", "name": "noobaa", "uid": "542f7c51-200b-11ea-b8c9-000d3aa44358"}], "resourceVersion": "90036", "selfLink": "/api/v1/namespaces/openshift-migration/services/s3", "uid": "546a1f38-200b-11ea-abf2-000d3a9fda97"}, "spec": {"clusterIP": "172.30.177.161", "externalTrafficPolicy": "Cluster", "ports": [{"name": "s3", "nodePort": 31142, "port": 80, "protocol": "TCP", "targetPort": 6001}, {"name": "s3-https", "nodePort": 32539, "port": 443, "protocol": "TCP", "targetPort": 6443}], "selector": {"noobaa-s3": "noobaa"}, "sessionAffinity": "None", "type": "LoadBalancer"}, "status": {"loadBalancer": {"ingress": [{"ip": "52.238.254.172"}]}}}]}

TASK [migrationcontroller : set_fact] ******************************************
task path: /opt/ansible/roles/migrationcontroller/tasks/mcg.yml:99
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'hostname'\n\nThe error appears to be in '/opt/ansible/roles/migrationcontroller/tasks/mcg.yml': line 99, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - set_fact:\n      ^ here\n"}

PLAY RECAP *********************************************************************



Expected results:
There should be no error in the operator's logs, and the NooBaa bucket should be added without problems.

Additional info:

Ansible is looking for a "hostname" attribute in the service's status, but the status contains only an "ip":

oc get svc s3 -o yaml -n openshift-migration

"status": {"loadBalancer": {"ingress": [{"ip": "52.238.254.172"}]}}
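A minimal sketch (not the actual operator code) of the kind of fallback the failing set_fact in mcg.yml would need: read the endpoint from "hostname" when the cloud provider supplies one, and fall back to "ip" otherwise. The s3_endpoint helper and the azure_svc sample are hypothetical, with the sample status taken from the service output above.

```python
def s3_endpoint(service):
    """Return the external endpoint of a LoadBalancer service.

    Azure populates only "ip" in status.loadBalancer.ingress, while other
    providers populate "hostname"; reading "hostname" unconditionally is
    what makes the set_fact task fail with an undefined-variable error.
    """
    ingress = service.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    if not ingress:
        return None
    # Prefer the hostname when present, otherwise fall back to the IP.
    return ingress[0].get("hostname") or ingress[0].get("ip")


# Service status as reported by "oc get svc s3 -o yaml" on Azure:
azure_svc = {"status": {"loadBalancer": {"ingress": [{"ip": "52.238.254.172"}]}}}
print(s3_endpoint(azure_svc))  # → 52.238.254.172
```

In Jinja2 terms the same guard could be written inline, e.g. ingress.0.hostname | default(ingress.0.ip), rather than dereferencing the missing attribute directly.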

Comment 1 John Matthews 2019-12-17 12:57:09 UTC
As we are removing the NooBaa dependency in 1.0.1, I am closing this as WONTFIX; we will track fixing the issue in bug 1784095.