Bug 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Migration Toolkit for Containers
Classification: Red Hat
Component: Controller
Version: 1.5.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 1.5.2
Assignee: Scott Seago
QA Contact: Xin jiang
Docs Contact: Avital Pinnick
URL:
Whiteboard:
Depends On:
Blocks: 2007375 2008946
 
Reported: 2021-09-22 14:07 UTC by John Matthews
Modified: 2021-11-29 14:32 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2007375 2008946
Environment:
Last Closed: 2021-11-29 14:32:35 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Github konveyor/mig-controller pull 1210 (open): Bug 2006842: requeue cluster reconcile if registry status changes (last updated 2021-09-28 16:24:42 UTC)
Github konveyor/mig-controller pull 1211 (open): Bug 2006842: requeue cluster reconcile if registry status changes (last updated 2021-09-29 21:34:37 UTC)
Red Hat Product Errata RHSA-2021:4848 (last updated 2021-11-29 14:32:44 UTC)

Description John Matthews 2021-09-22 14:07:07 UTC
Description of problem:

As part of working on the MTC 1.5 lab updates, we ran into a situation where the source OCP 3 cluster was shut down on us. When this happened, the source MigCluster went into an unready state, as expected.

We brought the source cluster back up and saw the MigCluster change state to show that it could now communicate with the source API server, yet the registry remained marked as unreachable.

We waited ~15+ minutes and didn't see the MigCluster change. Using the UI, we clicked Check Connection multiple times; it would look like it was trying to connect and then come back saying it had failed due to the registry.

Eventually we got it working with the following steps (a scripted equivalent is sketched after the list):
1) From the UI, update the MigCluster and delete the registry URL
2) Save the MigCluster
3) See the MigCluster go healthy
4) Edit the MigCluster, enter the same registry URL back in
5) Save the MigCluster, see it go healthy
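
For reference, the same workaround can presumably be driven from the command line instead of the UI. This is only a sketch: it assumes the default openshift-migration namespace, a source MigCluster named ocp3, and that the exposed registry route lives in the spec.exposedRegistryPath field (check your actual CR before patching):

# clear the registry route and watch the cluster go Ready
oc -n openshift-migration patch migcluster ocp3 --type=merge -p '{"spec":{"exposedRegistryPath":null}}'
oc -n openshift-migration get migcluster ocp3 -o jsonpath='{.status.conditions}{"\n"}'

# put the original route back and confirm it stays Ready
oc -n openshift-migration patch migcluster ocp3 --type=merge -p '{"spec":{"exposedRegistryPath":"<registry-route>"}}'
oc -n openshift-migration get migcluster ocp3 -o jsonpath='{.status.conditions}{"\n"}'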

It looks like we may have a bug somewhere in detecting when a source cluster goes away and comes back; the logic for the docker-registry readiness check is not correct.
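
The linked mig-controller PRs (1210, 1211) are titled "requeue cluster reconcile if registry status changes", which matches this symptom. As a rough, hypothetical illustration of that idea in Go (this is not the actual mig-controller code; the function names and the fixed intervals below are invented for the sketch), the reconcile path would re-probe the exposed registry route and requeue immediately whenever the probe result disagrees with the condition currently recorded on the MigCluster, instead of leaving a stale InvalidRegistryRoute condition in place:

package main

// Hypothetical sketch only, not the mig-controller implementation.

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// registryReachable probes the exposed registry's /v2/ endpoint. Any HTTP
// response (even 401 Unauthorized) means the route is serving; a transport
// error such as "connection refused" means it is not.
func registryReachable(registryURL string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped here purely to keep the sketch short.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(registryURL + "/v2/")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return true
}

// requeueDelay decides how soon the cluster should be reconciled again.
// If the live probe disagrees with what the MigCluster condition currently
// records, reconcile immediately so the condition gets rewritten; otherwise
// fall back to a slow periodic re-check.
func requeueDelay(conditionSaysUnreachable, probeSaysUnreachable bool) time.Duration {
	if conditionSaysUnreachable != probeSaysUnreachable {
		return 0 // registry status changed: requeue right away
	}
	return 2 * time.Minute // unchanged: routine re-check
}

func main() {
	// Pretend the CR still carries a stale InvalidRegistryRoute condition.
	recordedUnreachable := true
	probeUnreachable := !registryReachable("https://docker-registry-default.apps.example.com")
	fmt.Printf("probe unreachable=%v, requeue in %s\n",
		probeUnreachable, requeueDelay(recordedUnreachable, probeUnreachable))
}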




Version-Release number of selected component (if applicable):

MTC 1.5

How reproducible:
Unsure, but I would suspect it is reproducible

Steps to Reproduce:
1. Provision a cluster pair via our lab system (opentlc)
2. Set up and perform a migration of a sample app to prove things are working correctly; ensure you add the source docker-registry so you can test a Direct Image Migration workflow
3. From opentlc, temporarily shut down the source cluster
4. See the MigCluster go to unready
5. Turn the source cluster back on and observe the MigCluster

Comment 1 Scott Seago 2021-09-28 13:18:58 UTC
Do we have the actual conditions on the MigCluster status recorded anywhere:
1) after the cluster went down
2) after the cluster came back up
3) after running "check connection"
4) after removing registry path
5) after restoring it

That would probably help determine whether we're dealing with failure to validate or whether there's something wrong with the validation logic itself.
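
For anyone recreating this, one quick way to capture those snapshots is to dump the status conditions after each of the five points above, for example (assuming the default openshift-migration namespace and a source MigCluster named ocp3, as in the lab):

oc -n openshift-migration get migcluster ocp3 -o jsonpath='{.status.conditions}{"\n"}'

Pasting that output for each step should make it clear whether InvalidRegistryRoute is ever re-evaluated once the source cluster is back.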

Comment 2 John Matthews 2021-09-28 13:31:07 UTC
All I have is the below saved:

From our controller logs
{"level":"info","ts":1632167779.3204465,"logger":"cluster","msg":"CR","migCluster":"ocp3","conditions":{"conditions":[{"type":"InvalidRegistryRoute","status":"True","reason":"RouteTestFailed","category":"Critical","message":"Exposed registry route is invalid, Error : \"Get \\\"https://docker-registry-default.apps.94v7r.sandbox1352.xxx.com/v2/\\\": dial tcp 54.197.182.103:443: connect: connection refused\"","lastTransitionTime":"2021-09-20T19:56:19Z"}]}}

Connection failed. Message: "Exposed registry route is invalid, Error : "Get \"https://docker-registry-default.apps.94v7r.sandbox1352.xxx.com/v2/\": dial tcp 54.197.182.103:443: connect: connection refused"", Reason: "RouteTestFailed"


I did have the above saved in a message, but I think this was from when the cluster was down.
Sorry, I don't have more info gathered from when the cluster was brought back up, or the exact messages.

When this is ready to be worked, we will need someone to recreate the issue.

Comment 14 errata-xmlrpc 2021-11-29 14:32:35 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Migration Toolkit for Containers (MTC) 1.5.2 security update and bugfix advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4848

