Description of problem:

While working on the MTC 1.5 lab updates we ran into a situation where the source OCP 3 cluster was shut down on us. When this happened, the source MigCluster went into an unready state as expected. We brought the source cluster back up and saw the MigCluster change state to show that it could once again communicate with the source API server, yet the registry was still reported as unreachable. We waited ~15+ minutes and didn't see the MigCluster change. Using the UI, we clicked Check Connection multiple times; it would appear to try to connect and then come back saying it failed because of the registry.

Eventually we got it working by:
1) From the UI, update the MigCluster and delete the registry URL
2) Save the MigCluster
3) See the MigCluster go healthy
4) Edit the MigCluster and enter the same registry URL back in
5) Save the MigCluster and see it go healthy

(A sketch of performing the same workaround through the API is included after the reproduction steps below.)

It looks like we may have a bug in detecting when a source cluster goes away and comes back; the readiness logic for the docker-registry route does not recover correctly.

Version-Release number of selected component (if applicable): MTC 1.5

How reproducible: Unsure, but I would suspect reproducible

Steps to Reproduce:
1. Provision a cluster pair via our lab system (opentlc)
2. Set up and perform a migration of a sample app to prove things are working correctly; ensure you add the source docker-registry route so you can test a Direct Image Migration workflow
3. From opentlc, temporarily shut down the source cluster
4. See the MigCluster go unready
5. Turn the source cluster back on and observe the MigCluster
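For reference, here is a minimal Go sketch of the workaround driven through the API instead of the UI: clear the exposed registry path on the source MigCluster, let it reconcile to Ready, then put the same value back. The field name spec.exposedRegistryPath, the migration.openshift.io/v1alpha1 group/version, the "openshift-migration" namespace, and the "ocp3" cluster name are assumptions based on a typical MTC 1.5 install, not confirmed from this report.

// Minimal sketch of the workaround above done against the API directly.
// Field names, GVR, namespace, and cluster name are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mc := client.Resource(schema.GroupVersionResource{
		Group: "migration.openshift.io", Version: "v1alpha1", Resource: "migclusters",
	}).Namespace("openshift-migration")

	// 1) Read the source MigCluster and remember the registry route.
	obj, err := mc.Get(context.TODO(), "ocp3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	registryPath, _, _ := unstructured.NestedString(obj.Object, "spec", "exposedRegistryPath")

	// 2) Drop the registry path and save; the controller re-validates and the
	//    MigCluster should go Ready once the API server check passes.
	unstructured.RemoveNestedField(obj.Object, "spec", "exposedRegistryPath")
	if _, err := mc.Update(context.TODO(), obj, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	time.Sleep(30 * time.Second) // give the controller a reconcile cycle

	// 3) Re-read (resourceVersion changed) and restore the original registry path.
	obj, err = mc.Get(context.TODO(), "ocp3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if err := unstructured.SetNestedField(obj.Object, registryPath, "spec", "exposedRegistryPath"); err != nil {
		panic(err)
	}
	if _, err := mc.Update(context.TODO(), obj, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("restored exposedRegistryPath:", registryPath)
}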
Do we have recorded anywhere the actual conditions on the MigCluster status:
1) after the cluster went down
2) after the cluster came back up
3) after running "Check Connection"
4) after removing the registry path
5) after restoring it
That would probably help determine whether we're dealing with a failure to validate, or whether there's something wrong with the validation logic itself.
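To make it easy to capture those five data points during a re-run, something like the following Go sketch could poll the MigCluster and dump its status.conditions with timestamps. As above, the GVR, namespace, and cluster name are assumptions for a typical MTC 1.5 install.

// Hypothetical helper: periodically log the MigCluster status.conditions so
// transitions can be correlated with the reproduction steps.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mc := client.Resource(schema.GroupVersionResource{
		Group: "migration.openshift.io", Version: "v1alpha1", Resource: "migclusters",
	}).Namespace("openshift-migration")

	// Poll every 30s; print a timestamped JSON dump of the conditions.
	for {
		obj, err := mc.Get(context.TODO(), "ocp3", metav1.GetOptions{})
		if err != nil {
			fmt.Println(time.Now().Format(time.RFC3339), "get failed:", err)
		} else {
			conditions, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
			out, _ := json.Marshal(conditions)
			fmt.Println(time.Now().Format(time.RFC3339), string(out))
		}
		time.Sleep(30 * time.Second)
	}
}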
All I have saved is the below, from our controller logs:

{"level":"info","ts":1632167779.3204465,"logger":"cluster","msg":"CR","migCluster":"ocp3","conditions":{"conditions":[{"type":"InvalidRegistryRoute","status":"True","reason":"RouteTestFailed","category":"Critical","message":"Exposed registry route is invalid, Error : \"Get \\\"https://docker-registry-default.apps.94v7r.sandbox1352.xxx.com/v2/\\\": dial tcp 54.197.182.103:443: connect: connection refused\"","lastTransitionTime":"2021-09-20T19:56:19Z"}]}}

And from the UI:

Connection failed. Message: "Exposed registry route is invalid, Error : "Get \"https://docker-registry-default.apps.94v7r.sandbox1352.xxx.com/v2/\": dial tcp 54.197.182.103:443: connect: connection refused"", Reason: "RouteTestFailed"

I did have the above saved in a message, but I think it was captured while the cluster was still down. Sorry I don't have more information from when the cluster was brought back up, or the exact messages at that point. When this is ready to be worked, we will need someone to recreate the issue and gather the conditions at each step.
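For context on what produces that message: the condition suggests the controller probes the exposed registry route over HTTPS and surfaces any connection error as InvalidRegistryRoute/RouteTestFailed. The snippet below is a rough sketch of that kind of probe, not the actual mig-controller implementation, which may differ in auth handling, TLS settings, and timeouts.

// Rough sketch of a registry route probe that would produce the
// "dial tcp ...: connect: connection refused" error seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func testRegistryRoute(route string) error {
	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// Lab clusters often use self-signed certs on the registry route.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + route + "/v2/")
	if err != nil {
		// A down cluster shows up here as "connect: connection refused"; a
		// controller would report this as an InvalidRegistryRoute condition.
		return fmt.Errorf("exposed registry route is invalid: %w", err)
	}
	defer resp.Body.Close()
	// Any HTTP response (200, 401, ...) proves the route is reachable again,
	// so the condition should clear once the cluster is back.
	return nil
}

func main() {
	if err := testRegistryRoute("docker-registry-default.apps.94v7r.sandbox1352.xxx.com"); err != nil {
		fmt.Println("connection check failed:", err)
		return
	}
	fmt.Println("registry route reachable")
}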
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Migration Toolkit for Containers (MTC) 1.5.2 security update and bugfix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:4848