Bug 1734492 - Failure in migration plan registry on the source cluster during cakephp-mysql app - NFS copy
Summary: Failure in migration plan registry on the source cluster during cakephp-mysql...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Migration Tooling
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.3.0
Assignee: Dylan Murray
QA Contact: Roshni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-30 17:11 UTC by Roshni
Modified: 2020-02-06 20:20 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-06 20:20:44 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2020:0440 None None None 2020-02-06 20:20:55 UTC

Comment 2 Roshni 2019-08-06 23:47:56 UTC
I was able to get past this issue during one of my testing attempts using 
https://quay.io/repository/ocpmigrate/mig-operator/manifest/sha256:1df62f5ce345f56520a8d0b9795fa9bc55fcac9c04a029f6ddf4da638b055a32 
https://quay.io/repository/ocpmigrate/mig-ui/manifest/sha256:ba3024efc9f61f0392780659a007989deca3d0bcdae65f2d38499079968014d4
and 4.2.0-0.nightly-2019-08-06-072822. 

But the mysql pod failed after migration. Let me know if I should file a separate bug.

root@ip-172-31-61-189: ~/go/src/github.com/fusor/mig-controller # oc logs pods/mysql-1-882f7 -n rpattath
=> sourcing 20-validate-variables.sh ...
=> sourcing 25-validate-replication-variables.sh ...
=> sourcing 30-base-config.sh ...
---> 23:39:09     Processing basic MySQL configuration files ...
=> sourcing 60-replication-config.sh ...
=> sourcing 70-s2i-config.sh ...
---> 23:39:09     Processing additional arbitrary  MySQL configuration provided by s2i ...
=> sourcing 40-paas.cnf ...
=> sourcing 50-my-tuning.cnf ...
---> 23:39:09     Starting MySQL server with disabled networking ...
---> 23:39:09     Waiting for MySQL to start ...
2019-08-06T23:39:10.198791Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2019-08-06T23:39:10.204799Z 0 [System] [MY-010116] [Server] /opt/rh/rh-mysql80/root/usr/libexec/mysqld (mysqld 8.0.13) starting as process 29
2019-08-06T23:39:10.399600Z 1 [ERROR] [MY-013090] [InnoDB] Unsupported redo log format (0). The redo log was created before MySQL 5.7.9
2019-08-06T23:39:10.399641Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
2019-08-06T23:39:10.900054Z 1 [ERROR] [MY-011013] [Server] Failed to initialize DD Storage Engine.
2019-08-06T23:39:10.901486Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2019-08-06T23:39:10.901508Z 0 [ERROR] [MY-010119] [Server] Aborting
2019-08-06T23:39:10.902688Z 0 [System] [MY-010910] [Server] /opt/rh/rh-mysql80/root/usr/libexec/mysqld: Shutdown complete (mysqld 8.0.13)  Source distribution.
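
The InnoDB errors above ("Unsupported redo log format (0) ... created before MySQL 5.7.9") indicate that the copied data directory contains redo log files the 8.0 server cannot replay — typically because the datadir was file-copied from an older MySQL, or while the source server was still running. A minimal recovery sketch (the function name and default path are assumptions for illustration, not part of any Red Hat tooling):

```shell
# Hypothetical recovery helper: move stale InnoDB redo logs aside so that
# mysqld 8.0 recreates fresh ones on the next start. Only safe when the
# source MySQL was shut down cleanly before the copy (otherwise the redo
# logs may hold committed-but-unflushed changes).
backup_redo_logs() {
  datadir="${1:-/var/lib/mysql/data}"   # default matches the RHSCL mysql image layout
  for f in "$datadir"/ib_logfile*; do
    [ -e "$f" ] || continue             # glob matched nothing: no redo logs, done
    mv "$f" "$f.bak"                    # keep a copy instead of deleting outright
  done
}
```

Run against the PV mount (or inside the pod) and then restart the pod so mysqld regenerates ib_logfile0/1. The cleaner fix is to quiesce or scale down the source database before the NFS copy so the datadir is copied in a consistent state.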

Comment 3 Dylan Murray 2019-08-07 11:23:49 UTC
This might be related to the recent staging errors that have been documented upstream. I have submitted a couple of fixes and tested the cakephp-mysql application: after bouncing the page count and running a migration, there were no errors in the registry pod and the application was active with no errors on the OCP4 side.

Scott, please retest this with: 
https://github.com/fusor/mig-controller/pull/248
https://github.com/fusor/mig-controller/pull/246

merged, and tell me whether you can still reproduce this bug. Otherwise I can take over the bug and we can move it to QA.

Comment 4 Dylan Murray 2019-12-16 16:13:18 UTC
I have tested the migration of cakephp-mysql app extensively and have not been able to reproduce this. There has been a significant amount of work done on the stage workflow so please retest this bug and confirm whether this is still valid.

I am clearing NEEDINFO since the assignee has changed to me. Moving to ON_QA.

Comment 6 Roshni 2020-01-21 21:44:50 UTC
Unable to reproduce the issue using the following images:

# oc describe pod/migration-operator-5f88d89f44-7jnnx -n openshift-migration | grep Image
                    containerImage:
    Image:         registry.stage.redhat.io/rhcam-1-1/openshift-migration-rhel7-operator@sha256:c45fcaa551bbcab26441e7701e7e70f6784a19cb64b17d0138814ffe2b7d8fda

[root@rpattath nfs-client]# oc describe pod/migration-controller-5964fcfc9b-z2fsq -n openshift-migration | grep Image
    Image:         registry.stage.redhat.io/rhcam-1-1/openshift-migration-controller-rhel8@sha256:ec015e65da93e800a25e522473793c277cd0d86436ab4637d73e2673a5f0083d

Comment 8 errata-xmlrpc 2020-02-06 20:20:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0440

