Created attachment 1801054 [details]
operator pod log

MTC cannot be deployed in OCP 3.9

ack: xjiang

Description of problem:
When we try to deploy MTC 1.5.0 in a 3.9 cluster, the operator reports an error and fails. The regular MTC pods are not created.

Version-Release number of selected component (if applicable):
MTC 1.5.0
image: quay-enterprise-quay-enterprise.apps.cam-tgt-21090.qe.devcluster.openshift.com/admin/openshift-migration-rhel7-operator:v1.5.0-22

How reproducible:
Always

Steps to Reproduce:
1. Deploy MTC in a 3.9 cluster as normal

Xjiang:
1. Deploy the MTC operator on OCP 3.9; the operator pod goes into Running status
$ oc create -f operator.yml
2. Create the MigrationController instance
$ oc create -f controller-3.yml

Actual results:
No controller pod (or any other MTC pod) is created, and the operator pod logs the following error (job "6129484611666145821", name "migration-controller", namespace "openshift-migration", error "exit status 2"):

ansible-playbook 2.9.22
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/usr/share/ansible/openshift']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
Using /etc/ansible/ansible.cfg as config file
statically imported: /opt/ansible/roles/migrationcontroller/tasks/mcg.yml
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'awx_display', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: 78e700bf82ef44b08eeecf2d81b468e9 *************************************
1 plays in /tmp/ansible-operator/runner/migration.openshift.io/v1alpha1/MigrationController/openshift-migration/migration-controller/project/78e700bf82ef44b08eeecf2d81b468e9

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'getpwuid(): uid not found: 1000130000'
fatal: [localhost]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}

PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

stacktrace:
github.com/go-logr/zapr.(*zapLogger).Error
	operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:132
github.com/operator-framework/operator-sdk/internal/ansible/runner.(*runner).Run.func1
	operator-sdk/internal/ansible/runner/runner.go:263

--------------------------- Ansible Task Status Event StdOut -----------------

PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Expected results:
The deployment should complete without errors, all MTC pods should be created successfully, and the operator should report no errors.

Additional info:
See the operator pod log in the attachment.
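The root cause is visible in the Gathering Facts failure above: OpenShift starts the operator container with an arbitrary, SCC-assigned high UID (1000130000 here) that has no entry in the image's /etc/passwd, so Python's pwd.getpwuid() raises KeyError and Ansible's fact gathering aborts before any task runs. A minimal sketch of that failure mode (not part of the bug report; the lookup_user helper is hypothetical):

```python
import pwd

def lookup_user(uid):
    """Return the passwd entry for uid, or None if the UID is unknown.

    pwd.getpwuid() raises KeyError for a UID that has no /etc/passwd
    entry -- exactly what happens when OpenShift runs the operator
    container as a random high UID such as 1000130000.
    """
    try:
        return pwd.getpwuid(uid)
    except KeyError:
        return None

# UID 0 (root) exists in any standard /etc/passwd, so the lookup succeeds;
# an OpenShift-style random high UID normally has no passwd entry.
print(lookup_user(0))
print(lookup_user(1000130000))
```

A common mitigation in images that must run as an arbitrary UID is an entrypoint that appends the runtime UID to /etc/passwd (or uses nss_wrapper) before the main process starts, so getpwuid() lookups succeed.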
Verified with MTC 1.5.0.

images:
"registry.redhat.io/rhmtc/openshift-migration-log-reader-rhel8@sha256:dfb8b161286bcb7a9af516308cd980cf9ca83614a3e2bfe5adc86486d04f26d3",
"registry.redhat.io/rhmtc/openshift-migration-rhel7-operator@sha256:0c90d79f4e08e7b1b7ba0cb7fa6148b77b1c698a88d4c8bff7c228f26300e57a",
"registry.redhat.io/rhmtc/openshift-migration-velero-plugin-for-aws-rhel8@sha256:d61b3b716cbb0a292991f162d4746f23501a28a3bc8a0d74f95696be511df565",
"registry.redhat.io/rhmtc/openshift-migration-velero-plugin-for-gcp-rhel8@sha256:d53e0dde26230682cb274e06a3e09740b7652ece9d4bd98651ea5c0580406dbe",
"registry.redhat.io/rhmtc/openshift-migration-velero-plugin-for-microsoft-azure-rhel8@sha256:d1b910d3b635ca2ac2a488158188caa81112691764141e442ee0c7aee7372073",
"registry.redhat.io/rhmtc/openshift-migration-velero-rhel8@sha256:97049365ed8c0dbe50d9a09cc16adaef9a13054260c9d4fb986af2042d6b09c2",
"registry.redhat.io/rhmtc/openshift-velero-plugin-rhel8@sha256:54377da91cf8227ffeea7f86a14f690b68da66dbf06665c319412eaa5272af27"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) image release advisory 1.5.0), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:2929