Bug 1837562 - hooks don't work on 3.7 because configmaps in 3.7 don't support the binaryData field
Summary: hooks don't work on 3.7 because configmaps in 3.7 don't support the binaryData field
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Migration Tooling
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jason Montleon
QA Contact: Xin jiang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-05-19 16:25 UTC by Xin jiang
Modified: 2020-09-30 18:42 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 18:42:29 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:4148 0 None None None 2020-09-30 18:42:53 UTC

Description Xin jiang 2020-05-19 16:25:15 UTC
Description of problem:
ConfigMaps in OCP 3.7 don't support the binaryData field, so hooks don't currently work on 3.7. As a result, a MigMigration that includes hooks gets stuck in Running status.
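For illustration, this is the ConfigMap shape the hook mechanism relies on (a minimal sketch; the name and payload are hypothetical). The binaryData field was only added in Kubernetes 1.10, well after the Kubernetes 1.7 base of OCP 3.7:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-hook-playbook   # hypothetical name
binaryData:
  # base64-encoded playbook ("- hosts: localhost", truncated); a 3.7 API
  # server does not recognize this field, so the playbook content is lost
  playbook.yml: LSBob3N0czogbG9jYWxob3N0...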

Version-Release number of selected component (if applicable):
CAM 1.2

How reproducible:
Always

Steps to Reproduce:
1. Create service account in target cluster

$ oc new-project robot-target
$ oc create sa robot-target -n robot-target
$ oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:robot-target:robot-target

2. Create a service account in the source cluster

$ oc new-project robot-source
$ oc create sa robot-source -n robot-source
$ oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:robot-source:robot-source

3. In source cluster deploy an application
$ oc new-project ocp-29918-hooks
$ oc new-app cakephp-mysql-persistent

4. Create a migration plan to migrate ocp-29918-hooks namespace with default values.

Continue to step 5 when you reach the "Hooks" screen to add the hooks.

5. Add PreBackup Hook for source cluster
Add this playbook in the playbook screen when creating the migration plan

Name: sourceprebackuphook
Ansible Playbook:
- hosts: localhost
  tasks:
  - name: Get Pods
    k8s_facts:
      kind: Pod
      namespace: "ocp-29918-hooks"
      label_selectors: "name=mysql"
    register: pod
    retries: 20
    until: pod.resources | length > 0

  - debug: msg={{ item.metadata.name }}
    loop: "{{ pod.resources }}"
    loop_control:
      label: "{{ item.metadata.name }}"

  - name: Get all namespaces using oc binary
    shell: "oc get namespaces"
    register: ns
    until: ns.rc == 0

  - debug: msg={{ item }}
    loop: "{{ ns.stdout_lines }}"
    loop_control:
      label: "{{ item }}"



Ansible Runtime Image: Leave the default value
Source Cluster:
    Service Account Name: robot-source
    Service Account Namespace: robot-source
Phase: PreBackup

6. Add PostBackup Hook for source cluster

Add this playbook in the playbook screen when creating the migration plan

Name: sourcepostbackuphook
Ansible Playbook:
- hosts: localhost
  tasks:
  - name: Get Pods
    k8s_facts:
      kind: Pod
      namespace: "openshift-migration"
      label_selectors: "app=migration"
    register: pod
    retries: 20
    until: pod.resources | length > 0

  - debug: msg={{ item.metadata.name }}
    loop: "{{ pod.resources }}"
    loop_control:
      label: "{{ item.metadata.name }}"

  - name: Get all namespaces using oc binary
    shell: "oc get namespaces"
    register: ns
    until: ns.rc == 0

  - debug: msg={{ item }}
    loop: "{{ ns.stdout_lines }}"
    loop_control:
      label: "{{ item }}"



Ansible Runtime Image: Leave the default value
Source Cluster:
    Service Account Name: robot-source
    Service Account Namespace: robot-source
Phase: PostBackup

The hook should be added without errors.

7. Add PreRestore Hook for target cluster

Add this playbook in the playbook screen when creating the migration plan

Name: targetprerestorehook
Ansible Playbook:
- hosts: localhost
  tasks:
  - name: Get Pods
    k8s_facts:
      kind: Pod
      namespace: "openshift-migration"
      label_selectors: "app=migration"
    register: pod
    retries: 20
    until: pod.resources | length > 0

  - debug: msg={{ item.metadata.name }}
    loop: "{{ pod.resources }}"
    loop_control:
      label: "{{ item.metadata.name }}"

  - name: Get all namespaces using oc binary
    shell: "oc get namespaces"
    register: ns
    until: ns.rc == 0

  - debug: msg={{ item }}
    loop: "{{ ns.stdout_lines }}"
    loop_control:
      label: "{{ item }}"



Ansible Runtime Image: Leave the default value
Target Cluster:
    Service Account Name: robot-target
    Service Account Namespace: robot-target
Phase: PreRestore   

8. Add PostRestore Hook for target cluster

Add this playbook in the playbook screen when creating the migration plan

Name: targetpostrestorehook
Ansible Playbook:
- hosts: localhost
  tasks:
  - name: Get Pods
    k8s_facts:
      kind: Pod
      namespace: "ocp-29918-hooks"
      label_selectors: "name=mysql"
    register: pod
    retries: 20
    until: pod.resources | length > 0

  - debug: msg={{ item.metadata.name }}
    loop: "{{ pod.resources }}"
    loop_control:
      label: "{{ item.metadata.name }}"

  - name: Get all namespaces using oc binary
    shell: "oc get namespaces"
    register: ns
    until: ns.rc == 0

  - debug: msg={{ item }}
    loop: "{{ ns.stdout_lines }}"
    loop_control:
      label: "{{ item }}"



Ansible Runtime Image: Leave the default value
Target Cluster:
    Service Account Name: robot-target
    Service Account Namespace: robot-target
Phase: PostRestore

9. Execute the migration for the migration plan


Actual results:
On the CAM console, the migration is stuck in Running status.
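
To see where it is stuck from the command line, the MigMigration resource can be inspected directly (a debugging sketch, assuming the default openshift-migration namespace; substitute the generated migration name):

$ oc get migmigration -n openshift-migration
$ oc describe migmigration <migration-name> -n openshift-migration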

Expected results:
The migration should complete successfully.


Additional info:

$ oc logs testhook-prebackup-nbtcj-6nh8g -n robot-source
ERROR! the playbook: /tmp/playbook/playbook.yml could not be found

$ oc get job  -n robot-source -o yaml
apiVersion: v1
items:
- apiVersion: batch/v1
  kind: Job
  metadata:
    creationTimestamp: 2020-05-19T15:01:33Z
    generateName: testhook-prebackup-
    labels:
      app.kubernetes.io/part-of: openshift-migration
      mighook: 26e70c2e-99e1-11ea-a806-0e4a2a109e83
      owner: 9bce6f20-99e1-11ea-a806-0e4a2a109e83
      phase: PreBackup
    name: testhook-prebackup-nbtcj
    namespace: robot-source
    resourceVersion: "106341"
    selfLink: /apis/batch/v1/namespaces/robot-source/jobs/testhook-prebackup-nbtcj
    uid: a0e51a5b-99e1-11ea-a806-0e4a2a109e83
  spec:
    completions: 1
    parallelism: 1
    selector:
      matchLabels:
        controller-uid: a0e51a5b-99e1-11ea-a806-0e4a2a109e83
    template:
      metadata:
        creationTimestamp: null
        labels:
          controller-uid: a0e51a5b-99e1-11ea-a806-0e4a2a109e83
          job-name: testhook-prebackup-nbtcj
      spec:
        activeDeadlineSeconds: 1800
        containers:
        - command:
          - /bin/entrypoint
          - ansible-runner
          - -p
          - /tmp/playbook/playbook.yml
          - run
          - /tmp/runner
          image: quay.io/konveyor/hook-runner:latest
          imagePullPolicy: Always
          name: testhook-prebackup
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /tmp/playbook
            name: playbook
        dnsPolicy: ClusterFirst
        restartPolicy: OnFailure
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: robot-source
        serviceAccountName: robot-source
        terminationGracePeriodSeconds: 30
        volumes:
        - configMap:
            defaultMode: 420
            name: testhook-prebackup-jf2f7
          name: playbook
  status:
    active: 1
    startTime: 2020-05-19T15:01:33Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
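
The Job above mounts the hook ConfigMap at /tmp/playbook and points ansible-runner at /tmp/playbook/playbook.yml, so the failure comes down to what actually landed in that ConfigMap. A quick way to check which field carries the payload (illustrative; output formatting differs slightly between client versions):

$ oc get cm testhook-prebackup-jf2f7 -n robot-source \
    -o jsonpath='data: {.data}{"\n"}binaryData: {.binaryData}{"\n"}'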


$ oc get cm testhook-prebackup-jf2f7 -n robot-source -o yaml
apiVersion: v1
data:
  playbook.yml: |
    - hosts: localhost
      tasks:
      - name: Get Pods
        k8s_facts:
          kind: Pod
          namespace: "openshift-migration"
          label_selectors: "app=migration"
        register: pod
        retries: 20
        until: pod.resources | length > 0

      - debug: msg={{ item.metadata.name }}
        loop: "{{ pod.resources }}"
        loop_control:
          label: "{{ item.metadata.name }}"

      - name: Get all namespaces using oc binary
        shell: "oc get namespaces"
        register: ns
        until: ns.rc == 0

      - debug: msg={{ item }}
        loop: "{{ ns.stdout_lines }}"
        loop_control:
          label: "{{ item }}"
kind: ConfigMap
metadata:
  creationTimestamp: 2020-05-19T15:01:33Z
  generateName: testhook-prebackup-
  labels:
    app.kubernetes.io/part-of: openshift-migration
    mighook: 26e70c2e-99e1-11ea-a806-0e4a2a109e83
    owner: 9bce6f20-99e1-11ea-a806-0e4a2a109e83
    phase: PreBackup
  name: testhook-prebackup-jf2f7
  namespace: robot-source
  resourceVersion: "106163"
  selfLink: /api/v1/namespaces/robot-source/configmaps/testhook-prebackup-jf2f7
  uid: a0e30e95-99e1-11ea-a806-0e4a2a109e83

Comment 1 Jason Montleon 2020-06-17 15:00:59 UTC
This was fixed for a prior release with https://github.com/konveyor/mig-controller/pull/540

Comment 4 Sergio 2020-09-18 13:29:00 UTC
Verified using MTC 1.3

openshift-migration-rhel7-operator@sha256:233af9517407e792bbb34c58558346f2424b8b0ab54be6f12f9f97513e391a6a


On 3.7, hooks could be executed without problems.

The playbook used for the verification was the following (avoiding the "oc" binary, since the "oc" binary in the default image does not match OCP 3.7 and can cause problems):

- hosts: localhost
  tasks:
  - name: Get Pods
    k8s_facts:
      kind: Pod
      namespace: "ocp-29918-hooks"
      label_selectors: "app=nginx"
    register: pod
    retries: 20
    until: pod.resources | length > 0

  - debug: msg={{ item.metadata.name }}
    loop: "{{ pod.resources }}"
    loop_control:
      label: "{{ item.metadata.name }}"


Moved to VERIFIED status.

Comment 8 errata-xmlrpc 2020-09-30 18:42:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) Tool image release advisory 1.3.0), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4148

