Bug 1576658 - oc adm migrate storage fails if pods deleted during the migration
Summary: oc adm migrate storage fails if pods deleted during the migration
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: 3.7.z
Assignee: Maciej Szulik
QA Contact: Wang Haoran
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-10 05:10 UTC by Kenjiro Nakayama
Modified: 2021-09-09 14:00 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-15 11:30:45 UTC
Target Upstream Version:
Embargoed:



Description Kenjiro Nakayama 2018-05-10 05:10:50 UTC
Description of problem:
- When we run oc adm migrate storage, we get the following error:

```
    "stdout": "E0507 17:21:02.645139 error:     -n xxx pods/config-rest-2-thzt8: pods \"config-rest-2-thzt8\" not found\nE0507 17:21:02.708902 error:     -n xxx pods/end-to-end-test-1-668xr: pods \"end-to-end-test-1-668xr\" not found\nE0507 17:21:02.741192 error:     -n xxx pods/kafka-1-zlc9b: pods \"kafka-1-zlc9b\" not found\nsummary: total=38931 errors=3 ignored=0 unchanged=28861 migrated=10067\ninfo: to rerun only failing resources, add --include=pods\nerror: 3 resources failed to migrate",
    "stdout_lines": [
        "E0507 17:21:02.645139 error:     -n xxx pods/config-rest-2-thzt8: pods \"config-rest-2-thzt8\" not found",
        "E0507 17:21:02.708902 error:     -n xxx pods/end-to-end-test-1-668xr: pods \"end-to-end-test-1-668xr\" not found",
        "E0507 17:21:02.741192 error:     -n xxx pods/kafka-1-zlc9b: pods \"kafka-1-zlc9b\" not found",
        "summary: total=38931 errors=3 ignored=0 unchanged=28861 migrated=10067",
        "info: to rerun only failing resources, add --include=pods",
        "error: 3 resources failed to migrate"
    ]
```

- The root cause of this issue is that "# oc adm --config=/etc/origin/master/admin.kubeconfig migrate storage --include=* --confirm", run by "TASK [Upgrade all storage]", fails if pods are restarted (re-created) while the migration is running.
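
For context, this is a list-then-write race: the migration snapshots the resources up front and then re-persists each one, so any pod deleted after the list returns NotFound. The Go sketch below is only an illustration of that pattern, not the actual oc adm migrate storage code; the migratePods helper is invented for the example, and the namespace "xxx" and kubeconfig path are taken from this report.

```
// Hypothetical sketch of the list-then-rewrite pattern that produces the
// "not found" errors above; NOT the actual `oc adm migrate storage` code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// migratePods (hypothetical helper) re-writes every pod in a namespace so the
// API server re-persists it in the current storage format.
func migratePods(ctx context.Context, client kubernetes.Interface, namespace string) error {
	// Step 1: snapshot the pods that exist right now.
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	failed := 0
	// Step 2: issue a no-op update for each listed pod. A pod deleted after
	// step 1 (e.g. a CrashLooping pod re-created under a new name) no longer
	// exists here, so the update fails with NotFound.
	for i := range pods.Items {
		pod := &pods.Items[i]
		if _, err := client.CoreV1().Pods(namespace).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
			fmt.Printf("error:     -n %s pods/%s: %v\n", namespace, pod.Name, err)
			failed++
		}
	}
	if failed > 0 {
		return fmt.Errorf("%d resources failed to migrate", failed)
	}
	return nil
}

func main() {
	// Kubeconfig path taken from the command quoted above.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/origin/master/admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := migratePods(context.Background(), client, "xxx"); err != nil {
		fmt.Println(err)
	}
}
```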

Version-Release number of the following components:
- OCP 3.7

How reproducible: 10%

Steps to Reproduce:
1. Delete or re-create pods while oc adm migrate storage is running. (This usually happens with CrashLooping pods.)

Actual results:
- Please refer to the error above.

Expected results:
- The upgrade playbook succeeds without errors.
- We would like to request an improvement to the success rate of "oc adm migrate storage" (TASK [Upgrade all storage]), since the upgrade playbook takes a long time and re-running it is a burden.

Comment 2 Seth Jennings 2018-05-10 19:12:42 UTC
I think this is just due to a delay between when the pod resources are listed and when the pods are actually migrated.  Pods that are removed between those two points cause errors, but the errors are harmless.

Maybe we should not return errors in that case.
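
A minimal sketch of that suggestion, assuming the same hypothetical migratePods loop shown in the description: a NotFound error on a pod that disappeared between the List and the Update is counted as ignored rather than as a failure, so a deleted pod no longer fails the whole run. The function name and the summary line format are illustrative only, not the actual oc implementation.

```
// Hypothetical variant of the earlier sketch that tolerates pods deleted
// mid-migration instead of reporting them as failures.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func migratePodsTolerant(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	failed, ignored := 0, 0
	for i := range pods.Items {
		pod := &pods.Items[i]
		_, err := client.CoreV1().Pods(namespace).Update(ctx, pod, metav1.UpdateOptions{})
		switch {
		case err == nil:
			// Re-persisted successfully.
		case apierrors.IsNotFound(err):
			// The pod was deleted after it was listed; there is nothing left
			// to migrate, so treat the error as harmless and move on.
			ignored++
		default:
			fmt.Printf("error:     -n %s pods/%s: %v\n", namespace, pod.Name, err)
			failed++
		}
	}
	fmt.Printf("summary: total=%d errors=%d ignored=%d\n", len(pods.Items), failed, ignored)
	if failed > 0 {
		return fmt.Errorf("%d resources failed to migrate", failed)
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/origin/master/admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := migratePodsTolerant(context.Background(), client, "xxx"); err != nil {
		fmt.Println(err)
	}
}
```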

