Bug 1390900
| Summary: | Output info is strange when running the 'oadm manage-node' command | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | DeShuai Ma <dma> |
| Component: | oc | Assignee: | Juan Vallejo <jvallejo> |
| Status: | CLOSED ERRATA | QA Contact: | Xingxing Xia <xxia> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.4.0 | CC: | aos-bugs, ffranz, jokerman, jvallejo, mmccomas, smunilla |
| Target Milestone: | --- | Keywords: | Rebase |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text:

Cause:
1. Pod headers were printed only once for all sets of pods when listing pods from multiple nodes.
2. Running `oadm manage-node <node-1> <node-2> ... --evacuate --dry-run` with multiple nodes printed the same output multiple times, once per specified node.

Consequence: Users would see inconsistent or duplicated pod information.

Fix:
1. Pod headers are now printed for each set of pods per node (sketched after the table below).
2. The output of `oadm manage-node <node-1> <node-2> ... --evacuate --dry-run` is now printed once, regardless of the number of nodes specified (sketched at the end of this report).

Result: Pod data is consistent when listing sets of pods for multiple nodes, and the output of `oadm manage-node <node-1> <node-2> ... --evacuate --dry-run` is no longer duplicated.
| Story Points: | --- | | |
|---|---|---|---|
| Clone Of: | | Environment: | |
| Last Closed: | 2017-08-10 05:15:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
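The first fix in the Doc Text above changes where the pod column header is emitted. Below is a minimal, self-contained Go sketch of that corrected behavior; it is illustrative only, not the actual openshift/origin code, and the `pod` type and `listPodsPerNode` function are invented for this example.

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// pod is a hypothetical, simplified stand-in for a pod summary row.
type pod struct {
	name     string
	ready    string
	status   string
	restarts int
	age      string
}

// listPodsPerNode prints the column header for each node's set of pods,
// mirroring the fixed behavior: before the fix, the header appeared only
// once for all nodes combined.
func listPodsPerNode(podsByNode map[string][]pod) {
	for node, pods := range podsByNode {
		fmt.Printf("\nListing matched pods on node: %s\n", node)
		w := tabwriter.NewWriter(os.Stdout, 0, 8, 3, ' ', 0)
		// Header emitted inside the loop, once per node.
		fmt.Fprintln(w, "NAME\tREADY\tSTATUS\tRESTARTS\tAGE")
		for _, p := range pods {
			fmt.Fprintf(w, "%s\t%s\t%s\t%d\t%s\n", p.name, p.ready, p.status, p.restarts, p.age)
		}
		w.Flush()
	}
}

func main() {
	listPodsPerNode(map[string][]pod{
		"qe-dma36-master-1":               nil,
		"qe-dma36-node-registry-router-1": {{"hello-pod", "0/1", "Running", 0, "15m"}},
	})
}
```

Run against the sample data above, this prints a "Listing matched pods on node: ..." line and a fresh header per node, matching the verified output shown in the description below.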
Description
DeShuai Ma
2016-11-02 08:34:23 UTC
Related PR: https://github.com/openshift/origin/pull/12528
Related upstream PR: https://github.com/kubernetes/kubernetes/pull/40111

The origin PR will merge after the rebase to Kubernetes 1.6. Adding the UpcomingRelease tag to this bug while waiting for the 1.6 Kube rebase.

Fixed by the 1.6 Kubernetes rebase and https://github.com/openshift/origin/pull/12528.

Verified on openshift v3.6.94:

```
[root@qe-dma36-master-1 ~]# oc get node
NAME                              STATUS    AGE       VERSION
qe-dma36-master-1                 Ready     6h        v1.6.1+5115d708d7
qe-dma36-node-registry-router-1   Ready     6h        v1.6.1+5115d708d7

[root@qe-dma36-master-1 ~]# oadm manage-node --list-pods --pod-selector='name=hello-pod' --selector='role=node'

Listing matched pods on node: qe-dma36-master-1
NAME      READY     STATUS    RESTARTS   AGE

Listing matched pods on node: qe-dma36-node-registry-router-1
NAME        READY     STATUS    RESTARTS   AGE
hello-pod   0/1       Running   0          15m

[root@qe-dma36-master-1 ~]# oadm manage-node qe-dma36-node-registry-router-1 --evacuate --dry-run --pod-selector='name=hello-pod'
Flag --evacuate has been deprecated, use 'oadm drain NODE' instead

Listing matched pods on node: qe-dma36-node-registry-router-1
NAME        READY     STATUS    RESTARTS   AGE
hello-pod   0/1       Running   0          15m
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1716
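For the second fix from the Doc Text, a hedged Go sketch of one plausible corrected control flow for `--evacuate --dry-run` over multiple nodes: collect matches first, print once. The `evacuateDryRun` and `podsOnNode` names are hypothetical and do not correspond to the actual origin implementation.

```go
package main

import "fmt"

// evacuateDryRun illustrates the corrected control flow: pods matched across
// all requested nodes are collected first and printed exactly once, instead
// of re-printing the full result inside the per-node loop (the old behavior
// that duplicated the output once per specified node).
func evacuateDryRun(nodes []string, podsOnNode func(string) []string) {
	var matched []string
	for _, node := range nodes {
		// Collect only; nothing is printed inside the loop.
		matched = append(matched, podsOnNode(node)...)
	}
	// A single printing pass, regardless of how many nodes were given.
	for _, p := range matched {
		fmt.Println(p)
	}
}

func main() {
	lookup := func(node string) []string {
		if node == "qe-dma36-node-registry-router-1" {
			return []string{"hello-pod"}
		}
		return nil
	}
	evacuateDryRun([]string{"qe-dma36-master-1", "qe-dma36-node-registry-router-1"}, lookup)
}
```

With this shape, passing two nodes yields "hello-pod" exactly once, which is the deduplicated behavior the Doc Text describes.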