Bug 1758345 - downloads pod doesn't respond to drain
Summary: downloads pod doesn't respond to drain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.1.z
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.1.z
Assignee: Antonio Murdaca
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On: 1745772
Blocks:
Reported: 2019-10-03 21:15 UTC by Antonio Murdaca
Modified: 2020-11-26 03:11 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1745772
Environment:
Last Closed: 2020-02-13 06:14:02 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/machine-config-operator pull 1166 (closed): Bug 1758345: [release-4.1] pkg/daemon: default drain grace period to -1 (last updated 2020-02-12 13:55:25 UTC)
Red Hat Product Errata RHBA-2020:0399 (last updated 2020-02-13 06:14:11 UTC)

Comment 1 Antonio Murdaca 2020-01-30 13:17:32 UTC
This has effectively been fixed and closed via https://github.com/openshift/machine-config-operator/pull/1166, but for some reason the BZ hasn't moved through MODIFIED -> ON_QA.
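For context on what the linked PR title ("pkg/daemon: default drain grace period to -1") means in practice: in Kubernetes drain semantics, a grace period of -1 tells drain to honor each pod's own spec.terminationGracePeriodSeconds rather than overriding it with a fixed value, which is why pods such as the downloads pod then terminate cleanly on drain. A minimal sketch of that selection logic, using a hypothetical helper (effectiveGracePeriod is illustrative, not the actual MCO code):

```go
package main

import "fmt"

// effectiveGracePeriod illustrates the drain grace-period semantics:
// a negative drain-level value (-1) defers to the pod's own
// terminationGracePeriodSeconds; a non-negative value overrides it.
func effectiveGracePeriod(drainGracePeriod, podGracePeriod int64) int64 {
	if drainGracePeriod < 0 {
		// -1: use the pod's spec.terminationGracePeriodSeconds,
		// so pods get their full configured shutdown window.
		return podGracePeriod
	}
	// A fixed drain-level value clobbers the pod's setting.
	return drainGracePeriod
}

func main() {
	fmt.Println(effectiveGracePeriod(600, 30)) // fixed override wins: 600
	fmt.Println(effectiveGracePeriod(-1, 30))  // pod's own setting wins: 30
}
```

This matches the documented behavior of kubectl drain's --grace-period flag, where -1 (the default) means "use the pod's configured value."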

Comment 2 W. Trevor King 2020-01-30 16:26:58 UTC
The PR was not automatically moved to MODIFIED because [1] remains open.  Manually moving it to MODIFIED as you've done says "we don't actually need [1] to fix this issue", but there's no way for the robots to figure that out on their own (yet! ;).

[1]: https://github.com/openshift/cluster-api/pull/132

Comment 4 Michael Nguyen 2020-02-05 02:14:07 UTC
Verified on 4.1.0-0.nightly-2020-02-04-094220

$ oc get nodes
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-130-234.ec2.internal   Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-135-69.ec2.internal    Ready    master   58m   v1.13.4+125a3c441
ip-10-0-145-208.ec2.internal   Ready    master   58m   v1.13.4+125a3c441
ip-10-0-155-197.ec2.internal   Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-169-8.ec2.internal     Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-174-89.ec2.internal    Ready    master   58m   v1.13.4+125a3c441
$ oc adm upgrade --force --to-image=quay.io/openshift-release-dev/ocp-release:4.2.16-x86_64
Updating to release image quay.io/openshift-release-dev/ocp-release:4.2.16-x86_64
$ watch oc get clusterversion
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.16    True        False         False      160m
cloud-credential                           4.2.16    True        False         False      170m
cluster-autoscaler                         4.2.16    True        False         False      170m
console                                    4.2.16    True        False         False      80m
dns                                        4.2.16    True        False         False      170m
image-registry                             4.2.16    True        False         False      80m
ingress                                    4.2.16    True        False         False      165m
insights                                   4.2.16    True        False         False      103m
kube-apiserver                             4.2.16    True        False         False      167m
kube-controller-manager                    4.2.16    True        False         False      168m
kube-scheduler                             4.2.16    True        False         False      167m
machine-api                                4.2.16    True        False         False      170m
machine-config                             4.2.16    True        False         False      169m
marketplace                                4.2.16    True        False         False      75m
monitoring                                 4.2.16    True        False         False      58m
network                                    4.2.16    True        False         False      170m
node-tuning                                4.2.16    True        False         False      76m
openshift-apiserver                        4.2.16    True        False         False      59m
openshift-controller-manager               4.2.16    True        False         False      169m
openshift-samples                          4.2.16    True        False         False      102m
operator-lifecycle-manager                 4.2.16    True        False         False      169m
operator-lifecycle-manager-catalog         4.2.16    True        False         False      169m
operator-lifecycle-manager-packageserver   4.2.16    True        False         False      75m
service-ca                                 4.2.16    True        False         False      170m
service-catalog-apiserver                  4.2.16    True        False         False      166m
service-catalog-controller-manager         4.2.16    True        False         False      166m
storage                                    4.2.16    True        False         False      103m


$ oc get node
NAME                           STATUS   ROLES    AGE    VERSION
ip-10-0-130-234.ec2.internal   Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-135-69.ec2.internal    Ready    master   178m   v1.14.6+97c81d00e
ip-10-0-145-208.ec2.internal   Ready    master   178m   v1.14.6+97c81d00e
ip-10-0-155-197.ec2.internal   Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-169-8.ec2.internal     Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-174-89.ec2.internal    Ready    master   178m   v1.14.6+97c81d00e
$ oc debug node/ip-10-0-130-234.ec2.internal -- chroot /host journalctl | grep -i drain
Starting pod/ip-10-0-130-234ec2internal-debug ...
To use host binaries, run `chroot /host`
Feb 05 00:45:28 ip-10-0-130-234 root[13301]: machine-config-daemon[135215]: Update prepared; beginning drain
Feb 05 00:46:29 ip-10-0-130-234 root[16064]: machine-config-daemon[135215]: drain complete

Removing debug pod ...
$ oc debug node/ip-10-0-135-69.ec2.internal -- chroot /host journalctl | grep -i drain
Starting pod/ip-10-0-135-69ec2internal-debug ...
To use host binaries, run `chroot /host`
Feb 05 00:42:29 ip-10-0-135-69 root[6780]: machine-config-daemon[143824]: Update prepared; beginning drain
Feb 05 00:43:17 ip-10-0-135-69 root[11871]: machine-config-daemon[143824]: drain complete

Removing debug pod ...

Comment 6 errata-xmlrpc 2020-02-13 06:14:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0399

