Bug 1935174
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | [4.7.z] RHCOS boot image bump (LUKS growfs, NM initrd carrier-timeout) | | |
| Product: | OpenShift Container Platform | Reporter: | Micah Abbott <miabbott> |
| Component: | RHCOS | Assignee: | Micah Abbott <miabbott> |
| Status: | CLOSED ERRATA | QA Contact: | Michael Nguyen <mnguyen> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.7 | CC: | bbreard, bgilbert, bsmitley, dornelas, fgiloux, imcleod, jlebon, jligon, keyoung, lucab, miabbott, mnguyen, mstaeble, nstielau, wking |
| Target Milestone: | --- | | |
| Target Release: | 4.7.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1934557 | Environment: | |
| Last Closed: | 2021-04-20 18:52:40 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1934557 | | |
| Bug Blocks: | 1922417, 1934863, 1940966, 1941760, 1942706, 1971038 | | |
Description
Micah Abbott
2021-03-04 13:46:48 UTC
Shouldn't hold a release; just important to get it shipped sooner rather than later. Dropping the `blocker` flag.

It looks like this is taking a bit longer than expected, and the PR is currently on hold pending further investigation/validation. In the meantime another bump request came in at https://bugzilla.redhat.com/show_bug.cgi?id=1922417, related to NetworkManager >= 1:1.26.0-14.1.rhaos4.7.el8. I'm lumping that request into this same bootimages bump; please update the artifacts to be at least as recent as RHCOS 47.83.202103181343.

The RHCOS build pipeline is down due to infrastructure issues; once builds are being made again, I'll update the release-4.7 installer branch to the new version of RHCOS.

4.7 Installer boot image bump for release-4.7: https://github.com/openshift/installer/pull/4791

Bumping the severity to high due to a number of customers waiting for RHCOS fixes in 4.7.

Verified on 4.7.0-0.nightly-2021-04-13-144216:

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-04-13-144216   True        False         13m     Cluster version is 4.7.0-0.nightly-2021-04-13-144216

$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-152-189.us-west-2.compute.internal   Ready    worker   28m   v1.20.0+c8905da
ip-10-0-156-101.us-west-2.compute.internal   Ready    master   42m   v1.20.0+c8905da
ip-10-0-164-3.us-west-2.compute.internal     Ready    master   41m   v1.20.0+c8905da
ip-10-0-176-163.us-west-2.compute.internal   Ready    worker   30m   v1.20.0+c8905da
ip-10-0-211-31.us-west-2.compute.internal    Ready    worker   31m   v1.20.0+c8905da
ip-10-0-214-55.us-west-2.compute.internal    Ready    master   41m   v1.20.0+c8905da

$ oc debug node/ip-10-0-152-189.us-west-2.compute.internal
Starting pod/ip-10-0-152-189us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
```
```
sh-4.2# chroot /host
sh-4.4# cryptsetup luksDump /dev/disk/by-partlabel/root
LUKS header information
Version:        2
Epoch:          6
Metadata area:  16384 [bytes]
Keyslots area:  16744448 [bytes]
UUID:           a23fe7f9-5aa2-408f-a631-d45d26fcf61f
Label:          (no label)
Subsystem:      (no subsystem)
Flags:          (no flags)

Data segments:
  0: crypt
        offset: 16777216 [bytes]
        length: (whole device)
        cipher: aes-cbc-essiv:sha256
        sector: 512 [bytes]

Keyslots:
  1: luks2
        Key:        256 bits
        Priority:   normal
        Cipher:     aes-cbc-essiv:sha256
        Cipher key: 256 bits
        PBKDF:      argon2i
        Time cost:  4
        Memory:     837764
        Threads:    2
        Salt:       41 e2 57 15 c7 bc 11 48 21 90 c3 c5 f6 7d 57 ea
                    02 61 da 04 dc b3 65 f0 2e 4a be c2 7e eb ee 3e
        AF stripes: 4000
        AF hash:    sha256
        Area offset:163840 [bytes]
        Area length:131072 [bytes]
        Digest ID:  0
Tokens:
  0: clevis
        Keyslot:  1
Digests:
  0: pbkdf2
        Hash:       sha256
        Iterations: 217366
        Salt:       35 ab 2d 55 4d 5e f1 5c fe d1 28 eb 3c 7b 51 01
                    b8 ea ff 4f 53 25 6c 5e 2a 77 7b b3 f7 6a e6 20
        Digest:     cd 8c c6 ec 10 45 7a 13 4c 23 e5 27 c1 d4 18 d7
                    b0 e0 37 fe ba a4 f8 41 51 57 84 6d 32 cf 7d 5d

sh-4.4# clevis luks list -d /dev/disk/by-partlabel/root
1: sss '{"t":1,"pins":{"tang":[{"url":"http://34.221.223.65"}]}}'

sh-4.4# findmnt /var | more
TARGET SOURCE                                     FSTYPE OPTIONS
/var   /dev/mapper/root[/ostree/deploy/rhcos/var] xfs    rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota

sh-4.4# exit
exit
sh-4.2# exit
exit
Removing debug pod ...

$ oc -n openshift-machine-api get machinesets
NAME                                    DESIRED   CURRENT   READY   AVAILABLE   AGE
mnguyen47boot-jrwqr-worker-us-west-2a   1         1         1       1           51m
mnguyen47boot-jrwqr-worker-us-west-2b   1         1         1       1           51m
mnguyen47boot-jrwqr-worker-us-west-2c   1         1         1       1           51m
mnguyen47boot-jrwqr-worker-us-west-2d   0         0                             51m

$ oc -n openshift-machine-api get machinesets/mnguyen47boot-jrwqr-worker-us-west-2a -o yaml | grep ami
      f:ami: {}
        ami:
          id: ami-0617611237b58ac93
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
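The bump also folds in bug 1922417, which requires NetworkManager >= 1:1.26.0-14.1.rhaos4.7.el8 on the nodes. A minimal sketch of an extra spot check for that, with a hypothetical helper name; note `sort -V` only approximates rpm's full version-comparison rules, but is adequate for these dotted-numeric strings:

```shell
# nm_version_ok VERSION-RELEASE -> exit 0 when the given NetworkManager build
# is at least 1.26.0-14.1, the minimum fixed build referenced in bug 1922417.
# Hypothetical helper, not part of the verification steps above.
nm_version_ok() {
    min="1.26.0-14.1"
    # The minimum must sort first (or equal) relative to the candidate.
    [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# On a node (e.g. inside `oc debug node/... -- chroot /host`) you might feed
# it the installed build:
#   nm_version_ok "$(rpm -q --qf '%{VERSION}-%{RELEASE}' NetworkManager)"
nm_version_ok "1.26.0-14.1" && echo "ok: exact minimum"
nm_version_ok "1.26.0-13" || echo "too old: 1.26.0-13"
```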
For information on the advisory (OpenShift Container Platform 4.7.7 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1149