Bug 1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RHCOS
Version: 4.7
Hardware: s390x
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: slowrie
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-11-23 14:03 UTC by Dan Li
Modified: 2021-02-24 15:35 UTC
CC: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:35:25 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub: openshift/os issue 448 (open): s390x: Upgrade from old luks to new not working with DASD disks (last updated 2021-01-18 12:54:25 UTC)
- Red Hat Product Errata: RHSA-2020:5633 (last updated 2021-02-24 15:35:55 UTC)

Description Dan Li 2020-11-23 14:03:22 UTC
Description of problem:

On systems with DASD disks, the upgrade from the old LUKS format to the new one does not work as intended. The investigation that followed revealed that the method for upgrading LUKS devices would have to change significantly to support DASD disks and would need to be moved into the initrd (rather than the real root).
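To illustrate the branch the upgrade path would need, here is a hypothetical sketch (not the actual RHCOS upgrade script): before touching the disk layout, the migration logic would have to detect whether the root disk is a DASD and, if so, defer the LUKS migration to the initrd. The `is_dasd` helper and the device names are assumptions for illustration only.

```shell
#!/bin/sh
# Hypothetical sketch: classify whether a block device name refers to an
# s390x DASD. On Linux, DASD devices appear as /dev/dasda, /dev/dasdb, ...
is_dasd() {
    case "$1" in
        dasd*) return 0 ;;  # DASD: layout differs from SCSI/virtio disks
        *)     return 1 ;;
    esac
}

# Example: on a DASD-backed root, the in-place (real-root) LUKS migration
# cannot proceed; it would have to be deferred to the initrd instead.
dev="dasda"
if is_dasd "$dev"; then
    echo "DASD: defer LUKS migration to initrd"
else
    echo "non-DASD: in-place migration possible"
fi
```

In a real script the device name would come from something like `lsblk -n -o PKNAME "$(findmnt -n -o SOURCE /)"` rather than being hard-coded.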

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Specifically, this will not work: https://github.com/openshift/os/blob/master/overlay.d/05rhcos/usr/libexec/rhcos-upgrade-root-filesystem-container#L39

A GitHub issue has been created: #448
https://github.com/openshift/os/issues/448

Comment 1 Dan Li 2020-11-23 14:05:13 UTC
Re-assigning to Prashanth, as he initially discovered the bug and created a GitHub ticket to investigate the problem.

Comment 2 Dan Li 2020-11-23 14:09:14 UTC
Setting target release as 4.7 and blocker+ for now. We are waiting for the x86 team to investigate further, as a workaround has been proposed. This bug may change to a non-blocker if the workaround is accepted and implemented.

Comment 3 Dan Li 2020-12-01 14:30:17 UTC
Keeping the "Blocker+" flag on until we can confirm that this bug does not prevent upgrade.

Comment 4 Dan Li 2020-12-02 18:47:16 UTC
Hi Prashanth, if this bug will not be resolved before the end of this sprint, can we add "UpcomingSprint"?

Comment 5 Dan Li 2020-12-04 22:05:02 UTC
Adding "UpcomingSprint" as this will not be resolved during this sprint. Will re-evaluate blocker status next week.

Comment 6 Dan Li 2020-12-10 18:44:14 UTC
Hi Prashanth, just checking in once again. Is this bug still a blocker? If we are still able to upgrade, then this bug shouldn't be a blocker.

Comment 7 Dan Li 2020-12-14 14:04:30 UTC
After chatting with Prashanth, we think this bug should be reassigned to the RHCOS team as the fix for this bug is being worked on by their team. 

Hi RHCOS team, I wanted to bring up this blocker bug to your attention. Since your team is working on the "DASD disks + LUKS in RHCOS/OCP 4.7" fix, I am re-assigning this bug from Multi-Arch to your sub-component. Please re-assign as needed. Thank you.

Comment 8 Micah Abbott 2020-12-14 14:59:41 UTC
I believe Nikita and/or Benjamin are working on a fix for this.

Comment 10 Benjamin Gilbert 2021-01-11 15:33:45 UTC
We've reverted the upgrade code for 4.7, and we will not reintroduce it without properly handling DASD disks.

Comment 11 Dan Li 2021-01-11 15:56:35 UTC
Prashanth and I chatted, and he has tested the rhcos upgrade from 4.6->4.7, which worked fine. In addition, we would prefer that our IBM Power and Z colleagues perform basic testing on this too.

Hi @aprabhak and @wvoesch would your teams have the cycle to test an OCP upgrade from 4.6->4.7 on DASD systems from your (P & Z) side to confirm that this bug would not prevent rhcos upgrade? We want to make sure that this bug is not blocking anything.

Comment 12 wvoesch 2021-01-12 08:15:25 UTC
On the Z side we test that on a regular basis. 
We have tested the upgrade from 4.6 to 4.7.0-fc.2.

Comment 14 Archana Prabhakar 2021-01-18 09:29:45 UTC
On P, we tested the following paths.
Upgrade from 4.6.10 --> 4.7.0-fc.2 (libvirt environment)
Upgrade from 4.6.10 --> 4.7.0-fc.2 (powervm environment)

Comment 15 Dan Li 2021-01-18 12:56:34 UTC
Since our P & Z colleagues have tested this and confirmed that it does not block the upgrade, I am de-escalating this bug to "Blocker-": it is still a valid bug, but it will not block the rhcos upgrade.

Comment 16 Prashanth Sundararaman 2021-01-18 14:49:51 UTC
The fc2 image includes the removal of the upgrade scripts, which should fix this issue, and from the comments above it looks like this has been tested. Marking it verified.

Comment 19 errata-xmlrpc 2021-02-24 15:35:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

