Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1884750

Summary: vSphere - adding a node to the cluster fails when using the RHCOS 4.6.0-0 image with an Ignition config generated by the OCP 4.4 openshift-install
Product: OpenShift Container Platform
Reporter: rehan <rekhan>
Component: Documentation
Assignee: Bob Furu <bfuru>
Status: CLOSED CURRENTRELEASE
QA Contact: Michael Nguyen <mnguyen>
Severity: high
Docs Contact: Vikram Goyal <vigoyal>
Priority: high
Version: 4.6
CC: alchan, aos-bugs, bbreard, bfuru, bgilbert, dbasant, dornelas, fshaikh, imcleod, jligon, jokerman, k-keiichi, kkii, mas-hatada, mfuruta, miabbott, nbhatt, nstielau, rh-container, vjaypurk, walters
Target Milestone: ---
Keywords: Reopened
Target Release: 4.6.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-04-22 13:39:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1186913

Description rehan 2020-10-02 18:36:26 UTC
Description of problem:

Adding a node to the cluster fails with the following error when testing the combination of the rhcos-4.6.0-0.nightly-2020-08-26-093617-x86_64 image and an Ignition config generated by OCP 4.4's openshift-install.

~~~ ERROR SNIP ~~~

  Sep 18 01:02:31 ignition[723]: failed to fetch config: unsupported config version
  Sep 18 01:02:31 ignition[723]: failed to acquire config: unsupported config version
  Sep 18 01:02:31 systemd[1]: ignition-fetch-offline.service: Main process exited, code=exited, status=1/FAILURE
  Sep 18 01:02:31 ignition[723]: Ignition failed: unsupported config version
  Sep 18 01:02:31 systemd[1]: ignition-fetch-offline.service: Failed with result 'exit-code'.
  Sep 18 01:02:31 systemd[1]: Failed to start Ignition (fetch-offline).
  Sep 18 01:02:31 systemd[1]: ignition-fetch-offline.service: Triggering OnFailure= dependencies.

~~~~~~~

On the other hand, adding a new worker node to the OCP 4.6.0-0.nightly-2020-07-31-225620 cluster succeeded when testing rhcos-4.4.3 and an Ignition config generated by OCP 4.4's openshift-install.

Because of this, we will need to continue using the old RHCOS image even after upgrading.
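As a quick sanity check, the spec version that trips Ignition can be read straight out of the generated config. This is a sketch only: the file below is a hypothetical stand-in for a worker.ign from an OCP 4.4/4.5 installer, and it assumes python3 is available.

```shell
# Hypothetical minimal stand-in for a worker.ign generated by an
# OCP <= 4.5 installer; real configs carry much more content.
cat > /tmp/worker.ign <<'EOF'
{"ignition": {"version": "2.2.0"}}
EOF

# RHCOS 4.6 boot media only accepts spec 3.x configs; a 2.x version
# here reproduces the "unsupported config version" failure above.
python3 -c 'import json; print(json.load(open("/tmp/worker.ign"))["ignition"]["version"])'
```

A 2.x result from the real worker.ign confirms the mismatch before any node is booted.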

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

Adding the node fails.

Expected results:

The node should be added successfully without any errors.

Additional info:

Comment 2 Micah Abbott 2020-10-02 19:59:15 UTC
This is expected behavior.

RHCOS 4.6 moved to supporting only Ignition spec v3 for initial installs.  New cluster installs should be done using the same versioned installer/RHCOS artifacts and supported Ignition specification version.

Clusters "born" previous to 4.6 should continue to operate normally after upgrade.

If nodes are added to the cluster via the Machine API Operator, nodes will be created using the original version of the RHCOS artifacts and the original version of the Ignition config that were used during cluster installation.  If nodes are added to the cluster using the RHCOS 4.6 artifacts, an Ignition config using spec v3 must be provided.

Comment 4 Micah Abbott 2020-10-06 19:32:17 UTC
There is higher priority work that we are focusing on right now; targeting for 4.7

Comment 5 rehan 2020-10-07 04:58:20 UTC
(In reply to Micah Abbott from comment #4)
> There is higher priority work that we are focusing on right now; targeting
> for 4.7

This should be treated as a priority, as multiple customers will run into this issue. The problem can be seen with OCP 4.6; can we look into this as a priority?

Regards,
Rehan

Comment 6 Masaki Hatada 2020-10-07 11:18:37 UTC
> There is higher priority work that we are focusing on right now; targeting for 4.7

Please consider that all OCP 4.x users would face this issue after upgrading their cluster to OCP 4.6.

For example, OCP 4.5's openshift-install generates an Ignition config file in the v2 format.
It is not converted to v3 automatically when the OCP cluster itself is upgraded to OCP 4.6.

RHCOS 4.6 cannot handle Ignition config v2, so users cannot reuse the same file when adding a new node to OCP 4.6.
But Red Hat has no plan to provide a conversion tool for old Ignition config files. So what should we do?
Should we use the old RHCOS image to handle Ignition config v2? If so, Red Hat would have to continue supporting the old RHCOS image even with OCP 4.7, 4.8, 4.9...

Please let us know the official way to avoid this issue for OCP4.6.

Comment 7 Colin Walters 2020-10-07 20:50:22 UTC
Yes, we will continue to support old bootimages for the foreseeable future - at a minimum we need to implement
https://github.com/openshift/enhancements/pull/201
so that we *consistently* update bootimages across all platforms and footprints.

I know this may seem strange; this also relates to
https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md
Bear in mind the bootimage is mostly a "shell" that is updated in place - its only purpose is to update.
Or to state this another way, in a cluster "born" in e.g. 4.4 that is upgraded in place to 4.6,
the only relevant trace of 4.4 on newly scaled up worker nodes (or the in-place upgraded nodes)
is the bootloader - everything from the kernel up to kubelet is upgraded to 4.6 before the node
joins the cluster or does much of anything.

Comment 8 Masaki Hatada 2020-10-08 04:40:10 UTC
Dear Colin,

Thank you for your explanation.
However, it's difficult for me to understand it...

https://github.com/openshift/enhancements/pull/201 mentions the bootloader image, but the problem for me is the Ignition config.
OCP 4 running on vSphere cannot use MachineSets, so the old Ignition config file (worker.ign) generated during the installation process is needed to add a new worker node.
The old Ignition config file is not compatible with new bootloader images. We would have to convert the file, but currently there is no official conversion tool.

How can https://github.com/openshift/enhancements/pull/201 resolve this?

Comment 9 Masaki Furuta ( RH ) 2020-10-09 09:11:09 UTC
(In reply to Masaki Hatada from comment #8)
...
> The old ignition config file has no compatibility with new bootloader
> images. We have to convert the file but currently there is no official
> convert tool.
...

Hello Micah Abbott, 

I am sorry for jumping in; I am the RH TAM for the NEC RHOCP team, and thank you for your time and effort in resolving this issue.
This is a really great help.

On a regular TAM conference call with NEC this Thursday, I received a request from NEC to clarify their question in comment #8, so may I get your help in answering it?

As far as I can see from the changes (from 2.2.0 to 3.1.0) at [1862924 – Migrate to ignition config spec v3.1](https://bugzilla.redhat.com/show_bug.cgi?id=1862924) / [BZ#1862924 Ignition config v3.1 migration by codyhoag · Pull Request #24805 · openshift/openshift-docs](https://github.com/openshift/openshift-docs/pull/24805), I think we have to modify a few things in the old 2.2.0 worker.ign, beyond the version number itself, to fit the new 3.1.0 format.

But I also think we already have something useful for this here ([coreos/ign-converter: Mechanical translator to/from Ignition config spec 3.x](https://github.com/coreos/ign-converter) / [Upgrading Configs - coreos/ignition](https://coreos.github.io/ignition/migrating-configs/)).

Won't we provide something like this as part of this enhancement to assist our customers on vSphere in migrating their old v2.2.0 Ignition configs to v3.1.0? Is my understanding correct?

I am grateful for your help and clarification.

Thank you,

BR,
Masaki

Comment 11 Micah Abbott 2020-10-12 19:52:52 UTC
(In reply to Masaki Furuta from comment #9)
> (In reply to Masaki Hatada from comment #8)
> ...
> > The old ignition config file has no compatibility with new bootloader
> > images. We have to convert the file but currently there is no official
> > convert tool.
> ...
> 
> Hello Micah Abbott, 
> 
> I am sorry for jumping in, I am RH TAM for NEC RHOCP Team, and thank you for
> your time and effort to resolve this issue.
> This is really great help.
> 
> On a regular TAM conference call with NEC on this Thursday, I have received
> requst from NEC to clarify the question from NEC at comment #8, so may I
> gert your help to answer to their question (#8) ?
> 
> As far as I can see the changes (from 2.2.0 to 3.1.0) at [1862924 – Migrate
> to ignition config spec
> v3.1](https://bugzilla.redhat.com/show_bug.cgi?id=1862924) / [BZ#1862924
> Ignition config v3.1 migration by codyhoag · Pull Request #24805 ·
> openshift/openshift-docs](https://github.com/openshift/openshift-docs/pull/
> 24805), I think we have to modify/fix something few in old 2.2.0 worker.ign
> to fit it to new 3.1.0 format, other than version number itself.
> 
> But I also think that we already have something useful for this here (
> [coreos/ign-converter: Mechanical translator to/from Ignition config spec
> 3.x](https://github.com/coreos/ign-converter) / [Upgrading Configs -
> coreos/ignition](https://coreos.github.io/ignition/migrating-configs/)).
> 
> Won't we provide something like this as a part of this enhancement to assist
> our customers on vSphere to migrate from their old v2.2.0 ign to v3.1.0 , is
> my understanding correct  ?

After discussing this issue with engineering and support, we decided we needed to clarify our statements about supporting adding new nodes to clusters.

Specifically, we *do not* support adding new nodes to existing clusters using newer boot media.  While this is being discussed in the context of OCP 4.6 and Ignition spec 3, this limitation has existed for *all* versions of OCP.  We have never tested the ability to add additional nodes to a cluster using newer boot media, so we cannot rightly claim to support such an operation.

https://github.com/openshift/openshift-docs/pull/26215


This *does not* mean that customers are unable to add new nodes to clusters installed with older versions.  Customers may still add additional nodes to their clusters using boot media that matches the minor version of OCP that was used to install the cluster.

For example, if a customer installed a cluster using OCP 4.4 and then upgraded to OCP 4.6, they would add additional nodes to the cluster using RHCOS 4.4 boot media and compatible Ignition spec 2 configs.

This means there is no need to migrate the Ignition configs to the newer spec 3 version.


We did not provide any migration tool because it is not possible to successfully translate Ignition spec 2 configs to Ignition spec 3 all of the time.  The `ign-converter` repo specifically calls this out under the "Why is this not part of Ignition?" section:

https://github.com/coreos/ign-converter#why-is-this-not-part-of-ignition

```
This means Ignition can't be guaranteed to automatically translate an old config to an equivalent new config; it can fail at conversion. Since Ignition internally translates old configs to the latest config, this would mean old Ignition configs could stop working on newer versions of whatever OS included Ignition. Additionally, due to the change in how filesystems are handled (new configs require specifying the path relative to the sysroot that Ignition should mount the filesystem at), some configs require extra information to convert from the old versions to the new versions.
```
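To make the filesystem point concrete, here is a minimal, hypothetical side-by-side of the same stanza in the two spec versions. The field layout follows the published Ignition specs; the device and mount point are invented for illustration and are not taken from this bug.

```shell
# Spec 2: the filesystem is named, and files elsewhere in the config
# reference it by that name.
cat > /tmp/fs-spec2.json <<'EOF'
{"ignition": {"version": "2.2.0"},
 "storage": {"filesystems": [
   {"name": "var", "mount": {"device": "/dev/disk/by-label/var", "format": "xfs"}}]}}
EOF

# Spec 3: the name/mount indirection is gone; the entry must carry the
# mount point ("path", relative to the sysroot) directly -- the extra
# information a mechanical converter cannot always infer.
cat > /tmp/fs-spec3.json <<'EOF'
{"ignition": {"version": "3.1.0"},
 "storage": {"filesystems": [
   {"device": "/dev/disk/by-label/var", "format": "xfs", "path": "/var"}]}}
EOF
```

The new `path` field is exactly the kind of information a spec 2 config may not contain, which is why a fully automatic conversion can fail.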

For customers that are installing OCP 4.6 for the first time, the installer will generate Ignition spec 3 configs and use RHCOS 4.6 boot media.  When these customers want to add additional nodes to their cluster, they will need to use RHCOS 4.6 boot media and Ignition spec 3 configs.


Until we have support for dynamically updating/creating boot media in the cluster itself (tracked at https://github.com/openshift/enhancements/pull/201), the limitation on using the boot media that matches the minor version of OCP that was used to install the cluster will exist.

Comment 12 Masaki Hatada 2020-10-13 02:45:06 UTC
Dear Micah,

Thank you for your update.

We understand as follows:
- Red Hat will continue to support the old RHCOS image for adding a new node until https://github.com/openshift/enhancements/pull/201 is implemented
- After https://github.com/openshift/enhancements/pull/201 is implemented, would we no longer use the old Ignition config for adding a new node, even in a vSphere UPI environment?

We are still not sure whether the latter is right, since https://github.com/openshift/enhancements/pull/201 mentions only the boot image.
So we would like to receive an answer on this point from Red Hat.

And even on the former point, Red Hat should describe this in the OCP 4 manual.
Red Hat has announced support across 3 minor versions, so customers have believed that the old RHCOS image shouldn't be used anyway...

We hope that you or another Red Hat engineer will handle the above queries.

Best Regards,
Masaki Hatada

Comment 13 Micah Abbott 2020-10-15 19:04:03 UTC
(In reply to Masaki Hatada from comment #12)
> Dear Micah,
> 
> Thank you for your update.
> 
> We understand as follows:
> - Red Hat will continue to support old RHCOS image for adding a new node
> until https://github.com/openshift/enhancements/pull/201 is implemented
> - After https://github.com/openshift/enhancements/pull/201 was implemented,
> we would no longer use old Ignition Config for adding a new node even in
> vSphere UPI env...???
> 
> We are still not sure if the latter is right, since
> https://github.com/openshift/enhancements/pull/201 mentions only things
> about boot image.
> So we would like to receive the answer of this point from Red Hat.

The tricky thing about answering this accurately and correctly is that we don't know exactly how this will work yet.  

There is still ongoing discussion happening on the enhancement and privately about how to design and implement that enhancement.  This will involve coordination with RHCOS, Machine Config Operator, Machine API Operator and likely other components.

What is accurate is that we do not want to put any of our customers in a position where they are unable to add new nodes to the cluster.  If you see value in retaining the ability to add additional nodes to the cluster with the version of the boot media used to install the cluster, then we need to consider that and figure out how best to address that need.

Ultimately, we want to make the addition of new nodes to the cluster easier and we are hopeful that the outcome of that enhancement will improve that kind of operation.

> 
> And, even about the former point, Red Hat should describe this in OCP4
> manual.
> Red Hat has announced 3 minor versions support so customers has believed
> that old RHCOS image shouldn't be used anyway...

I understand the confusion and I apologize for that.  I have made a PR to the OCP documentation that will be back-ported to older versions of the docs to be more clear about the current restrictions when adding new nodes to the cluster:

https://github.com/openshift/openshift-docs/pull/26215


If you do not have access to the version of the boot media that was used to install your cluster, we retain older versions of the boot media at mirror.openshift.com and access.redhat.com:

https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/
e.g. https://access.redhat.com/downloads/content/290/ver=4.3/rhel---8/4.3.0/x86_64/product-software


Additionally, the Ignition config that is being served by the Machine Config Server to new nodes can be retrieved in-cluster using the instructions here:

https://github.com/openshift/machine-config-operator/blob/master/docs/HACKING.md#accessing-the-machineconfigserver-directly

Comment 14 Keiichi Kii (NEC) 2020-10-19 21:19:52 UTC
Hello Micah,

Thank you for your explanation.

> > And, even about the former point, Red Hat should describe this in OCP4
> > manual.
> > Red Hat has announced 3 minor versions support so customers has believed
> > that old RHCOS image shouldn't be used anyway...
> 
> I understand the confusion and I apologize for that.  I have made a PR to
> the OCP documentation that will be back-ported to older versions of the docs
> to be more clear about the current restrictions when adding new nodes to the
> cluster:
> 
> https://github.com/openshift/openshift-docs/pull/26215

Thank you for your work. We will wait for the back-ports to the older versions.
And please let me confirm, just in case.

When OCP 4.8 is released, maintenance support for OCP 4.5 will end.
However, our customers could still ask Red Hat to troubleshoot issues with adding nodes
to an OCP 4.8 cluster using the OCP 4.5 Ignition config and the RHCOS 4.5 image that
were used to install the existing cluster.

Is my understanding correct?
 

Thanks,
Keiichi

Comment 15 Colin Walters 2020-10-20 00:27:48 UTC
I'd say this even more strongly - likely even when OCP 4.8 is released, we will continue to support and debug issues caused by *all* RHCOS bootimage versions used to install a cluster, all the way back to 4.1.

There are clusters today that have been upgraded "in place" from 4.1 in e.g. AWS, and we *need* to support new workers being "pivoted" from a 4.1 bootimage into 4.6 (and beyond, likely to 4.7 and probably 4.8) until such time as
https://github.com/openshift/enhancements/pull/201
is implemented.

And even after it's implemented, we are going to have cases like bare metal PXE setups where the cluster doesn't control the bootimage, so we will need to continue to support them.

At some point this may become difficult, but our CI at least does cover a lot of this.

So I would not worry about clusters installed as 4.5 because we still need to support even 4.1, 4.2 etc.

Comment 16 Micah Abbott 2020-10-21 14:08:35 UTC
Setting UpcomingSprint; while I think we have answered most (if not all) of the concerns about how to add additional nodes to a cluster, we haven't yet marked this resolved.

Comment 17 Keiichi Kii (NEC) 2020-10-21 19:12:12 UTC
(In reply to Colin Walters from comment #15)
> I'd say this even more strongly - likely even when OCP 4.8 is released, we
> will continue to support and debug issues caused by *all* RHCOS bootimage
> versions used to install a cluster, all the way back to 4.1.

Thank you for your help and great support.

> There are clusters today that have been upgraded "in place" from 4.1 in e.g.
> AWS and until we implement the above-linked enhancement we *need* to support
> new workers being "pivoted" from a 4.1 bootimage into 4.6 (and beyond likely
> to 4.7 and probably 4.8) until such time as
> https://github.com/openshift/enhancements/pull/201
> is implemented.

As Micah-san suggested, we can retrieve the current Ignition Config for adding worker nodes from the existing cluster with the following command:

  curl -H "Accept: application/vnd.coreos.ignition+json; version=3.1.0" -k https://<api-server-url>/config/worker

And then we can also add new worker nodes to the existing cluster with the retrieved Ignition Config and the corresponding RHCOS image.
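Before feeding a retrieved config to boot media, its spec version can be checked against the media generation. This is a sketch: the local file stands in for a config fetched with the curl above, and the 2.x/3.x pairing rule is the one described earlier in this bug.

```shell
# Stand-in for a worker config retrieved from the Machine Config Server.
cat > /tmp/retrieved-worker.ign <<'EOF'
{"ignition": {"version": "3.1.0"}}
EOF

# Dispatch on the spec version to pick compatible boot media.
ver=$(python3 -c 'import json; print(json.load(open("/tmp/retrieved-worker.ign"))["ignition"]["version"])')
case "$ver" in
  3.*) echo "spec 3 config: pair with RHCOS 4.6+ boot media" ;;
  2.*) echo "spec 2 config: pair with the original (pre-4.6) boot media" ;;
esac
```

Running this before provisioning avoids hitting the "unsupported config version" failure at first boot.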

We understand that a mismatch between the Ignition specification version and the RHCOS image may cause issues
and is not supported by Red Hat. But we think the above approach won't cause any issues of this sort.

Is this way supportable for Red Hat?

If it is also supported, it makes the steps to add worker nodes simpler and easier.
For example, when our customers deploy RHOCP 4.5 clusters, they will keep upgrading to newer RHOCP versions like
RHOCP 4.8, 4.9, 4.10 and so on (and maybe RHOCP 5.x, RHOCP 6.x and later, if possible).
If they try to add new worker nodes to the upgraded RHOCP cluster, they prefer to retrieve the Ignition config
as well as the RHCOS image from the existing cluster.

FYI, we tested adding worker nodes this way and confirmed that it works fine.

Thanks,
Keiichi

Comment 18 Colin Walters 2020-10-23 22:05:52 UTC
https://github.com/openshift/installer/pull/4300

Comment 19 Keiichi Kii (NEC) 2020-11-10 22:51:52 UTC
Hello Colin-san,

Please let me confirm.

> https://github.com/openshift/installer/pull/4300

Moving to https://github.com/openshift/oc/pull/628

(Quoted from openshift/oc#628)
> The main stumbling block here is the pointer ignition config
> which is generated by openshift-install. Since the idea is
> openshift-install should in theory be disposable after a cluster
> is provisioned, let's add this to oc which admins will need anyways.

We understand Red Hat plans to make it possible to add new nodes without the old Ignition config and base images that were used to install the existing cluster.

At this point, our customers need to keep their old Ignition config and base images to add new nodes.
And this is supported by Red Hat because:

  "Red Hat will continue to support and debug issues caused by *all* RHCOS bootimage versions used to install a cluster, all the way back to 4.1."

Once this tool is available and https://github.com/openshift/enhancements/pull/201 is implemented,
our customers won't need to keep them anymore, because the Ignition config and base image can be retrieved from the existing cluster.

Is my understanding correct?

Thanks,
Keiichi

Comment 20 Micah Abbott 2020-11-11 19:10:10 UTC
Marking for UpcomingSprint; we have internal documentation explaining how to add additional nodes that is currently under review.

Comment 21 Keiichi Kii (NEC) 2020-11-23 23:23:06 UTC
Hello,

We heard that the new document explaining how to add additional nodes is in progress.
As we asked in comment #17, is adding new nodes with the retrieved Ignition config and the corresponding RHCOS image supportable?

Thanks,
Keiichi

Comment 22 Micah Abbott 2020-12-04 21:57:54 UTC
(In reply to Keiichi Kii (NEC) from comment #21)
> Hello,
> 
> We heard that the new document explaining how to add additional nodes is
> on-going.
> As we asked in comment #17, is adding new nodes with the retrieved Ignition
> Config and the corresponding RHCOS image supportable?
> 
> Thanks,
> Keiichi

Yes, the general idea described in comment #17 is what is captured in the internal document.  We are still awaiting additional review.

Comment 23 Derrick Ornelas 2021-01-06 21:42:18 UTC
The following kbase solution covers the steps needed to add new nodes to Bare Metal and VMWare vSphere UPI clusters that have been upgraded to OCP 4.6:

  Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ 
  https://access.redhat.com/solutions/5514051

Comment 24 Micah Abbott 2021-01-08 20:27:26 UTC
With the kbase article from comment #23 published, there is no further action to take on this BZ.  We will have an improved experience around this problem when https://github.com/openshift/enhancements/pull/201 is implemented in a future release.

Closing this as NOTABUG

Comment 25 Keiichi KII 2021-01-08 21:22:01 UTC
Hello Micah-san,

Thank you for releasing the kbase solution.

Please let me confirm the current situation for adding new nodes to the upgraded cluster from OCP4.5 to OCP4.6+.
We understand we have two supported ways to add new nodes:

1. Using the old Ignition Config and the base image that were used to originally deploy the existing cluster
   => We discussed and confirmed this way as mentioned in 1884750#c11. 

2. Using an Ignition config manually migrated to Ignition spec v3, together with the base image corresponding to the existing cluster
   => This way is mentioned in https://access.redhat.com/solutions/5514051
   => The boot media limitation is removed in https://github.com/openshift/openshift-docs/pull/26683

Is my understanding correct?

Thanks,
Keiichi

Comment 26 Keiichi Kii (NEC) 2021-01-20 00:25:14 UTC
Hello,

I would appreciate it if you could reply to me for 1884750#c25.

Thanks,
Keiichi

Comment 27 Keiichi Kii (NEC) 2021-02-02 21:33:52 UTC
Hello,

Any comments for 1884750#c25?

Thanks,
Keiichi

Comment 28 Micah Abbott 2021-02-02 21:41:44 UTC
(In reply to Keiichi Kii (NEC) from comment #27)
> Hello,
> 
> Any comments for 1884750#c25?
> 
> Thanks,
> Keiichi

Apologies for the delay; your understanding is correct.

Though I would emphasize that Option 2 is only valid for UPI bare metal + vSphere installs.

Comment 29 Keiichi Kii (NEC) 2021-02-10 00:08:34 UTC
Hello Micah-san,

Thank you for the clarification.

Can you update the following document so that customers who want to upgrade their cluster from OCP 4.5 or earlier to OCP 4.6+ can do so without any issues?

https://docs.openshift.com/container-platform/4.6/machine_management/user_infra/adding-bare-metal-compute-user-infra.html

In the current situation, all users will face this issue when adding new nodes to a cluster upgraded to OCP 4.6+,
because upgrading clusters is not a special operation and all users will do it.
As a result, when users face this issue, they will escalate a support case to troubleshoot it or find https://access.redhat.com/solutions/5514051 in the Red Hat kbase.

Bug 1914429 is related to this issue.
In 1914429#c6, Red Hat plans to update the related document with the following change:

> Any example around creating a MachineSet from scratch that we do decide to include will need to include instructions for finding the correct cloud image for their platform (by looking at an existing master/worker MachineSet).

A similar change for this Bugzilla would also be user-friendly for customers.

Thanks,
Keiichi

Comment 30 Micah Abbott 2021-02-15 20:06:41 UTC
(In reply to Keiichi Kii (NEC) from comment #29)
> Hello Micah-san,
> 
> Thank you for the clarification.
> 
> Can you update the following document so that customers who want to update
> their cluster from OCP4.5- to OCP4.6+ can update their clusters without any
> issues?
> 
> https://docs.openshift.com/container-platform/4.6/machine_management/
> user_infra/adding-bare-metal-compute-user-infra.html
> 
> In the current situation, all of users will face this issue when adding new
> nodes to the upgraded cluster to OCP4.6+.
> Because upgrading clusters is not a special operation and all of users will
> do that.
> As a result, if the users faced this issue, they will escalate a support
> case to troubleshoot this issue or find
> https://access.redhat.com/solutions/5514051 in the Red Hat kbase.

I've suggested some minor changes to the docs here - https://github.com/openshift/openshift-docs/pull/29506

If you do not think this is sufficient, please suggest how to improve the changes.

Comment 31 Masaki Furuta ( RH ) 2021-03-01 07:22:12 UTC
(In reply to Micah Abbott from comment #30)
<...>
> I've suggested some minor changes to the docs here -
> https://github.com/openshift/openshift-docs/pull/29506
> 
> If you do not think this is sufficient, please suggest how to improve the
> changes.

According to comments 28 and 30, I am going to reopen this BZ now to match the status and situation.

/Masaki

Comment 32 Micah Abbott 2021-03-01 14:42:20 UTC
(In reply to Masaki Furuta from comment #31)
> (In reply to Micah Abbott from comment #30)
> <...>
> > I've suggested some minor changes to the docs here -
> > https://github.com/openshift/openshift-docs/pull/29506
> > 
> > If you do not think this is sufficient, please suggest how to improve the
> > changes.
> 
> According to comment 28 and 30, I am going to reopen BZ now to match the
> status and situiation. 
> 
> /Masaki

Since there is nothing to be done from the RHCOS perspective and it is only tracking a doc change, I am moving this to the Documentation component.

Comment 33 Bob Furu 2021-03-19 20:11:49 UTC
Thanks for the PR edit, Micah. I made a minor suggestion. If you agree, please add and squash, and I can get this merged.

Comment 41 Bob Furu 2021-03-27 00:52:29 UTC
https://github.com/openshift/openshift-docs/pull/29506 was merged and cherry-picked to 4.6+. Waiting for the docs to be live on docs.openshift.com before closing this BZ.

Comment 42 Bob Furu 2021-03-28 00:33:25 UTC
The docs update PR (https://github.com/openshift/openshift-docs/pull/29506) was merged and is now available on docs.openshift.com for OCP 4.6+. For example: https://docs.openshift.com/container-platform/4.7/machine_management/user_infra/adding-bare-metal-compute-user-infra.html#prerequisites

Closing BZ.

Comment 43 Fatima 2021-04-19 23:36:16 UTC
In reference to this comment, https://bugzilla.redhat.com/show_bug.cgi?id=1884750#c29, and as per this bug, the same banner is requested on the vSphere doc as well.

As this bug was created for vSphere, the customer has requested that the vSphere doc be updated just like the bare-metal one: https://docs.openshift.com/container-platform/4.7/machine_management/user_infra/adding-bare-metal-compute-user-infra.html#prerequisites

Will that be handled in this same bug, or shall I create a new one now?
Do let me know, thanks.

Comment 44 Bob Furu 2021-04-21 15:14:09 UTC
Hi Fatima,

Thank you for your request. It is fine to use this BZ to get the info added to the vSphere docs as well. I've opened a PR to get this addressed: https://github.com/openshift/openshift-docs/pull/31756

Moving to ON_QA for review.

Comment 45 Bob Furu 2021-04-22 13:28:18 UTC
Hi Fatima,

This has been SME reviewed, approved, and merged. It is now live in docs: https://docs.openshift.com/container-platform/4.7/machine_management/user_infra/adding-vsphere-compute-user-infra.html

Re-closing BZ.

Thanks again!