Bug 1835438

Summary: [vsphere][upi] if folder is defined, the installer exits because that folder does not exist
Product: OpenShift Container Platform
Reporter: Joseph Callen <jcallen>
Component: Installer
Assignee: Patrick Dillon <padillon>
Installer sub component: openshift-installer
QA Contact: jima
Status: CLOSED ERRATA
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: bleanhar, bparees, chuffman, jima, padillon
Version: 4.5
Target Milestone: ---
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The installer verified whether a specified folder exists when running create manifests or create install-config.
Consequence: UPI workflows, where the folder is created after those commands have been run, failed.
Fix: Move the check so that folder existence is only verified when running create cluster.
Result: create manifests and create ignition-configs can be run when the folder does not exist.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-13 17:38:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1833137    

Description Joseph Callen 2020-05-13 19:06:06 UTC
Description of problem:

In the recent PR:
https://github.com/openshift/installer/pull/3498

There are two validations that cause issues with UPI:
- Requiring a full inventory path, e.g. /datacenter/vm/folderA/folderB
- Requiring that the folder exists before executing the installer

When the folder parameter is not provided, the folder will be created
by Terraform, but the value in the cloud-config will be incorrect
because it will use the cluster ID.
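The path-format requirement can be illustrated with a short sketch. This is not the installer's actual (Go) validation code; the function name and error wording are hypothetical, assuming the folder must be an absolute inventory path under the datacenter's vm folder:

```python
def validate_folder_path(folder, datacenter):
    """Return an error message when `folder` is not an absolute vSphere
    inventory path under /<datacenter>/vm/, else None.

    Illustrative sketch only; the real check lives in the Go installer.
    """
    expected_prefix = "/%s/vm/" % datacenter
    if not folder.startswith(expected_prefix):
        return ("folder must be a full path such as "
                "%sfolderA/folderB, got %r" % (expected_prefix, folder))
    return None

# A relative value like "folderA/folderB" would be rejected,
# while "/dc1/vm/folderA/folderB" passes.
```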


Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
ansible --version

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Please include the entire output from the last TASK line through the end of output if an error is generated

Expected results:

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Comment 1 jima 2020-05-14 08:39:12 UTC
I tested CORS-1427 with folder set to "/dc1/vm/jima/jima-upi/" and launched the installation.
After the installation completed, I checked cloud-provider-config; the folder is "jima/jima-upi". Is that expected?
# ../oc describe cm cloud-provider-config -n openshift-config
Name:         cloud-provider-config
Namespace:    openshift-config
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
[Global]
secret-name = "vsphere-creds"
secret-namespace = "kube-system"
insecure-flag = "1"

[Workspace]
server = "vcsa-qe.vmware.devcluster.openshift.com"
datacenter = "dc1"
default-datastore = "nvme-ds1"
folder = "jima/jima-upi"

[VirtualCenter "vcsa-qe.vmware.devcluster.openshift.com"]
datacenters = "dc1"

Events:  <none>

When trying to create pvc, error is reported:
# oc describe pvc mypvc01
Name:          mypvc01
Namespace:     default
StorageClass:  thin
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type     Reason              Age                            From                         Message
  ----     ------              ----                           ----                         -------
  Warning  ProvisioningFailed  <invalid> (x3 over <invalid>)  persistentvolume-controller  Failed to provision volume with StorageClass "thin": folder 'jima/jima-upi' not found
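The provisioning error matches the [Workspace] value shown earlier: the absolute path "/dc1/vm/jima/jima-upi/" from the install-config was written to the cloud-config as the datacenter-relative "jima/jima-upi". A hypothetical helper reproducing just that observed transformation (the real logic is in the Go installer):

```python
def cloud_config_folder(folder, datacenter):
    """Derive the [Workspace] folder value from the install-config folder.

    Hypothetical sketch reproducing the transformation observed above:
    "/dc1/vm/jima/jima-upi/" -> "jima/jima-upi".
    """
    prefix = "/%s/vm/" % datacenter
    if folder.startswith(prefix):
        # Strip the datacenter vm-root prefix, leaving a relative path.
        folder = folder[len(prefix):]
    # Drop any trailing slash from the install-config value.
    return folder.rstrip("/")
```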

Comment 2 Joseph Callen 2020-05-14 12:51:53 UTC
Hi Jinyun,
The process has changed for IPI because of BZs; the folder must now be pre-created for IPI.
Did you manually create the folder?

Comment 3 Joseph Callen 2020-05-14 12:56:19 UTC
I should clarify.

If folder is not set in the vsphere platform section of the install-config, the folder will be created based on the cluster ID.
If a folder is provided, it must exist prior to installation (for IPI only).
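The two rules above can be sketched as follows. This is an illustrative model, not installer code; the function name and the example values in the comments are hypothetical:

```python
def resolve_folder(folder, cluster_id, datacenter, existing_folders, ipi=True):
    """Model of the folder rules described above (not installer code).

    - No folder in the install-config: a folder named after the cluster
      ID will be created under /<datacenter>/vm/.
    - Folder provided with IPI: it must already exist.

    Returns (folder_path, needs_creation).
    """
    if not folder:
        return "/%s/vm/%s" % (datacenter, cluster_id), True
    if ipi and folder not in existing_folders:
        raise ValueError("folder '%s' not found" % folder)
    return folder, False
```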

Comment 6 jima 2020-05-18 10:31:41 UTC
On 4.5.0-0.nightly-2020-05-13-221558, the folder existence check runs at the create manifests stage.
# ./openshift-install create manifests --dir ./install
FATAL failed to fetch Master Machines: failed to load asset "Install Config": platform.vsphere.folder: Invalid value: "/dc1/vm/jima/jima-upi": folder '/dc1/vm/jima/jima-upi' not found 


On 4.5.0-0.nightly-2020-05-18-072156, the folder existence check runs at the create cluster stage, so the issue is fixed.
# ./openshift-install create manifests --dir ./install
INFO Consuming Install Config from target directory 

# ./openshift-install create ignition-configs --dir ./install
INFO Consuming Common Manifests from target directory 
INFO Consuming OpenShift Install (Manifests) from target directory 
INFO Consuming Master Machines from target directory 
INFO Consuming Openshift Manifests from target directory 
INFO Consuming Worker Machines from target directory 

# ./openshift-install create cluster --dir ./install
INFO Obtaining RHCOS image file from 'https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.4/44.81.202004250133-0/x86_64/rhcos-44.81.202004250133-0-vmware.x86_64.ova?sha256=453b4a14c95f565a500a5c34e7a181e126d59aa6b86dc448c6ac74ebdb6b5b13' 
INFO The file was found in cache: /root/.cache/openshift-installer/image_cache/79f058324d0c8ff740b84ab3326c4874. Reusing... 
INFO Consuming Master Ignition Config from target directory 
INFO Consuming Bootstrap Ignition Config from target directory 
INFO Consuming Worker Ignition Config from target directory 
FATAL failed to fetch Cluster: failed to fetch dependency of "Cluster": failed to generate asset "Platform Provisioning Check": [platform.vsphere.apiVIP: Required value: must specify a VIP for the API, platform.vsphere.ingressVIP: Required value: must specify a VIP for Ingress, platform.vsphere.folder: Invalid value: "/dc1/vm/jima/jima-upi": folder '/dc1/vm/jima/jima-upi' not found]

Comment 7 Christian Huffman 2020-05-18 12:38:40 UTC
*** Bug 1833137 has been marked as a duplicate of this bug. ***

Comment 8 errata-xmlrpc 2020-07-13 17:38:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409