Bug 1524342 - [CNS] standalone deployment fail to swap registry backend storage
Summary: [CNS] standalone deployment fail to swap registry backend storage
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Jose A. Rivera
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-11 09:58 UTC by Wenkai Shi
Modified: 2019-01-31 15:15 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-31 15:15:10 UTC
Target Upstream Version:
Embargoed:



Description Wenkai Shi 2017-12-11 09:58:00 UTC
Description of problem:
The standalone CNS deployment playbook failed to swap the registry's backend storage to a GlusterFS volume.

Version-Release number of the following components:
openshift-ansible-3.7.14-1.git.0.4b35b2d.el7
ansible-2.4.1.0-1.el7

How reproducible:
100%

Steps to Reproduce:
1. Run installation playbook to deploy an OCP cluster
# ansible-playbook -i hosts -vv /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
... playbook succeeded ...
2. Run the standalone deployment playbook to deploy CNS on the existing OCP cluster as registry backend storage (a check of which claim the registry ends up mounting is sketched after the steps).
# cat hosts
...
[OSEv3:children]
masters
nodes
etcd
glusterfs_registry
[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_glusterfs_swap=true
...
[nodes]
glusterfs-1.example.com
glusterfs-2.example.com
glusterfs-3.example.com
...
[glusterfs_registry]
glusterfs-1.example.com  glusterfs_devices="['/dev/vsda']"
glusterfs-2.example.com  glusterfs_devices="['/dev/vsda']"
glusterfs-3.example.com  glusterfs_devices="['/dev/vsda']"
...
# ansible-playbook -i hosts -vv /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-glusterfs/registry.yml
...playbook succeeded too...
3.

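If the swap had taken effect, the docker-registry deployment would now mount the GlusterFS-backed claim. A minimal way to check which claim the registry actually uses (assuming the registry runs as the usual docker-registry deploymentconfig in the default project):

# oc set volume dc/docker-registry -n default
# oc get dc docker-registry -n default -o jsonpath='{.spec.template.spec.volumes[*].persistentVolumeClaim.claimName}'
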
Actual results:
# oc describe pvc registry-glusterfs-claim
Name:		registry-glusterfs-claim
Namespace:	default
StorageClass:	standard
Status:		Pending
Volume:		
Labels:		<none>
Annotations:	volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Capacity:	
Access Modes:	
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  1m		8s		8	persistentvolume-controller			Warning		ProvisioningFailed	Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
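The claim requests ReadWriteMany, but because it is not pinned to the pre-created GlusterFS PV it falls to the default "standard" (gce-pd) storage class, whose provisioner only supports ReadWriteOnce/ReadOnlyMany, so dynamic provisioning fails instead of the claim binding to registry-glusterfs-volume. For comparison, a minimal hand-written claim that binds the statically provisioned PV could look like the following (a sketch only, not what the playbook generates; the empty storageClassName and the explicit volumeName are the relevant parts):

# cat registry-glusterfs-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-glusterfs-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  # empty string opts out of the default (gce-pd) storage class
  storageClassName: ""
  # bind directly to the pre-created GlusterFS PV
  volumeName: registry-glusterfs-volume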


Expected results:
The PVC should be bound.

Additional info:
The installer created a default storage class because the environment is on GCP with a cloud provider configured. This seems related.
# oc get sc 
NAME                 TYPE
standard (default)   kubernetes.io/gce-pd   
# oc get pv 
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                                   STORAGECLASS   REASON    AGE
pvc-40ea35d0-de50-11e7-895b-42010af0001b   1Gi        RWO           Delete          Bound       openshift-ansible-service-broker/etcd   standard                 47m
registry-glusterfs-volume                  5Gi        RWX           Retain          Available                                                                    35m
registry-volume                            5Gi        RWX           Retain          Available                                                                    35m
# oc get pvc 
NAME                       STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim             Pending                                      standard       35m
registry-glusterfs-claim   Pending                                      standard       35m

# oc get po -n glusterfs
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-jp58v   1/1       Running   0          54m
glusterfs-registry-b5f6r                       1/1       Running   0          59m
glusterfs-registry-fsls7                       1/1       Running   0          59m
glusterfs-registry-tx742                       1/1       Running   0          59m
heketi-registry-1-h56pz                        1/1       Running   0          55m
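
Since the claim ends up on the default class, one way to confirm the interaction (and to work around it while testing) is to unmark "standard" as the default storage class. Depending on the exact cluster version, the annotation may be storageclass.beta.kubernetes.io/is-default-class instead; and note this only stops the gce-pd provisioner from picking up the claim, it does not by itself bind the installer-created claim to the GlusterFS PV:

# oc get storageclass standard -o yaml | grep is-default-class
# oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'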

Comment 5 Scott Dodson 2019-01-31 15:15:10 UTC
There appear to be no active cases related to this bug. As such we're closing this bug in order to focus on bugs that are still tied to active customer cases. Please re-open this bug if you feel it was closed in error or a new active case is attached.

