Bug 1544387 - [CNS] docker registry back-end storage stays Pending with invalid AccessModes error when glusterfs_registry_block_storageclass is the default storageclass
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jose A. Rivera
QA Contact: Wenkai Shi
URL:
Whiteboard:
Depends On:
Blocks: 1568260 1574382
 
Reported: 2018-02-12 10:58 UTC by Wenkai Shi
Modified: 2018-05-03 07:37 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-28 14:28:16 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2018:0489 - 2018-03-28 14:28:54 UTC

Description Wenkai Shi 2018-02-12 10:58:27 UTC
Description of problem:
Docker registry back-end storage stays Pending when glusterfs_registry_block_storageclass is set as the default storageclass

Version-Release number of the following components:
openshift-ansible-3.9.0-0.42.0.git.0.1a9a61b.el7

How reproducible:
100%

Steps to Reproduce:
1. Install CNS with openshift_storage_glusterfs_registry_block_storageclass enabled and a glusterfs_registry group
# cat hosts
[OSEv3:children]
masters
nodes
etcd
glusterfs_registry
[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=true
openshift_storageclass_default=false
...
[glusterfs_registry]
...
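For context, the registry's persistent volume claim requests the ReadWriteMany access mode, which the glusterblock provisioner cannot satisfy. A minimal sketch of an equivalent claim follows (illustrative only; this is not the installer's actual manifest, and the size is an assumption):

```yaml
# Illustrative sketch: mirrors the access mode the registry claim requests.
# With glusterfs-registry-block as the default StorageClass, this claim is
# routed to the gluster.org/glusterblock provisioner, which supports only
# ReadWriteOnce and ReadOnlyMany, so provisioning fails and the claim
# stays Pending.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany   # glusterblock rejects this mode
  resources:
    requests:
      storage: 5Gi    # assumed size for illustration
```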

Actual results:
# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
...
TASK [openshift_hosted : Poll for OpenShift pod deployment success] ************
Monday 12 February 2018  10:01:33 +0000 (0:00:00.845)       0:31:48.534 ******* 
failed: [qe-weshi-cns.qe.rhcloud.com] (item=[{u'namespace': u'default', u'name': u'docker-registry'}, {'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'oc', u'get', u'deploymentconfig', u'docker-registry', u'--namespace', u'default', u'--config', u'/etc/origin/master/admin.kubeconfig', u'-o', u'jsonpath={ .status.latestVersion }'], u'end': u'2018-02-12 05:01:33.154816', '_ansible_no_log': False, u'stdout': u'1', '_ansible_item_result': True, u'changed': True, 'item': {u'namespace': u'default', u'name': u'docker-registry'}, u'delta': u'0:00:00.431042', u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u"oc get deploymentconfig docker-registry --namespace default --config /etc/origin/master/admin.kubeconfig -o jsonpath='{ .status.latestVersion }'", u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'1'], u'start': u'2018-02-12 05:01:32.723774', '_ansible_ignore_errors': None, 'failed': False}]) => {"attempts": 1, "changed": true, "cmd": ["oc", "get", "replicationcontroller", "docker-registry-1", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .metadata.annotations.openshift\\.io/deployment\\.phase }"], "delta": "0:00:00.354470", "end": "2018-02-12 05:01:33.959317", "failed_when_result": true, "item": [{"name": "docker-registry", "namespace": "default"}, {"_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["oc", "get", "deploymentconfig", "docker-registry", "--namespace", "default", "--config", "/etc/origin/master/admin.kubeconfig", "-o", "jsonpath={ .status.latestVersion }"], "delta": "0:00:00.431042", "end": "2018-02-12 05:01:33.154816", "failed": false, "invocation": {"module_args": {"_raw_params": "oc get deploymentconfig docker-registry --namespace default --config /etc/origin/master/admin.kubeconfig -o 
jsonpath='{ .status.latestVersion }'", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"name": "docker-registry", "namespace": "default"}, "rc": 0, "start": "2018-02-12 05:01:32.723774", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}], "rc": 0, "start": "2018-02-12 05:01:33.604847", "stderr": "", "stderr_lines": [], "stdout": "Failed", "stdout_lines": ["Failed"]}
...

Expected results:
Installation success.

Additional info:

# oc get po 
NAME                                           READY     STATUS    RESTARTS   AGE
docker-registry-1-deploy                       0/1       Error     0          35m
glusterblock-registry-provisioner-dc-1-6zt4k   1/1       Running   0          36m
glusterfs-registry-bmjsf                       1/1       Running   0          43m
glusterfs-registry-fg74w                       1/1       Running   0          43m
glusterfs-registry-rjdnx                       1/1       Running   0          43m
heketi-registry-1-md4wf                        1/1       Running   0          38m
router-1-v8rbg                                 1/1       Running   0          35m

# oc get pvc 
NAME             STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS               AGE
registry-claim   Pending                                       glusterfs-registry-block   35m

# oc describe pvc registry-claim
Name:          registry-claim
Namespace:     default
StorageClass:  glusterfs-registry-block
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"16a987d9-0fda-11e8-a788-0a580a800007","leaseDurationSeconds":15,"acquireTime":"2018-02-12T09:50:25Z","renewTime":"2018-02-12T10:07:14Z","lea...
               volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
Finalizers:    []
Capacity:      
Access Modes:  
Events:
  Type     Reason                Age                  From                                                           Message
  ----     ------                ----                 ----                                                           -------
  Normal   Provisioning          18m (x15 over 35m)   gluster.org/glusterblock 16a987d9-0fda-11e8-a788-0a580a800007  External provisioner is provisioning volume for claim "default/registry-claim"
  Warning  ProvisioningFailed    18m (x15 over 35m)   gluster.org/glusterblock 16a987d9-0fda-11e8-a788-0a580a800007  Failed to provision volume with StorageClass "glusterfs-registry-block": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
  Normal   ExternalProvisioning  32s (x377 over 35m)  persistentvolume-controller                                    waiting for a volume to be created, either by external provisioner "gluster.org/glusterblock" or manually created by system administrator
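The ProvisioningFailed event above is the glusterblock provisioner rejecting the claim's ReadWriteMany mode. The validation can be sketched in shell (an illustrative reimplementation of the check, not the provisioner's actual code):

```shell
# Illustrative sketch of the access-mode check behind the
# ProvisioningFailed event: glusterblock accepts only
# ReadWriteOnce and ReadOnlyMany.
supported="ReadWriteOnce ReadOnlyMany"
requested="ReadWriteMany"

msg=""
for mode in $requested; do
  case " $supported " in
    *" $mode "*)
      # The requested mode is in the supported list.
      msg="accepted: $mode"
      ;;
    *)
      # Not supported: reproduce the event's error message shape.
      msg="invalid AccessModes [$mode]: only AccessModes [$supported] are supported"
      ;;
  esac
  echo "$msg"
done
```

A claim requesting ReadWriteOnce or ReadOnlyMany would pass this check; the registry claim stays Pending because it keeps requesting ReadWriteMany from a StorageClass that cannot provide it.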

Comment 1 Jose A. Rivera 2018-02-12 14:59:36 UTC
PR to resolve this has been started:

https://github.com/openshift/openshift-ansible/pull/7106

Comment 2 Jose A. Rivera 2018-02-13 17:40:50 UTC
PR is merged.

Comment 4 Wenkai Shi 2018-02-24 02:55:11 UTC
Will verify this once BZ #1547229 is fixed.

Comment 5 Wenkai Shi 2018-02-28 08:09:51 UTC
Verified with version openshift-ansible-3.9.1-1.git.0.9862628.el7; the installation succeeds with the above parameters.

Comment 8 errata-xmlrpc 2018-03-28 14:28:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

