Description of problem:
Installing CRS as the docker registry storage backend fails because the namespace "glusterfs" is not found.

Version-Release number of the following components:
openshift-ansible-3.6.172.0.1-1.git.0.5bd2286.el7

How reproducible:
100%

Steps to Reproduce:
1. Install OCP with CRS

# cat hosts
[OSEv3:children]
masters
nodes
glusterfs

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=false
openshift_storage_glusterfs_heketi_url=glusterfs-1.example.com
openshift_storage_glusterfs_heketi_admin_key=redhat

[masters]
master-1.example.com

[nodes]
master-1.example.com
node-1.example.com

[glusterfs]
glusterfs-1.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-2.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-3.example.com glusterfs_devices="['/dev/vsda']"

2.
3.

Actual results:
# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : Create heketi secret] **********************
Friday 28 July 2017  08:32:37 +0000 (0:00:00.050)       0:10:34.981 ***********
fatal: [master-1.example.com]: FAILED! => {
    "changed": false,
    "failed": true
}

MSG:

{u'returncode': 1, u'cmd': u'/usr/bin/oc secrets new heketi-storage-admin-secret --type=kubernetes.io/glusterfs --confirm key=/tmp/key-628Tqk -n glusterfs', u'results': {}, u'stderr': u'Error from server (NotFound): namespaces "glusterfs" not found\n', u'stdout': u''}
...

Expected results:
The "glusterfs" namespace should be created even in the CRS (external GlusterFS) case.

Additional info:
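Until the installer creates the namespace itself, a possible manual workaround (a sketch only, assuming cluster-admin credentials on the master; this is not the official fix) is to pre-create the target project before running the playbook:

```shell
# Workaround sketch: pre-create the project the installer expects
oc adm new-project glusterfs

# Then re-run the installer
ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
```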
This blocks all CRS-related testing.
PR upstream: https://github.com/openshift/openshift-ansible/pull/4927
Failed to verify in openshift-ansible-3.6.172.0.3-1.git.0.8753f3b.el7.

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : set_fact] ************************************************************************************************************************************************
fatal: [master-1.example.com]: FAILED! => {
    "failed": true
}

MSG:

the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ 'glusterfs' | quote if glusterfs_is_native or glusterfs_heketi_is_native else 'default' | quote }}: 'glusterfs_is_native' is undefined

The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

---
- set_fact:
  ^ here
...

Failure summary:

  1. Host:     master-1.example.com
     Play:     Configure GlusterFS
     Task:     openshift_storage_glusterfs : set_fact
     Message:  the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ 'glusterfs' | quote if glusterfs_is_native or glusterfs_heketi_is_native else 'default' | quote }}: 'glusterfs_is_native' is undefined

               The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.

               The offending line appears to be:

               ---
               - set_fact:
                 ^ here
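This is the usual Ansible pattern of a template referencing a role variable before the role's defaults are loaded. Purely as an illustrative sketch (this is not the merged upstream fix, and the fact name is taken from the error message above), the expression could guard the flags with the default filter so it evaluates even when the variables are undefined:

```yaml
# Sketch only: guard the native-deployment flags with default() so the
# expression no longer fails on undefined variables in the CRS case.
- set_fact:
    glusterfs_namespace: "{{ 'glusterfs' if ((glusterfs_is_native | default(false)) or (glusterfs_heketi_is_native | default(false))) else 'default' }}"
```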
New PR upstream: https://github.com/openshift/openshift-ansible/pull/4953
Merged upstream.
Failed to verify with version openshift-ansible-3.6.173.0.1-1.git.0.71e81fa.el7; it still hits the same problem:

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : set_fact] **********************************
...
    "glusterfs_namespace": "glusterfs",
...
TASK [openshift_storage_glusterfs : Create heketi secret] **********************
Tuesday 01 August 2017  07:27:04 +0000 (0:00:00.053)       0:09:49.337 ********
fatal: [master-1.example.com]: FAILED! => {
    "changed": false,
    "failed": true
}

MSG:

{u'returncode': 1, u'cmd': u'/usr/bin/oc secrets new heketi-storage-admin-secret --type=kubernetes.io/glusterfs --confirm key=/tmp/key-2G8vaA -n glusterfs', u'results': {}, u'stderr': u'Error from server (NotFound): namespaces "glusterfs" not found\n', u'stdout': u''}
...
New PR upstream: https://github.com/openshift/openshift-ansible/pull/4962
Also, I just noticed this: you're hitting the latest bug because you're using the [glusterfs] group instead of [glusterfs_registry], and by default that tries to create a StorageClass for general app use. This is not recommended. Either change the group name to [glusterfs_registry] or specify openshift_storage_glusterfs_storageclass=False.
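For a registry-only external GlusterFS setup, an inventory sketch along these lines (host names reused from the reproduction steps in this bug; the full variable set may differ) would follow the recommendation:

```ini
[OSEv3:children]
masters
nodes
glusterfs_registry

[OSEv3:vars]
openshift_hosted_registry_storage_kind=glusterfs
; Alternatively, keep the [glusterfs] group but disable the StorageClass:
; openshift_storage_glusterfs_storageclass=False

[glusterfs_registry]
glusterfs-1.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-2.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-3.example.com glusterfs_devices="['/dev/vsda']"
```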
Verified with the PR; the fix works now. Will move Status to "VERIFIED" once verified on an RPM build.
(In reply to Jose A. Rivera from comment #9)
> Also, I just noticed this: You're hitting the latest bug because you're
> using the [glusterfs] group instead of [glusterfs_registry] and by default
> that tries to create a StorageClass for general app use. This is not
> recommended. Either change the group name to [glusterfs_registry] or specify
> openshift_storage_glusterfs_storageclass=False .

Got it. According to [1], it seems we aren't meant to use an external GlusterFS as the docker registry back-end storage. Is that right?

[1] openshift-ansible/inventory/byo/hosts.byo.glusterfs.external.example
The goal is to test external GlusterFS for both regular use and as registry backend. Though comparing the two, I notice your inventory file is also missing the "glusterfs_ip" variable for its external hosts.
(In reply to Jose A. Rivera from comment #12)
> The goal is to test external GlusterFS for both regular use and as registry
> backend. Though comparing the two, I notice your inventory file is also
> missing the "glusterfs_ip" variable for its external hosts.

Got it; the current test cases cover both. As for the "glusterfs_ip" variable: in this environment the host names already resolve to the correct IPs, so we will add it if the DNS service ever has problems.
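For reference, a sketch of how "glusterfs_ip" could be set per external host (the IP addresses below are placeholders, not values from this environment):

```ini
[glusterfs]
glusterfs-1.example.com glusterfs_ip=192.168.0.11 glusterfs_devices="['/dev/vsda']"
glusterfs-2.example.com glusterfs_ip=192.168.0.12 glusterfs_devices="['/dev/vsda']"
glusterfs-3.example.com glusterfs_ip=192.168.0.13 glusterfs_devices="['/dev/vsda']"
```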
Hi Scott, could you please help merge this? Thank you.
I thought it was decided that the PR wasn't necessary. Regardless, CRS is not a 3.6.0 feature, so while I've merged the fix I don't think we should respin the release. Moving to 3.6.1.
The PR wasn't necessary to continue testing, correct, though it is required to resolve the issue they ran into. And yes, the PR was already merged. :) Agreed that this should target 3.6.1.
Verified with version openshift-ansible-3.6.173.0.5-1.git.0.74d5acc.el7. The code has been merged; the installer now creates the namespace, which avoids this failure.

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : Verify target namespace exists] ************
Friday 04 August 2017  03:50:28 +0000 (0:00:00.044)       0:09:02.893 *********
changed: [master-1.example.com] => {
    "changed": true,
    "results": {
        "cmd": "/usr/bin/oc get namespace glusterfs -o json",
        "results": {
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {
                "annotations": {
                    "openshift.io/description": "",
                    "openshift.io/display-name": "",
                    "openshift.io/sa.scc.mcs": "s0:c8,c2",
                    "openshift.io/sa.scc.supplemental-groups": "1000060000/10000",
                    "openshift.io/sa.scc.uid-range": "1000060000/10000"
                },
                "creationTimestamp": "2017-08-04T03:50:29Z",
                "name": "glusterfs",
                "resourceVersion": "1328",
                "selfLink": "/api/v1/namespaces/glusterfs",
                "uid": "0eddb60a-78c8-11e7-9735-fa163ef72e9c"
            },
            "spec": {
                "finalizers": [
                    "openshift.io/origin",
                    "kubernetes"
                ]
            },
            "status": {
                "phase": "Active"
            }
        },
        "returncode": 0
    },
    "state": "present"
}
...