Bug 1476197 - Install CRS as docker registry storage failed due to namespace "glusterfs" not found
Status: VERIFIED
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.6.z
Assigned To: Jose A. Rivera
QA Contact: Wenkai Shi
Depends On:
Blocks:

Reported: 2017-07-28 05:12 EDT by Wenkai Shi
Modified: 2017-08-04 08:00 EDT
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Wenkai Shi 2017-07-28 05:12:12 EDT
Description of problem:
Install CRS as docker registry storage failed due to namespace "glusterfs" not found.

Version-Release number of the following components:
openshift-ansible-3.6.172.0.1-1.git.0.5bd2286.el7

How reproducible:
100%

Steps to Reproduce:
1. Install OCP with CRS
# cat hosts
[OSEv3:children]
masters
nodes
glusterfs

[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=false
openshift_storage_glusterfs_heketi_url=glusterfs-1.example.com
openshift_storage_glusterfs_heketi_admin_key=redhat

[masters]
master-1.example.com

[nodes]
master-1.example.com
node-1.example.com

[glusterfs]
glusterfs-1.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-2.example.com glusterfs_devices="['/dev/vsda']"
glusterfs-3.example.com glusterfs_devices="['/dev/vsda']"


Actual results:
# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : Create heketi secret] **********************
Friday 28 July 2017  08:32:37 +0000 (0:00:00.050)       0:10:34.981 *********** 

fatal: [master-1.example.com]: FAILED! => {
    "changed": false, 
    "failed": true
}

MSG:

{u'returncode': 1, u'cmd': u'/usr/bin/oc secrets new heketi-storage-admin-secret --type=kubernetes.io/glusterfs --confirm key=/tmp/key-628Tqk -n glusterfs', u'results': {}, u'stderr': u'Error from server (NotFound): namespaces "glusterfs" not found\n', u'stdout': u''}
...

Expected results:
The "glusterfs" namespace should be created even in the CRS (external GlusterFS) case.

Additional info:
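A possible manual workaround (an assumption on my part, not something the installer does): create the missing namespace by hand before re-running the playbook, so the heketi secret task has somewhere to land. Sketch only, assuming oc is logged in with cluster-admin rights:

```
# Assumed manual workaround; not part of the installer.
oc adm new-project glusterfs    # create the namespace the task expects
oc get namespace glusterfs      # confirm it exists before re-running the playbook
```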
Comment 1 Wenkai Shi 2017-07-28 05:13:08 EDT
This blocks all CRS-related testing.
Comment 2 Jose A. Rivera 2017-07-28 09:09:29 EDT
PR is upstream: https://github.com/openshift/openshift-ansible/pull/4927
Comment 4 Wenkai Shi 2017-07-31 01:28:24 EDT
Failed to verify with openshift-ansible-3.6.172.0.3-1.git.0.8753f3b.el7.

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : set_fact] ************************************************************************************************************************************************
fatal: [master-1.example.com]: FAILED! => {
    "failed": true
}

MSG:

the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ 'glusterfs' | quote if glusterfs_is_native or glusterfs_heketi_is_native else 'default' | quote }}: 'glusterfs_is_native' is undefined

The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

---
- set_fact:
  ^ here
...
Failure summary:

  1. Host:     master-1.example.com
     Play:     Configure GlusterFS
     Task:     openshift_storage_glusterfs : set_fact
     Message:  the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ 'glusterfs' | quote if glusterfs_is_native or glusterfs_heketi_is_native else 'default' | quote }}: 'glusterfs_is_native' is undefined
               
               The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml': line 2, column 3, but may
               be elsewhere in the file depending on the exact syntax problem.
               
               The offending line appears to be:
               
               ---
               - set_fact:
                 ^ here
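For context, an undefined-variable failure like the one above is commonly avoided by guarding the variables with Jinja2's default() filter. A minimal hypothetical sketch of such a guarded task (illustrative only, not the actual change in the PR):

```yaml
# Hypothetical sketch, not the actual fix:
# default(false) keeps the expression defined when the facts are unset.
- set_fact:
    glusterfs_namespace: "{{ 'glusterfs' if (glusterfs_is_native | default(false)) or (glusterfs_heketi_is_native | default(false)) else 'default' }}"
```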
Comment 5 Jose A. Rivera 2017-07-31 09:27:48 EDT
New PR upstream: https://github.com/openshift/openshift-ansible/pull/4953
Comment 6 Jose A. Rivera 2017-07-31 10:07:13 EDT
Merged upstream.
Comment 7 Wenkai Shi 2017-08-01 04:30:55 EDT
Failed to verify with version openshift-ansible-3.6.173.0.1-1.git.0.71e81fa.el7; it seems the problem persists:

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : set_fact] **********************************
...
        "glusterfs_namespace": "glusterfs", 
...
TASK [openshift_storage_glusterfs : Create heketi secret] **********************
Tuesday 01 August 2017  07:27:04 +0000 (0:00:00.053)       0:09:49.337 ******** 
fatal: [master-1.example.com]: FAILED! => {
    "changed": false, 
    "failed": true
}

MSG:

{u'returncode': 1, u'cmd': u'/usr/bin/oc secrets new heketi-storage-admin-secret --type=kubernetes.io/glusterfs --confirm key=/tmp/key-2G8vaA -n glusterfs', u'results': {}, u'stderr': u'Error from server (NotFound): namespaces "glusterfs" not found\n', u'stdout': u''}
...
Comment 8 Jose A. Rivera 2017-08-01 11:06:48 EDT
New PR upstream: https://github.com/openshift/openshift-ansible/pull/4962
Comment 9 Jose A. Rivera 2017-08-01 11:18:23 EDT
Also, I just noticed this: You're hitting the latest bug because you're using the [glusterfs] group instead of [glusterfs_registry] and by default that tries to create a StorageClass for general app use. This is not recommended. Either change the group name to [glusterfs_registry] or specify openshift_storage_glusterfs_storageclass=False .
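A sketch of the two suggested inventory changes (illustrative only, based on the advice above; note that with the [glusterfs_registry] group the related variables may also need the _registry infix, e.g. openshift_storage_glusterfs_registry_is_native, per the example inventories):

```
# Option 1: rename the group so the hosts serve registry storage only
[OSEv3:children]
masters
nodes
glusterfs_registry

[glusterfs_registry]
glusterfs-1.example.com glusterfs_devices="['/dev/vsda']"

# Option 2: keep the [glusterfs] group but disable the general-use
# StorageClass by adding to [OSEv3:vars]:
#   openshift_storage_glusterfs_storageclass=False
```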
Comment 10 Wenkai Shi 2017-08-02 02:18:00 EDT
Verified with the PR; the fix works now. Will move status to "VERIFIED" once verified on an RPM build.
Comment 11 Wenkai Shi 2017-08-02 04:49:57 EDT
(In reply to Jose A. Rivera from comment #9)
> Also, I just noticed this: You're hitting the latest bug because you're
> using the [glusterfs] group instead of [glusterfs_registry] and by default
> that tries to create a StorageClass for general app use. This is not
> recommended. Either change the group name to [glusterfs_registry] or specify
> openshift_storage_glusterfs_storageclass=False .

Got it. According to [1], it seems we aren't meant to use an external GlusterFS as the docker registry back-end storage. Right?

[1]. openshift-ansible/inventory/byo/hosts.byo.glusterfs.external.example
Comment 12 Jose A. Rivera 2017-08-02 11:22:54 EDT
The goal is to test external GlusterFS for both regular use and as registry backend. Though comparing the two, I notice your inventory file is also missing the "glusterfs_ip" variable for its external hosts.
Comment 13 Wenkai Shi 2017-08-02 13:25:16 EDT
(In reply to Jose A. Rivera from comment #12)
> The goal is to test external GlusterFS for both regular use and as registry
> backend. Though comparing the two, I notice your inventory file is also
> missing the "glusterfs_ip" variable for its external hosts.

Got it; so far the test cases cover both scenarios.
Regarding the "glusterfs_ip" variable: in this environment DNS already resolves each hostname to the correct IP, so I will add it if the DNS service runs into problems.
Comment 14 Wenkai Shi 2017-08-03 02:05:54 EDT
Hi Scott,
Could you please help merge this? Thank you.
Comment 15 Scott Dodson 2017-08-03 08:21:44 EDT
I thought it was decided that the PR wasn't necessary. Regardless, CRS is not a 3.6.0 feature, so while I've merged the fix I don't think we should respin the release. Moving to 3.6.1.
Comment 16 Jose A. Rivera 2017-08-03 08:25:39 EDT
PR wasn't necessary to continue testing, correct, though it is required to resolve the issue they ran into. And yes, the PR was already merged. :)

Agreed that this should target 3.6.1.
Comment 18 Wenkai Shi 2017-08-04 00:22:15 EDT
Verified with version openshift-ansible-3.6.173.0.5-1.git.0.74d5acc.el7. The code has been merged; the installer now creates the namespace, avoiding this failure.

# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
...
TASK [openshift_storage_glusterfs : Verify target namespace exists] ************
Friday 04 August 2017  03:50:28 +0000 (0:00:00.044)       0:09:02.893 ********* 

changed: [master-1.example.com] => {
    "changed": true, 
    "results": {
        "cmd": "/usr/bin/oc get namespace glusterfs -o json", 
        "results": {
            "apiVersion": "v1", 
            "kind": "Namespace", 
            "metadata": {
                "annotations": {
                    "openshift.io/description": "", 
                    "openshift.io/display-name": "", 
                    "openshift.io/sa.scc.mcs": "s0:c8,c2", 
                    "openshift.io/sa.scc.supplemental-groups": "1000060000/10000", 
                    "openshift.io/sa.scc.uid-range": "1000060000/10000"
                }, 
                "creationTimestamp": "2017-08-04T03:50:29Z", 
                "name": "glusterfs", 
                "resourceVersion": "1328", 
                "selfLink": "/api/v1/namespaces/glusterfs", 
                "uid": "0eddb60a-78c8-11e7-9735-fa163ef72e9c"
            }, 
            "spec": {
                "finalizers": [
                    "openshift.io/origin", 
                    "kubernetes"
                ]
            }, 
            "status": {
                "phase": "Active"
            }
        }, 
        "returncode": 0
    }, 
    "state": "present"
}
...
