Bug 1479832 - documentation on how to add aggregated logging to a cluster is wrong
Summary: documentation on how to add aggregated logging to a cluster is wrong
Keywords:
Status: CLOSED EOL
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vikram Goyal
QA Contact: Vikram Goyal
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-08-09 13:44 UTC by Joel Rosental R.
Modified: 2020-12-14 09:26 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-10 06:45:48 UTC
Target Upstream Version:
Embargoed:



Description Joel Rosental R. 2017-08-09 13:44:43 UTC
Description of problem:

Due to a recent restructuring of the Ansible playbooks, the PVC detection logic has changed and no longer detects or uses pre-defined PVCs for Elasticsearch deployments.
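For context, the 3.5 aggregate logging documentation describes sizing Elasticsearch storage through `openshift_logging_*` inventory variables. The fragment below is illustrative only (variable names taken from the linked docs page, values matched to this reproduction); with pre-created PVCs named `logging-es-1`..`logging-es-3`, the installer is expected to reuse them rather than fall back to emptyDir:

```ini
# Illustrative inventory fragment (not from this bug report).
# Variable names are from the 3.5 aggregate logging docs; with
# pre-created PVCs matching the prefix, the installer should
# attach them instead of generating emptyDir storage.
[OSEv3:vars]
openshift_logging_install_logging=true
openshift_logging_es_cluster_size=3
openshift_logging_es_pvc_prefix=logging-es
openshift_logging_es_pvc_size=20Gi
```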


Version-Release number of selected component (if applicable):
OCP 3.5

How reproducible:
Always

Steps to Reproduce:
1. $ oc get pvc --show-labels
NAME           STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE       LABELS
logging-es-1   Bound     logging-es-1   20Gi       RWO           23h       logging-infra=elasticsearch
logging-es-2   Bound     logging-es-2   20Gi       RWO           23h       logging-infra=elasticsearch
logging-es-3   Bound     logging-es-3   20Gi       RWO           23h       logging-infra=elasticsearch
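The pre-created claims listed above have roughly the following shape. This is a sketch reconstructed from the listing; only the name, capacity, access mode, and label come from the output, everything else is the standard PVC boilerplate:

```yaml
# Sketch of one of the pre-created claims shown above
# (reconstructed from the `oc get pvc --show-labels` output).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-1
  labels:
    logging-infra: elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```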


2. $ oc get dc/logging-es-ac5954m6 -o yaml
...
      volumes:
      - name: elasticsearch
        secret:
          defaultMode: 420
          secretName: logging-elasticsearch
      - configMap:
          defaultMode: 420
          name: logging-elasticsearch
        name: elasticsearch-config
      - emptyDir: {}
        name: elasticsearch-storage
...
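For comparison, if the claim had been picked up, the `elasticsearch-storage` volume would reference it instead of an emptyDir. A hypothetical expected shape (claim name taken from the PVC listing above):

```yaml
      # Hypothetical expected volume entry when the PVC is detected:
      - name: elasticsearch-storage
        persistentVolumeClaim:
          claimName: logging-es-1
```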

3. $ ansible-playbook -vv -i hosts -e openshift_logging_install_logging=True openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

TASK [openshift_logging : Gather OpenShift Logging Facts] **********************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_logging.yaml:2
ok: [master01.example.com] => {
    "ansible_facts": {
        "openshift_logging_facts": {
            ...
            "elasticsearch": {
                "clusterrolebindings": {},
                "configmaps": {},
                "daemonsets": {},
                "deploymentconfigs": {},
                "oauthclients": {},
                "pvcs": {
                    "logging-es-1": {},
                    "logging-es-2": {},
                    "logging-es-3": {}
                },
                "rolebindings": {},
                "routes": {},
                "sccs": {},
                "secrets": {
                    "logging-elasticsearch": {
                        "keys": [
                            "searchguard.truststore",
                            "admin-cert",
                            "admin.jks",
                            "searchguard.key",
                            "admin-ca",
                            "key",
                            "truststore",
                            "admin-key"
                        ]
                    }
                },
                "services": {}
            },
            ...
        }
    },
    "changed": false
}

TASK [openshift_logging : Getting current ES deployment size] ******************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_elasticsearch.yaml:2
ok: [master01.example.com] => {
    "ansible_facts": {
        "openshift_logging_current_es_size": "0"
    },
    "changed": false
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_elasticsearch.yaml:5
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_elasticsearch.yaml:8

TASK [openshift_logging : include] *********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_elasticsearch.yaml:13

TASK [openshift_logging : include] *********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/install_elasticsearch.yaml:37
included: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml for master01.example.com
included: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml for master01.example.com
included: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml for master01.example.com

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:2
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:5
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:10
ok: [master01.example.com] => {
    "ansible_facts": {
        "es_storage_claim": ""
    },
    "changed": false
}
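Note that `es_storage_claim` is set to an empty string even though the facts-gathering task found three PVCs. A minimal Python sketch of what the claim-selection step would need to do (this is a hypothetical illustration, not the actual openshift-ansible code): match the gathered PVC names against the configured prefix and fall back to `""` only when nothing matches.

```python
# Hypothetical sketch of prefix-based claim selection (NOT the
# actual openshift-ansible logic): pick pre-created PVCs whose
# names start with the configured prefix.
def select_es_claims(pvcs, prefix="logging-es"):
    """Return PVC names matching the prefix, sorted for stable assignment."""
    return sorted(name for name in pvcs if name.startswith(prefix + "-"))

# PVC names as gathered by the "Gather OpenShift Logging Facts" task above.
facts_pvcs = {"logging-es-1": {}, "logging-es-2": {}, "logging-es-3": {}}
claims = select_es_claims(facts_pvcs)

# One claim per ES deployment; an empty list would reproduce the
# es_storage_claim == "" behaviour seen in this run.
storage_claim = claims[0] if claims else ""
```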

TASK [openshift_logging : oc_obj] **********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:17
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:28
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : Generating PersistentVolumeClaims] *******************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:36
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : Generating PersistentVolumeClaims - Dynamic] *********
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:47
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : set_fact] ********************************************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:60
skipping: [master01.example.com] => {
    "changed": false,
    "skip_reason": "Conditional check failed",
    "skipped": true
}

TASK [openshift_logging : Generate Elasticsearch DeploymentConfig] *************
task path: /home/percar/openshift-ansible/roles/openshift_logging/tasks/set_es_storage.yaml:66
ok: [master01.example.com] => {
    "changed": false,
    "checksum": "893afaa4b50b4e1ac3527aa23132479826f01161",
    "dest": "/tmp/openshift-logging-ansible-MtwByC/templates/logging-logging-es-j7o5b79s-dc.yaml",
    "gid": 0,
    "group": "root",
    "md5sum": "07bc5f3b1c3bd36a4e9aa743190d1b5f",
    "mode": "0644",
    "owner": "root",
    "secontext": "unconfined_u:object_r:user_home_t:s0",
    "size": 2698,
    "src": "/home/percar/.ansible/tmp/ansible-tmp-1501230941.69-139846461227249/source",
    "state": "file",
    "uid": 0
}

4. Resulting dc:

 $ oc get dc
NAME                  REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-curator       1          1         1         config
logging-es-7dewf34t   1          1         1         config
logging-es-lt7xweky   1          1         1         config
logging-es-x5bbetaa   1          1         1         config
logging-kibana        1          1         1         config

$ oc describe dc logging-es-7dewf34t
Name:           logging-es-7dewf34t
Namespace:      logging
Created:        25 minutes ago
Labels:         component=es
                deployment=logging-es-7dewf34t
                logging-infra=elasticsearch
                provider=openshift
Annotations:    <none>
Latest Version: 1
Selector:       component=es,deployment=logging-es-7dewf34t,logging-infra=elasticsearch,provider=openshift
Replicas:       1
Triggers:       Config
Strategy:       Recreate
Template:
  Labels:               component=es
                        deployment=logging-es-7dewf34t
                        logging-infra=elasticsearch
                        provider=openshift
  Service Account:      aggregated-logging-elasticsearch
  Containers:
   elasticsearch:
    Image:      openshift3/logging-elasticsearch:v3.5
    Ports:      9200/TCP, 9300/TCP
    Limits:
      memory:   8Gi
    Requests:
      memory:   512Mi
    Volume Mounts:
      /elasticsearch/persistent from elasticsearch-storage (rw)
      /etc/elasticsearch/secret from elasticsearch (ro)
      /usr/share/java/elasticsearch/config from elasticsearch-config (ro)
    Environment Variables:
      NAMESPACE:                 (v1:metadata.namespace)
      KUBERNETES_TRUST_CERT:    true
      SERVICE_DNS:              logging-es-cluster
      CLUSTER_NAME:             logging-es
      INSTANCE_RAM:             8Gi
      NODE_QUORUM:              2
      RECOVER_AFTER_NODES:      2
      RECOVER_EXPECTED_NODES:   3
      RECOVER_AFTER_TIME:       5m
  Volumes:
   elasticsearch:
    Type:       Secret (a volume populated by a Secret)
    SecretName: logging-elasticsearch
   elasticsearch-config:
    Type:       ConfigMap (a volume populated by a ConfigMap)
    Name:       logging-elasticsearch
   elasticsearch-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:

Deployment #1 (latest):
        Name:           logging-es-7dewf34t-1
        Created:        25 minutes ago
        Status:         Complete
        Replicas:       1 current / 1 desired
        Selector:       component=es,deployment=logging-es-7dewf34t-1,deploymentconfig=logging-es-7dewf34t,logging-infra=elasticsearch,provider=openshift
        Labels:         component=es,deployment=logging-es-7dewf34t,logging-infra=elasticsearch,openshift.io/deployment-config.name=logging-es-7dewf34t,provider=openshift
        Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed

Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                          Message
  ---------     --------        -----   ----                            -------------   --------        ------                          -------
  25m           25m             1       {deploymentconfig-controller }                  Normal          DeploymentCreated               Created new replication controller "logging-es-7dewf34t-1" for version 1
  25m           25m             1       {deploymentconfig-controller }                  Normal          ReplicationControllerScaled     Scaled replication controller "logging-es-7dewf34t-1" from 0 to 1

Actual results:
The Ansible playbook does not pick up the pre-defined PVCs; the Elasticsearch DeploymentConfigs are generated with emptyDir storage instead.

Expected results:
The pre-defined PVCs should be detected and mounted by the Elasticsearch DeploymentConfigs.

Additional info:

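As a possible manual workaround until the playbook logic is fixed, each pre-created PVC can be attached to its Elasticsearch dc by hand. This is a sketch, not from the bug report; the dc and claim names below are taken from the reproduction above and must be adjusted per cluster:

```
# Manually attach one pre-created PVC per ES dc (sketch; repeat
# for each logging-es-* dc with its own claim). Triggers a new
# deployment of the dc.
oc set volume dc/logging-es-7dewf34t --add --overwrite \
  --name=elasticsearch-storage --type=persistentVolumeClaim \
  --claim-name=logging-es-1
```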

Document URL: 
https://docs.openshift.com/container-platform/3.5/install_config/aggregate_logging.html

Additional information: 
https://github.com/openshift/openshift-ansible/issues/4886

