Description of problem:

Cluster is on AWS EC2 and the AWS cloud provider is configured correctly. The logging deployer creates a pvc with an annotation of:

  volume.beta.kubernetes.io/storage-class: dynamic

After the deployer completes, the pvc is not bound and no pv is created. The events show:

  4s  24m  98  logging-es-0  PersistentVolumeClaim  Warning  ProvisioningFailed  persistent-volume-controller  storageclass.storage.k8s.io "dynamic" not found

The only storageclass that exists is:

  root@ip-172-31-36-76: ~ # oc get storageclass
  NAME            TYPE
  gp2 (default)   kubernetes.io/aws-ebs

Removing the annotation and allowing the pvc to use the default storage class allows the pv to be created dynamically, and the pvc binds to it. Without this, the pvc stays stuck in Pending and elasticsearch never starts.

Version-Release number of selected component (if applicable):
3.6.116 and 3.6.121 using openshift-ansible HEAD 62fcd88038910c52796f0e5b37e1e0d8019b80cf

How reproducible:
Always

Steps to Reproduce:
1. Create an inventory file for logging (see below)
2. Run the playbooks/byo/openshift-cluster/openshift-logging.yaml playbook
3. oc get pvc when it completes

Actual results:
pvc is not bound.
elasticsearch pod is not started

Expected results:
pvc is bound and elasticsearch starts ok

Additional info:

[oo_first_master]
ip-172-31-36-76

[oo_first_master:vars]
openshift_deployment_type=openshift-enterprise
openshift_release=v3.6.0
openshift_logging_install_logging=true
openshift_logging_use_ops=false
openshift_logging_master_url=https://ec2-34-223-225-62.us-west-2.compute.amazonaws.com:8443
openshift_logging_master_public_url=https://ec2-34-223-225-62.us-west-2.compute.amazonaws.com:8443
openshift_logging_kibana_hostname=kibana.0620-8i0.qe.rhcloud.com
openshift_logging_namespace=logging
openshift_logging_image_prefix=registry.ops.openshift.com/openshift3/
openshift_logging_image_version=v3.6.121
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=50Gi
openshift_logging_fluentd_use_journal=true
#openshift_logging_fluentd_journal_read_from_head=false
openshift_logging_use_mux=true
openshift_logging_use_mux_client=true
openshift_logging_es_pv_selector=None
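To illustrate the failure mode, here is a minimal sketch of the claim the deployer generates (name and size taken from the events and inventory above; the exact generated manifest may differ):

```yaml
# Sketch of the generated claim. The annotation references a storage class
# named "dynamic", which does not exist on this cluster (only "gp2" does),
# so the persistent-volume-controller fails provisioning and the claim
# stays Pending.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0
  namespace: logging
  annotations:
    volume.beta.kubernetes.io/storage-class: dynamic
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```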
This is a "standard" openshift-ansible byo/config.yml install with a properly configured AWS cloud provider.
Hemant, how should we be making use of https://github.com/openshift/openshift-ansible/pull/4262 to use the default storage class on a given environment? Should we actually be naming those storage classes 'default' rather than 'aws' and 'gcp', and then configure things like logging and metrics to use 'default'?
@Scott - can we not dynamically infer the storageClass name, given that the default class name will be `gp2` on AWS and `standard` on GCE? There was some discussion upstream and we chose not to name default classes `default` because it looks weird when you list them:

  ~> oc get storageclasses
  default (default)   xxxxx

Alternately, things like metrics and logging can skip the storageclass annotation in their definition, which in turn will cause them to bind to the default storageclass automatically (as Mike found out).
I have performed a small refactor of this section of the code and removed the annotation. With the annotation removed, the pvc will use the default storage class. If you pass the dynamic option as false, it instead sets storageClassName to "", which in turn disables dynamic provisioning for that pvc.
This time with github link: https://github.com/openshift/openshift-ansible/pull/4532
Not fixed in openshift-ansible.noarch 3.6.126-1.git.0.58d33f0.el7. Moving back to POST until available in a puddle.
in openshift-ansible-3.6.123.1002-1.git.0.506cfa7.el7
verified on 3.6.126.1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1716