Description of problem:
In OpenStack, PVCs for metrics and logging stay in a Pending state rather than being requested and created.

Version-Release number of selected component (if applicable):
OpenShift Master: v3.6.173.0.5
Kubernetes Master: v1.6.1+5115d708d7

How reproducible:
Deploy a cluster with the following inventory:

[OSEv3:children]
masters
etcd
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=cloud-user
deployment_type=openshift-enterprise
debug_level=2
ansible_become=true
console_port=8443
openshift_debug_level="{{ debug_level }}"
openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
openshift_master_debug_level="{{ master_debug_level | default(debug_level, true) }}"
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_access_token_max_seconds=2419201
osm_cluster_network_cidr=172.45.0.0/16
openshift_registry_selector="role=infra"
openshift_router_selector="role=infra"
openshift_hosted_router_replicas=1
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
#openshift_node_local_quota_per_fsgroup=512Mi
openshift_cloudprovider_kind=openstack
openshift_cloudprovider_openstack_auth_url=http://10.19.114.177:5000/v2.0
openshift_cloudprovider_openstack_username=nope
openshift_cloudprovider_openstack_password=nope
openshift_cloudprovider_openstack_tenant_name=openshift-tenant
openshift_master_cluster_hostname=openshift.refarch.roger.com
openshift_master_cluster_public_hostname=openshift.refarch.roger.com
openshift_master_default_subdomain=apps.refarch.roger.com
osm_default_node_selector="role=app"
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
osm_use_cockpit=true
containerized=false

# registry
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3b5c9517-abd1-4364-80b9-a1469cae8fd8
openshift_hosted_registry_storage_volume_size=15Gi

# metrics
openshift_hosted_metrics_deploy=true
openshift_hosted_metrics_storage_kind=dynamic
openshift_hosted_metrics_storage_volume_size=10Gi
openshift_metrics_hawkular_nodeselector={"role":"infra"}
openshift_metrics_cassandra_nodeselector={"role":"infra"}
openshift_metrics_heapster_nodeselector={"role":"infra"}

# logging
openshift_hosted_logging_deploy=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_cluster_size=3
openshift_logging_es_nodeselector={"role":"infra"}
openshift_logging_kibana_nodeselector={"role":"infra"}
openshift_logging_curator_nodeselector={"role":"infra"}

# host group for masters
[masters]
master-0.refarch.roger.com
master-1.refarch.roger.com
master-2.refarch.roger.com

# host group for etcd
[etcd]
master-0.refarch.roger.com
master-1.refarch.roger.com
master-2.refarch.roger.com

# host group for nodes, includes region info
[nodes]
master-0.refarch.roger.com openshift_node_labels="{'role': 'master'}" openshift_hostname=master-0.refarch.roger.com
master-1.refarch.roger.com openshift_node_labels="{'role': 'master'}" openshift_hostname=master-1.refarch.roger.com
master-2.refarch.roger.com openshift_node_labels="{'role': 'master'}" openshift_hostname=master-2.refarch.roger.com
app-node-0.refarch.roger.com openshift_node_labels="{'role': 'app'}" openshift_hostname=app-node-0.refarch.roger.com
app-node-1.refarch.roger.com openshift_node_labels="{'role': 'app'}" openshift_hostname=app-node-1.refarch.roger.com
app-node-2.refarch.roger.com openshift_node_labels="{'role': 'app'}" openshift_hostname=app-node-2.refarch.roger.com
infra-node-0.refarch.roger.com openshift_node_labels="{'role': 'infra'}" openshift_hostname=infra-node-0.refarch.roger.com
infra-node-1.refarch.roger.com openshift_node_labels="{'role': 'infra'}" openshift_hostname=infra-node-1.refarch.roger.com
infra-node-2.refarch.roger.com openshift_node_labels="{'role': 'infra'}" openshift_hostname=infra-node-2.refarch.roger.com

Steps to Reproduce:
1. Deploy a cluster with metrics and logging enabled and storage set to dynamic
2. oc project logging
3. oc get pvc

Actual results:
NAME           STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
logging-es-0   Pending                                                     21m
logging-es-1   Pending                                                     20m
logging-es-2   Pending                                                     20m

Expected results:
A storage class is configured, storage is requested from that storage class, and the PVCs are no longer in a Pending state, e.g.:

[root@master-2 ~]# oc project test
Now using project "test" on server "https://openshift.refarch.roger.com:8443".
[root@master-2 ~]# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
test      Bound     pvc-1d93fce1-8c2d-11e7-aba7-fa163e2f468f   1Gi        RWO           standard       4m

Additional info:
Matt, can you point us towards documentation on how we'd create the default storage class for openstack, if it's possible to do so in a general manner?
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: <name>
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
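As a sketch of making such a class the cluster default (the class name and availability zone here are assumptions to adapt; the annotation is the beta default-class marker used by Kubernetes 1.6):

```yaml
# Hypothetical example: a Cinder StorageClass marked as the cluster default.
# The annotation makes PVCs that omit a storage class use this one;
# "nova" is a common availability zone, adjust for your cloud.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
```

It could then be created with `oc create -f storageclass.yaml` and checked with `oc get storageclass`, where the default class is flagged in the output.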
It seems like the installer should just create a default storage class whenever we're on a provider for which we know how to do so. Moving this to the installer component.
Yes, there is an openshift_default_storage_class role; it only has gce and aws StorageClasses at the moment, and an openstack one could be added to it:
https://github.com/openshift/openshift-ansible/blob/7132c4f3e17df1c7056f2a745811a7c463cfe5f7/roles/openshift_default_storage_class/README.md
https://github.com/openshift/openshift-ansible/blob/3409e6db205b6b24914e16c62972de50071f4051/playbooks/common/openshift-cluster/openshift_hosted.yml#L29
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#openstack-cinder
https://github.com/openshift/openshift-ansible/pull/5722 added a default storage provider for openstack in openshift-ansible-3.7.0-0.178.2, which should fix this bug for openstack.
Verified with openshift-ansible-3.7.26