Description of problem:

The Kuryr controller currently stores OpenStack credentials in plain text in the ConfigMap kuryr-config in the openshift-kuryr namespace. The customer has asked that the OpenStack credentials be stored in a Secret on the OpenShift cluster instead.

Version-Release number of selected component (if applicable):

OpenShift 4.6.21

How reproducible:

Always. Deploy OpenShift 4.6.21 on OSP 13 with Kuryr as the SDN on OpenShift. This is the current default behaviour when deploying with Kuryr.

Steps to Reproduce:
1. Log into the OpenShift cluster.
2. Display the contents of the Kuryr controller configuration:

   oc get cm -n openshift-kuryr kuryr-config -o yaml

3. The relevant section is in the data field of the ConfigMap, under the [neutron] category, e.g.
   https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/kuryr/003-config.yaml#L73-L74

Actual results:

[neutron]
auth_type = password
username = <redacted>
password = <redacted>

Expected results:

Password not stored in plain text in the ConfigMap.

Additional info:
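The reproduction check above can be sketched as a quick scan of the dumped ConfigMap for credential keys. This is only an illustration: the sample kuryr-config.yaml content below is hypothetical (on a live cluster you would dump it with `oc get cm -n openshift-kuryr kuryr-config -o yaml` instead), and the username/password values are placeholders.

```shell
# Hypothetical sample standing in for:
#   oc get cm -n openshift-kuryr kuryr-config -o yaml > kuryr-config.yaml
cat > kuryr-config.yaml <<'EOF'
apiVersion: v1
data:
  kuryr.conf: |
    [neutron]
    auth_type = password
    username = kuryr-user
    password = not-so-secret
EOF

# Flag any credential keys that appear in plain text in the ConfigMap dump.
if grep -Eq '^[[:space:]]*(username|password)[[:space:]]*=' kuryr-config.yaml; then
  echo "WARNING: plaintext credentials present in ConfigMap"
fi
```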
Verified on 4.9.0-0.nightly-2021-06-28-221420 over OSP 16.1 (RHOS-16.1-RHEL-8-20210323.n.0) with a UPI installation.

Cluster successfully installed:

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-06-28-221420   True        False         14m     Cluster version is 4.9.0-0.nightly-2021-06-28-221420

The [neutron] section is no longer in the ConfigMap:

$ oc get cm -n openshift-kuryr kuryr-config -o yaml
apiVersion: v1
data:
  kuryr.conf: |
    [DEFAULT]
    debug = false

    [binding]
    default_driver = kuryr.lib.binding.drivers.vlan

    [cni_daemon]
    daemon_enabled = true
    docker_mode = true
    netns_proc_dir = /host_proc
    vif_annotation_timeout = 500

    [ingress]
    #l7_router_uuid = <None>

    [kubernetes]
    api_root = ""
    ssl_ca_crt_file = /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    token_file = /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl_verify_server_crt = true
    controller_ha = false
    controller_ha_elector_port = 16401
    watch_retry_timeout = 3600
    pod_vif_driver = nested-vlan
    vif_pool_driver = nested
    multi_vif_drivers = noop
    enabled_handlers = vif,kuryrport,service,endpoints,kuryrloadbalancer,policy,pod_label,namespace,openshift_machine,kuryrnetworkpolicy,kuryrnetwork
    pod_security_groups_driver = policy
    service_security_groups_driver = policy
    pod_subnets_driver = namespace
    nodes_subnets_driver = openshift
    endpoints_driver_octavia_provider = ovn

    [pod_vif_nested]
    worker_nodes_subnets = 93bfcd3a-0928-46d1-9e65-c6bc4a219340

    [octavia_defaults]
    member_mode = L2
    sg_mode = create
    enforce_sg_rules = false
    lb_algorithm = SOURCE_IP_PORT

    [namespace_subnet]
    pod_router = f7884a08-4d8b-411a-b00c-56416ae85948
    pod_subnet_pool = 325c093c-c754-4d79-8eda-0e7c3c4031f8

    [neutron_defaults]
    service_subnet = dc5f6e4b-50b5-4ce5-a807-c2d34a8fdd14
    project = 3210dadc4c0e41f1bf8dacd64753ee33
    pod_security_groups = fc8af316-6fc4-44cc-a99c-5ab8a4070cac
    resource_tags = openshiftClusterID=ostest-qcfxf
    external_svc_net = b55d1e5d-b2a9-4e75-ac60-521c583739ec
    network_device_mtu = 1442

    [vif_pool]
    ports_pool_max = 0
    ports_pool_min = 1
    ports_pool_batch = 3
    ports_pool_update_frequency = 30

    [health_server]
    port = 8091

    [cni_health_server]
    port = 8090

    [prometheus_exporter]
    controller_exporter_port = 9654
    cni_exporter_port = 9655
kind: ConfigMap
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: ovn
    networkoperator.openshift.io/kuryr-octavia-version: v2.13
  creationTimestamp: "2021-06-29T09:45:33Z"
  name: kuryr-config
  namespace: openshift-kuryr
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: b690818e-af49-4de5-9911-35c4a9a7d331
  resourceVersion: "3145"
  uid: 5a8489c9-9c58-4eb6-a510-189768774597

The info is now present in the Secret kuryr-config-credentials:

$ oc get secret -n openshift-kuryr kuryr-config-credentials -o yaml
apiVersion: v1
data:
  kuryr-credentials.conf: W25ldXRyb25dCmF1dGhfdHlwZSA9IHBhc3N3b3JkCmF1dGhfdXJsID0gaHR0cHM6Ly8xMC4wLjAuMTAxOjEzMDAwCmluc2VjdXJlID0gZmFsc2UKdG9rZW4gPSAiIgpwYXNzd29yZCA9IHJlZGhhdAp1c2VybmFtZSA9IHNoaWZ0c3RhY2tfdXNlcgpwcm9qZWN0X2RvbWFpbl9uYW1lID0gRGVmYXVsdApwcm9qZWN0X2RvbWFpbl9pZCA9ICIiCnByb2plY3RfaWQgPSAzMjEwZGFkYzRjMGU0MWYxYmY4ZGFjZDY0NzUzZWUzMwpwcm9qZWN0X25hbWUgPSBzaGlmdHN0YWNrCnVzZXJfZG9tYWluX25hbWUgPSBEZWZhdWx0CnVzZXJfZG9tYWluX2lkID0gIiIKcmVnaW9uX25hbWUgPSByZWdpb25PbmUKIyBUaGVyZSdzIG5vIGdvb2Qgd2F5IHRvIGp1c3QgImFwcGVuZCIgdXNlci1wcm92aWRlZCBjZXJ0cyB0byBzeXN0ZW0gb25lcywKIyBzbyBqdXN0IGNvbmZpZ3VyZSBvcGVuc3RhY2tzZGsgdG8gdXNlIGl0LgpjYWZpbGUgPSAvZXRjL3NzbC9jZXJ0cy91c2VyLWNhLWJ1bmRsZS5jcnQK
kind: Secret
metadata:
  creationTimestamp: "2021-06-29T09:45:33Z"
  name: kuryr-config-credentials
  namespace: openshift-kuryr
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: b690818e-af49-4de5-9911-35c4a9a7d331
  resourceVersion: "3147"
  uid: 0e340e9f-0412-4a22-9c78-1d057fe1ee4e
type: Opaque

and mounted on the kuryr-controller pod:

$ oc get -n openshift-kuryr $(oc get pod -n openshift-kuryr -l app=kuryr-controller -o NAME) -o json | jq '.spec.volumes[] | select(.name=="credentials-volume")'
{
  "name": "credentials-volume",
  "secret": {
    "defaultMode": 420,
    "secretName": "kuryr-config-credentials"
  }
}
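The Secret's data is base64-encoded, so confirming that the credentials moved there amounts to decoding the kuryr-credentials.conf key. The sketch below illustrates the decode step offline; the [neutron] payload and its values are hypothetical stand-ins for what a live `oc get secret ... -o jsonpath='{.data.kuryr-credentials\.conf}'` would return.

```shell
# Hypothetical base64 payload standing in for the jsonpath output of:
#   oc get secret -n openshift-kuryr kuryr-config-credentials \
#     -o jsonpath='{.data.kuryr-credentials\.conf}'
encoded=$(printf '[neutron]\nauth_type = password\nusername = kuryr-user\n' | base64)

# Decoding recovers the oslo.config-style [neutron] section that used to
# live in plain text in the kuryr-config ConfigMap.
printf '%s\n' "$encoded" | base64 -d
```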
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759