Bug 1435235 - containers: identical volume name for different volumes in different pods is not useful for users (at least not admin)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: UI - OPS
Version: 5.7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: GA
Target Release: 5.10.0
Assignee: Nimrod Shneor
QA Contact: juwatts
URL:
Whiteboard: container
Depends On: 1594567
Blocks: 1552889
 
Reported: 2017-03-23 12:37 UTC by Dafna Ron
Modified: 2023-09-14 03:55 UTC
CC: 9 users

Fixed In Version: 5.10.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1552889 (view as bug list)
Environment:
Last Closed: 2019-02-11 13:55:24 UTC
Category: ---
Cloudforms Team: Container Management
Target Upstream Version:
Embargoed:
nshneor: needinfo+
nshneor: needinfo-


Attachments
more info on volumes and screenshots (408.43 KB, application/x-gzip)
2017-03-23 12:37 UTC, Dafna Ron
Pod Summary PV (137.83 KB, image/png)
2018-01-18 10:11 UTC, brahmani

Description Dafna Ron 2017-03-23 12:37:33 UTC
Created attachment 1265733 [details]
more info on volumes and screenshots

Description of problem:

When you create the same pod in different projects, the volume name presented in the pod summary is identical for different volumes, which makes any debugging from the UI impossible.

For example:

I have several different projects with different users and different volumes, but they are all running podified CFME.

I can see that the volume name the UI presents for two CFME pods from two different projects is the same.

For pod cloudforms-1-y1f4t, the volumes table reports the following:

Volumes
Name 	                 Property 	                    Value
cfme-app-volume 	Persistent Volume Claim Name 	cloudforms
default-token-ptbko 	Secret Name 	               default-token-ptbko 


And for pod cloudforms-3-v8mtq (a different project):

Volumes
Name 	                 Property 	                 Value
cfme-app-volume 	Persistent Volume Claim Name 	cloudforms
default-token-xb81y 	Secret Name 	            default-token-xb81y 


although in the pod description we see the following: 

Volume Mounts:
      /persistent from cfme-app-volume (rw)


but the claims are actually bound to different persistent volumes, since claim names are only unique within a project.

I think that if we want to describe a volume in a way that makes it easy to differentiate and locate, it would be best to show the actual persistent volume names and the claims scoped by project, as shown here:

[root@dafna-pods-master manageiq-pods]# oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                    REASON    AGE
cloudforms        2Gi        RWO           Recycle         Bound      pods/postgresql                    1d
cloudforms57      2Gi        RWO           Recycle         Bound      pods57/postgresql                  2h
cloudformsnew     2Gi        RWO           Recycle         Bound      57pods/cloudforms                  1h
cloudformsnew1    2Gi        RWO           Recycle         Bound      cloudfoms/postgresql               1h
metrics-volume    10Gi       RWO           Retain          Released   57pods/postgresql                  1d
nfs-pv01          2Gi        RWO           Recycle         Bound      pods/cloudforms                    1d
nfs-pv02          2Gi        RWO           Recycle         Bound      pods57/cloudforms                  2h
nfs-pv03          2Gi        RWO           Recycle         Bound      57pods/postgresql                  1h
nfs-pv04          2Gi        RWO           Recycle         Bound      cloudfoms/cloudforms               1h
registry-volume   5Gi        RWX           Retain          Bound      default/registry-claim             1d
 


Version-Release number of selected component (if applicable):

cfme-5.7.2.0-1.el7cf.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create several CFME pods in different projects.
2. Compare the Volumes tables on each pod's summary page in the UI.

Actual results:

The same name is presented for all volumes in the UI; only by looking at the PVs in OpenShift can we see the actual names.

Expected results:

The information we present may be correct (I am not yet sure where it is taken from), but it is not useful: what we want is the actual persistent volume name.

Additional info:



[root@dafna-pods-master manageiq-pods]# oc describe pod cloudforms-1-y1f4t
Name:			cloudforms-1-y1f4t
Namespace:		cloudfoms
Security Policy:	privileged
Node:			dafna-pods-node01.qa.lab.tlv.redhat.com/10.35.69.179
Start Time:		Thu, 23 Mar 2017 13:07:20 +0200
Labels:			app=cloudforms57
			deployment=cloudforms-1
			deploymentconfig=cloudforms
			name=cloudforms
Status:			Running
IP:			10.129.0.49
Controllers:		ReplicationController/cloudforms-1
Containers:
  cloudforms:
    Container ID:	docker://6225d74d2001296164f1824491de7e45e337277683e0092e8139b09f64ebf941
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme57-openshift-app@sha256:ecdfab90503691cdd900c2a64c470ec46d9a6b3f12f625785d85919503341d7f
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme57-openshift-app@sha256:ecdfab90503691cdd900c2a64c470ec46d9a6b3f12f625785d85919503341d7f
    Ports:		80/TCP, 443/TCP
    Requests:
      memory:		4Gi
    State:		Running
      Started:		Thu, 23 Mar 2017 13:10:22 +0200
    Ready:		True
    Restart Count:	0
    Liveness:		tcp-socket :443 delay=480s timeout=3s period=10s #success=1 #failure=3
    Readiness:		http-get https://:443/ delay=200s timeout=3s period=10s #success=1 #failure=3
    Volume Mounts:
      /persistent from cfme-app-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ptbko (ro)
    Environment Variables:
      APPLICATION_INIT_DELAY:		30
      DATABASE_SERVICE_NAME:		postgresql
      DATABASE_REGION:			0
      MEMCACHED_SERVICE_NAME:		memcached
      POSTGRESQL_USER:			root
      POSTGRESQL_PASSWORD:		smartvm
      POSTGRESQL_DATABASE:		vmdb_production
      POSTGRESQL_MAX_CONNECTIONS:	100
      POSTGRESQL_SHARED_BUFFERS:	64MB
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  cfme-app-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	cloudforms
    ReadOnly:	false
  default-token-ptbko:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-ptbko
QoS Class:	Burstable
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From							SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----							-------------			--------	------		-------
  1h		1h		1	{default-scheduler }									Normal		Scheduled	Successfully assigned cloudforms-1-y1f4t to dafna-pods-node01.qa.lab.tlv.redhat.com
  1h		1h		1	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Normal		Pulling		pulling image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme57-openshift-app@sha256:ecdfab90503691cdd900c2a64c470ec46d9a6b3f12f625785d85919503341d7f"
  58m		58m		1	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Normal		Pulled		Successfully pulled image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme57-openshift-app@sha256:ecdfab90503691cdd900c2a64c470ec46d9a6b3f12f625785d85919503341d7f"
  58m		58m		1	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Normal		Created		Created container with docker id 6225d74d2001; Security:[seccomp=unconfined]
  58m		58m		1	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Normal		Started		Started container with docker id 6225d74d2001
  54m		53m		7	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Warning		Unhealthy	Readiness probe failed: Get https://10.129.0.49:443/: dial tcp 10.129.0.49:443: getsockopt: connection refused
  53m		52m		4	{kubelet dafna-pods-node01.qa.lab.tlv.redhat.com}	spec.containers{cloudforms}	Warning		Unhealthy	Readiness probe failed: HTTP probe failed with statuscode: 503
[root@dafna-pods-master manageiq-pods]# 
[root@dafna-pods-master manageiq-pods]# 
[root@dafna-pods-master manageiq-pods]# oc describe pod cloudforms-3-v8mtq
Error from server: pods "cloudforms-3-v8mtq" not found
(reverse-i-search)`admin ': oadm policy add-cluster-role-to-user cluster-^Cmin logs
[root@dafna-pods-master manageiq-pods]# oc login -u system:admin
Logged into "https://dafna-pods-master.qa.lab.tlv.redhat.com:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    57pods
  * cloudfoms
    default
    kube-system
    logging
    management-infra
    openshift
    openshift-infra
    pods
    pods57

Using project "cloudfoms".
[root@dafna-pods-master manageiq-pods]# oc project pods
Now using project "pods" on server "https://dafna-pods-master.qa.lab.tlv.redhat.com:8443".
[root@dafna-pods-master manageiq-pods]# oc describe pod cloudforms-3-v8mtq
Name:			cloudforms-3-v8mtq
Namespace:		pods
Security Policy:	privileged
Node:			dafna-pods-master.qa.lab.tlv.redhat.com/10.35.69.170
Start Time:		Wed, 22 Mar 2017 12:07:12 +0200
Labels:			app=cloudforms
			deployment=cloudforms-3
			deploymentconfig=cloudforms
			name=cloudforms
Status:			Running
IP:			10.130.0.56
Controllers:		ReplicationController/cloudforms-3
Containers:
  cloudforms:
    Container ID:	docker://ff682a003a2ccf0196c8f711098cfe1aafc63442c7ca4d3598a9c3b76bb86267
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme58-openshift-app@sha256:3efd998b7ae777ffd4816eb0280fd0cc9dd67aa66fd30bc845b7116c86fe5b34
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cloudforms/cfme58-openshift-app@sha256:3efd998b7ae777ffd4816eb0280fd0cc9dd67aa66fd30bc845b7116c86fe5b34
    Ports:		80/TCP, 443/TCP
    Requests:
      memory:		4Gi
    State:		Running
      Started:		Thu, 23 Mar 2017 12:09:19 +0200
    Last State:		Terminated
      Reason:		Error
      Exit Code:	255
      Started:		Wed, 22 Mar 2017 12:07:14 +0200
      Finished:		Thu, 23 Mar 2017 12:09:12 +0200
    Ready:		True
    Restart Count:	1
    Liveness:		tcp-socket :443 delay=480s timeout=3s period=10s #success=1 #failure=3
    Readiness:		http-get https://:443/ delay=200s timeout=3s period=10s #success=1 #failure=3
    Volume Mounts:
      /persistent from cfme-app-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xb81y (ro)
    Environment Variables:
      APPLICATION_INIT_DELAY:		30
      DATABASE_SERVICE_NAME:		postgresql
      DATABASE_REGION:			0
      MEMCACHED_SERVICE_NAME:		memcached
      POSTGRESQL_USER:			root
      POSTGRESQL_PASSWORD:		smartvm
      POSTGRESQL_DATABASE:		vmdb_production
      POSTGRESQL_MAX_CONNECTIONS:	100
      POSTGRESQL_SHARED_BUFFERS:	64MB
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  cfme-app-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	cloudforms
    ReadOnly:	false
  default-token-xb81y:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-xb81y
QoS Class:	Burstable
Tolerations:	<none>
No events.
[root@dafna-pods-master manageiq-pods]# oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                    REASON    AGE
cloudforms        2Gi        RWO           Recycle         Bound      pods/postgresql                    1d
cloudforms57      2Gi        RWO           Recycle         Bound      pods57/postgresql                  2h
cloudformsnew     2Gi        RWO           Recycle         Bound      57pods/cloudforms                  1h
cloudformsnew1    2Gi        RWO           Recycle         Bound      cloudfoms/postgresql               1h
metrics-volume    10Gi       RWO           Retain          Released   57pods/postgresql                  1d
nfs-pv01          2Gi        RWO           Recycle         Bound      pods/cloudforms                    1d
nfs-pv02          2Gi        RWO           Recycle         Bound      pods57/cloudforms                  2h
nfs-pv03          2Gi        RWO           Recycle         Bound      57pods/postgresql                  1h
nfs-pv04          2Gi        RWO           Recycle         Bound      cloudfoms/cloudforms               1h
registry-volume   5Gi        RWX           Retain          Bound      default/registry-claim             1d
[root@dafna-pods-master manageiq-pods]# 
[root@dafna-pods-master manageiq-pods]#

Comment 1 Barak 2017-03-26 13:29:35 UTC
Franco, keep me honest here.

The reason the above volumes are shown this way in the UI is that in the released 5.7 template the volume name is hardcoded to "cfme-app-volume". That name only applies within the project itself, but I see the point that it is confusing.

But this is actually not a bug.

However, for 5.8 we have changed the behavior a bit: the template's $NAME parameter is now used as part of the volume (and PVC) names. This means that if you want to install podified CFME multiple times, you will probably change the template NAME to differentiate the instances, and the volume names will change accordingly.

I tend to CLOSE NOTABUG this bug.

Franco ?

Comment 2 Federico Simoncelli 2017-03-26 20:48:46 UTC
(In reply to Barak from comment #1) 
> I tend to CLOSE NOTABUG this bug.

In addition: all volume names are scoped by Pod, and I am sure there are others repeating over and over again in multiple Pods (e.g. volumes for secrets, etc.).

I assigned this to Zahi to check whether there is anything we could do to improve the situation (I can't think of anything, but I haven't dedicated any time to it).
Maybe if it's a persistent volume we should also mention the name of the persistent volume, which is unique?
Anyway, we should evaluate whether it's worth the trouble.
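The unique persistent volume name suggested above can already be resolved from the CLI by following the pod's claim to its bound PV. A minimal sketch; the `oc` function below is a mock returning canned values taken from this report so the snippet is self-contained, and on a real cluster you would use the actual client (the jsonpath fields are standard Kubernetes spec fields):

```shell
# Mock of `oc` with canned output from this bug report; on a real cluster,
# delete this function and use the genuine client.
oc() {
  case "$*" in
    *"pod cloudforms-1-y1f4t"*) echo "cloudforms" ;;  # PVC referenced by the pod spec
    *"pvc cloudforms"*)         echo "nfs-pv04" ;;    # PV bound to that claim
  esac
}

# Pod volume -> claim name (unique only within the project)...
claim=$(oc get pod cloudforms-1-y1f4t -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}')
# ...claim -> persistent volume name (unique cluster-wide).
pv=$(oc get pvc "$claim" -o jsonpath='{.spec.volumeName}')
echo "claim=$claim pv=$pv"
```

This is essentially the lookup chain the UI would need to display the unique PV name next to the pod-scoped volume name.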

Comment 3 Franco Bladilo 2017-03-27 20:50:31 UTC
Barak,

Yes, in 5.8 we use ${NAME} as part of the PVC and volume names. It is parametrized, holds the naming for all frontend objects, and can be used to obtain the effect that I believe is being requested.

oc new-app .... -p NAME=miq_xxx

Relevant template declaration :

volumes:
        - name: ${NAME}-region
          persistentVolumeClaim:
            claimName: ${NAME}-region
...

volumeClaimTemplates:
    - metadata:
        annotations: null
        name: ${NAME}-server
...

The naming of the backend PVs is entirely up to the admin.
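To make the ${NAME} substitution concrete, here is a sketch of what the template fragment above renders to for one instance; `miq1` is a hypothetical instance name chosen for illustration, not a value from this report:

```shell
# Simulate the template parameter substitution for NAME=miq1 (hypothetical).
NAME=miq1
rendered=$(cat <<EOF
volumes:
  - name: ${NAME}-region
    persistentVolumeClaim:
      claimName: ${NAME}-region
EOF
)
echo "$rendered"
```

A second deployment with a different NAME (e.g. `miq2`) would therefore get distinct volume and claim names, which is what avoids the identical-name confusion within a project.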

Comment 4 zakiva 2017-03-28 14:54:43 UTC
One thing we can do to improve the Pod page is to add a link in the relationships table to the Persistent Volumes the pod uses. There is currently ongoing work on adding the Persistent Volume => Pods direction in https://github.com/ManageIQ/manageiq/pull/14231, so I suggest we add the other direction too.

Comment 6 Dave Johnson 2017-07-14 03:48:42 UTC
Please assess the importance of this issue and update the priority accordingly. It was missed somewhere in the bug triage process. Please refer to https://bugzilla.redhat.com/page.cgi?id=fields.html#priority for a reminder on each priority's definition.

If it's something like a tracker bug where it doesn't matter, please set it to Low/Low.

Comment 11 brahmani 2018-01-18 10:11:06 UTC
Created attachment 1382844 [details]
Pod Summary PV

Comment 12 brahmani 2018-01-18 10:15:46 UTC
Verified on 5.9.0.16.20180109204148_7ac9852
From my point of view, I didn't see an improvement in the UI that solves this issue.

1. On the Pod summary view, in the Volumes table, the volume name is not identical to what I get on OCP. For example:

oc get pv -n openshift-metrics
NAME                             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
metrics-volume                   10Gi       RWO           Retain          Bound     openshift-infra/metrics-cassandra-1                                  20d
prometheus-alertbuffer-volume    10Gi       RWO           Retain          Bound     openshift-metrics/prometheus-alertbuffer                             20d
prometheus-alertmanager-volume   10Gi       RWO           Retain          Bound     openshift-metrics/prometheus-alertmanager                            20d
prometheus-volume                10Gi       RWO           Retain          Bound     openshift-metrics/prometheus                                         20d
registry-volume                  5Gi        RWX           Retain          Bound     default/registry-claim                                               20d

In the UI, see the attached Pod Summary image.

2. In the Pod Summary, in the Relationships table, there is no link to the PV. See the attached Pod Summary image.

Comment 13 Nimrod Shneor 2018-01-18 12:40:54 UTC
There is an open issue on manageiq-ui-classic (specifically on GTL) which prevents the UI side of this bug from being solved: https://github.com/ManageIQ/manageiq-ui-classic/issues/18

Comment 14 Nimrod Shneor 2018-01-18 12:42:10 UTC
Barak, what do you think would be a good way to continue with this? Currently this is stuck on the UI.

Comment 15 Nimrod Shneor 2018-01-23 08:41:43 UTC
I was able to solve this; there is a new PR for this bug:
https://github.com/ManageIQ/manageiq-ui-classic/pull/3299

Comment 16 Nimrod Shneor 2018-02-06 09:49:20 UTC
PR: https://github.com/ManageIQ/manageiq/pull/16956

Comment 20 Red Hat Bugzilla 2023-09-14 03:55:29 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

