Red Hat Bugzilla – Bug 1287569
pods: race condition on reload of pods where list of pods appear along with "no longer exist" message
Last modified: 2017-07-25 01:37:04 EDT
Created attachment 1101417 [details]
Description of problem:
I deleted pods, and because of the replication controllers the pods were respawned.
After killing the pods I logged into the UI and saw my 5 pods listed along with a "no longer exist" message (see screenshot).
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. create replication controllers
2. kill all pods
3. log in to the UI while the pods are being killed
Actual results:
We see the list of pods along with the "no longer exist" message.
Expected results:
We should try to filter out the message before the new pods are listed.
Additional info: screenshot + logs
Created attachment 1101418 [details]
(In reply to Dafna Ron from comment #0)
> Expected results:
> We should try to filter out the message before the new pods are listed.
> Additional info: screenshot + logs
It is not clear what was expected here. The pod you tried to browse no longer exists in the database, as it was removed.
An error and the most up-to-date list of pods are displayed.
I did not want to view a specific pod, but all pods.
If I had selected a specific pod, then I would agree with you; however, in this case I was on the main pods page, which means I was not querying a specific pod but all pods.
If you look at the screenshot I attached, you can clearly see that I am on the main pods page with "ALL" selected in the filter. The message suggests that there are no pods, yet a list of the new pods is shown right below it.
Fede, isn't this a duplicate of BZ#1300767?
(In reply to Avi Tal from comment #4)
> Fede, isn't this a duplicate of BZ#1300767?
The difference from bug 1300767 is that in that case the provider was removed.
In this case, as far as I understood from the description, it happened when Dafna deleted the pods on the OpenShift side.
Dafna, I've never seen this happen. Has it happened to you again?
Should we close this for the time being?
It may happen when CFME and the provider are in different locations, which causes a delay of a few milliseconds in the DB sync between OpenShift and CFME.
I think this bug would probably be more apparent in scale testing.
We can try to reproduce it on the latest build if you like.
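The suspected race can be sketched in a few lines (a hypothetical model, not actual CFME code; the class and function names are invented for illustration): the page request captures the pod names before the pods are killed, but by the time the page renders, the replication controller has already respawned new pods, so the stale "no longer exist" error and the fresh list appear together.

```python
# Hypothetical sketch of the suspected race between a stale page request
# and the provider inventory refresh (names are illustrative only).

class Inventory:
    """Stands in for the synced pod inventory in the management DB."""

    def __init__(self, pods):
        self.pods = set(pods)

    def delete_all(self):
        # Pods are killed on the OpenShift side.
        self.pods.clear()

    def respawn(self, names):
        # The replication controller brings up replacement pods.
        self.pods.update(names)


def render_pods_page(inventory, requested):
    """Render the pods page for a request that started before the refresh."""
    messages = []
    missing = requested - inventory.pods
    if missing:
        # The pods the request referred to are gone ...
        messages.append("requested pods no longer exist")
    # ... yet the freshly synced list is shown right below the error.
    return messages, sorted(inventory.pods)


inv = Inventory({"pod-1", "pod-2"})
requested = set(inv.pods)           # user opens the pods page
inv.delete_all()                    # pods killed on the provider side
inv.respawn({"pod-3", "pod-4"})     # replication controller respawns them
msgs, listing = render_pods_page(inv, requested)
print(msgs, listing)  # ['requested pods no longer exist'] ['pod-3', 'pod-4']
```

The wider the sync delay (e.g. CFME and the provider in different locations, or a scale environment), the larger the window in which a page load can straddle both states.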
(In reply to Dafna Ron from comment #8)
> It may happen when CFME and the provider are in different locations,
> which causes a delay of a few milliseconds in the DB sync between
> OpenShift and CFME.
> I think this bug would probably be more apparent in scale testing.
> We can try to reproduce it on the latest build if you like.
If you're going to reproduce this, please have Beni assist you during the procedure so he understands exactly what you do and what may be happening.