Team, There has been no activity on this BZ for some time now, though I assume work is being done in the background. An ACE has been created for the associated case, and I am involved in seeing how we can take this forward. Can I please get a status update on the BZ and, if possible, have this prioritized? Thanks, Praveen Escalation Manager
This is dependent on 1545939. The fix for that other bug was delivered as a hotfix to the customer impacted by this bug.
Nir, what are the exact steps to reproduce/verify this in my environment?
(In reply to Bruna Bonguardo from comment #50) > Nir, what are the exact steps to reproduce/verify this in my environment? Hi Bruna, We spoke f2f about this, but it makes sense to keep the information here up to date: I advised repeating the steps Slawek took in comment 39, which should give quality engineering enough of an indication to verify this. If you need any help with this, please do let me know. Nir
As you can see, I created 10 load balancers with 10 listeners each. The memory usage is as stated above and does not look unreasonably high. Nir - can you connect to the server with me and check?
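For reference, a minimal sketch of the kind of loop used for this reproduction, assuming the neutron LBaaS v2 CLI (lbaas-loadbalancer-create / lbaas-listener-create), sourced admin credentials, and a VIP subnet named "private-subnet"; the exact flags and the wait needed between operations may vary by release:

# Hypothetical reproduction helper: create 10 load balancers with 10 listeners
# each via the neutron LBaaS v2 CLI. Assumes credentials are already sourced
# and a subnet called "private-subnet" exists in this environment.
import subprocess
import time

SUBNET = "private-subnet"  # assumption: adjust to the VIP subnet in your env

for i in range(10):
    lb_name = "lb-%d" % i
    subprocess.check_call(
        ["neutron", "lbaas-loadbalancer-create", "--name", lb_name, SUBNET])
    # Crude wait for the load balancer to settle; in practice poll its
    # provisioning_status until it is ACTIVE before adding listeners.
    time.sleep(10)
    for j in range(10):
        subprocess.check_call(
            ["neutron", "lbaas-listener-create",
             "--name", "%s-listener-%d" % (lb_name, j),
             "--loadbalancer", lb_name,
             "--protocol", "HTTP",
             "--protocol-port", str(8000 + j)])
        # Each listener operation also needs the LB back in ACTIVE state.
        time.sleep(2)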
Yes, this looks fine. Memory is allocated by Python in the first iteration and is not given back to the OS after the resources are removed, but in the next iterations the same memory is reused by the new resources, so consumption is not rising anymore. I don't see any leak in https://docs.google.com/spreadsheets/d/147vG-PO8SAijjK4sR87K-7v3YGfR0fae5AlOdiXZwrA/edit#gid=0 IMO it works fine.
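To illustrate the pattern being described (this is not the neutron-lbaas code, just a standalone Python sketch, Linux-only because it reads /proc/self/status): RSS jumps during the first allocation cycle, may not be handed back to the OS when the objects are freed, and then stays roughly flat in later cycles because the interpreter reuses the memory it already holds.

# Standalone illustration of the behaviour above: RSS grows in the first
# allocation cycle, may not be returned to the OS when the objects are freed,
# but later cycles reuse that memory, so RSS does not keep growing.
# Linux-only: reads VmRSS from /proc/self/status.

def rss_kb():
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in kB

def one_cycle():
    # Stand-in for "create 10 load balancers with 10 listeners each, then
    # delete them": allocate a pile of objects and drop all references.
    data = [{"listener": i, "payload": "x" * 1024} for i in range(100000)]
    del data

if __name__ == "__main__":
    print("start: %s kB" % rss_kb())
    for cycle in range(5):
        one_cycle()
        # Expect a jump after the first cycle, then roughly stable numbers.
        print("after cycle %d: %s kB" % (cycle, rss_kb()))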
According to our records, this should be resolved by openstack-neutron-lbaas-9.2.2-8.el7ost. This build is available now.