Red Hat Bugzilla – Bug 1302413
optimize metadata service caching logic to avoid unnecessary data retrieval from conductor
Last modified: 2017-06-30 09:31:03 EDT
Currently the metadata service retrieves data unnecessarily from the conductor in at least two separate cases:
1. where the same data was already retrieved and cached by another metadata service worker running on the same node
2. when a retrieval for that same data is already in flight
This unnecessary load on the conductor could be reduced by using a shared cache across all workers running on the same node, and by recording when a retrieval for some URL is already in progress. If that same data is requested again before the initial fetch has completed, the subsequent request can simply await its arrival in the shared cache instead of independently re-fetching it in parallel.
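The deduplication described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Nova code: the class name, the fetch callback, and the use of threading primitives are all assumptions, and it only deduplicates within one process (a real fix would also need a cross-process shared cache, e.g. memcached-backed).

```python
import threading

class DedupCache:
    """Sketch: cache metadata responses and track in-flight fetches so
    that concurrent requests for the same URL wait for the first fetch
    instead of hitting the conductor again in parallel."""

    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn   # the expensive call to the conductor
        self._cache = {}            # url -> data, shared by all handlers
        self._in_flight = {}        # url -> Event set when fetch completes
        self._lock = threading.Lock()

    def get(self, url):
        while True:
            with self._lock:
                if url in self._cache:        # case 1: already cached
                    return self._cache[url]
                event = self._in_flight.get(url)
                if event is None:             # nobody fetching: we will
                    event = threading.Event()
                    self._in_flight[url] = event
                    break
            event.wait()    # case 2: fetch in flight, await its arrival
        try:
            data = self._fetch_fn(url)        # single conductor round-trip
            with self._lock:
                self._cache[url] = data
            return data
        finally:
            with self._lock:
                del self._in_flight[url]
            event.set()                       # wake any waiting requests
```

With this structure, five concurrent requests for the same URL result in exactly one call to the fetch function; the other four block on the event and then read the cached value.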
The metadata caching is flawed in general, but there is a symptomatic fix for this related performance issue upstream, which pre-fetches some of the data before caching, here:
Unfortunately the second change introduced the pre-fetch as a side effect, so it cannot be backported as-is.
I have submitted two upstream changes that are related to this.
Disabling memcached for metadata caching:
No parallel queries of the same data (this addresses point 2 in the description):
Clarification: the upstream changes are stop-gap fixes (and might not get accepted). The main problem, caching the db queries rather than a whole Python object so that the cache can be shared between different processes, is still unaddressed.
Sven, can you help me understand what, if anything, remains backportable here versus needing to be addressed in a later release (Newton/Ocata)?
Stephen, Diana was working on that after me. I have no idea about the current status.
Red Hat OpenStack Platform version 5 is now End-of-Life, and as such will not have further updates. See https://access.redhat.com/support/policy/updates/openstack/platform/ for full support lifecycle details.