From PR https://github.com/ManageIQ/manageiq/pull/10929 we are using custom images from the DB when rendering trees like the /service/explorer. The cached hash ends up looking something like this:

    "../../../pictures/XXXr1.jpg" => "/images/100/../../../pictures/XXXr1.jpg",
    "../../../pictures/XXXr2.jpg" => "/images/100/../../../pictures/XXXr2.jpg",
    "../../../pictures/XXXr3.jpg" => "/images/100/../../../pictures/XXXr3.jpg"

These paths will never end up in the Sprockets file digest, and thus cause some slowdown.
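A minimal sketch of the kind of per-path memoization described above (the method and cache names here are illustrative, not the actual ManageIQ code):

```ruby
# Hypothetical sketch only; cached_node_image / @node_image_cache are illustrative
# names, not the real ManageIQ implementation. Run in a Rails console context.
def cached_node_image(relative_path)
  @node_image_cache ||= {}
  @node_image_cache[relative_path] ||= resolve_node_image(relative_path)
end

def resolve_node_image(relative_path)
  # Paths like "../../../pictures/XXXr1.jpg" point outside the asset load path,
  # so Sprockets has no precomputed digest for them; the asset helper falls back
  # to a plain public path such as "/images/100/../../../pictures/XXXr1.jpg".
  ActionController::Base.helpers.image_path("100/#{relative_path}")
end
```

Even with the memoization, every distinct custom-picture path still misses the Sprockets digest, which is where the cost shows up.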
Hi Nick, can you please tell me how to verify this bug? Thanks, Shveta
Hi Shveta, this "bug" (I would call it more of a performance fix) will present itself if you go to, say, the `/service/explorer` with a lot of services that all (or at least most) have custom images attached to them. Previously, determining the URL for each service's custom image required an unnecessary directory scan. I was testing with a database that had about 9.5k services in it, all of them with a custom image attached. When testing against that database locally, I was seeing about 7 seconds to render the page on the backend, and with this patch in place the time dropped to about 2.5 seconds. Keep in mind that this is more noticeable when there is a large number of services to render in a tree. With fewer services, this calculation has to be done fewer times, so the improvement is less impactful, but the result should be the same regardless. This is a fix specifically for users with a large DB. Thanks, -Nick
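For anyone who does want to seed a comparable dataset locally, a rough Rails console sketch along these lines could work; the `picture` association on `Service` and the `Picture` attributes used here are assumptions to check against the actual schema:

```ruby
# Rough seed sketch, to be run in the Rails console of a dev setup or test appliance.
# Assumption: Service exposes a picture association and Picture accepts
# content/extension; adjust to the actual ManageIQ schema if it differs.
fake_png = ("\x89PNG\r\n\x1a\n" + "\x00" * 64).b # placeholder image bytes

9_500.times do |i|
  Service.create!(
    :name    => "perf-test-service-#{i}",
    :picture => Picture.new(:content => fake_png, :extension => "png")
  )
end
```

With the data in place, load `/service/explorer` and compare the backend render time with and without the patch.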
Verifying based on the above comment, as a DB that big is difficult to recreate. 5.7.0.7-beta1.20161025153249_9376fbd