Bug 976848 - subman blocks while slowly writing cache of status requests
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: subscription-manager
Version: 5.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Carter Kozak
QA Contact: IDM QE LIST
Depends On:
Blocks: rhsm-rhel510
Reported: 2013-06-21 12:04 EDT by Adrian Likins
Modified: 2013-10-01 09:49 EDT
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2013-10-01 09:49:27 EDT
Type: Bug


Attachments
pycallgraph of a 'subscription-manager list --consumed' run showing time spent writing caches (122.65 KB, application/pdf)
2013-06-21 12:04 EDT, Adrian Likins

Description Adrian Likins 2013-06-21 12:04:41 EDT
Created attachment 763897 [details]
pycallgraph of a 'subscription-manager list --consumed' run showing time spent writing caches

Description of problem:

To support offline use, subscription-manager writes a cache of the
results from server-side status checks every time it gets them.

For systems with a lot of entitlement info, writing out this cache can be noticeably slow. This seems to be caused mostly by JSON serialization overhead.

See the attached pycallgraph-list-consumed.pdf for an example from a system with 16 entitlements. That example is for a CLI 'list --consumed' invocation.

In the GUI, subman can end up writing out the status cache multiple times. I'll file that as a separate bug.

1. JSON serialization is slow

2. the (re-)serialization may be unnecessary, since we get the serialized version from the REST API call, deserialize it, then serialize it again (sketched below)

3. the client (the GUI in particular) blocks while doing this. The cache isn't strictly necessary for operation to continue, so there is no need to wait/block until it is written. It should be async.
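
Rough sketch of the round trip in point 2 (illustrative only; the function names and cache path here are made up, not the actual subscription-manager code):

    import json

    def fetch_status(connection):
        # hypothetical call; the server already hands back the status as JSON text
        raw_json = connection.get_status_json()
        return json.loads(raw_json)        # deserialized for in-memory use

    def write_status_cache(status, path="/var/lib/rhsm/cache/status.json"):
        # ...and then the same data is re-serialized straight back to JSON
        # for the on-disk cache, blocking the caller while it happens
        f = open(path, "w")
        try:
            json.dump(status, f)
        finally:
            f.close()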


Possible fixes:

1. faster JSON serialization (unlikely on RHEL 5)
2. make cache writing async so the GUI doesn't block (see the sketch below)
3. don't write out the cache at all
4. figure out how to use the raw serialized result from the connection object, short-circuiting the request -> deserialize-to-memory -> re-serialize-to-disk-cache loop
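
A minimal sketch of what option 2 could look like (just an illustration assuming a plain threading.Thread; the function name and path are hypothetical, not the actual patch):

    import json
    import threading

    def write_status_cache_async(status, path="/var/lib/rhsm/cache/status.json"):
        # serialize and write on a background thread so the CLI/GUI can keep
        # going; the cache is only an optimization, so a slow or failed write
        # should never block the operation itself
        def _write():
            try:
                f = open(path, "w")
                try:
                    json.dump(status, f)
                finally:
                    f.close()
            except IOError:
                pass
        t = threading.Thread(target=_write)
        t.setDaemon(True)
        t.start()
        return t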

Version-Release number of selected component (if applicable):
Anything that supports server-side status caching (1.8+, more or less)


How reproducible:

Always, but it is easier to see with a lot of entitlements.
Comment 1 Carter Kozak 2013-07-10 14:37:51 EDT
commit a6f551cebac16702a26950620995c718713cb532
Author: ckozak <ckozak@redhat.com>
Date:   Mon Jul 1 11:29:44 2013 -0400

    976848: 976851: thread cache write, limit disk reads, singleton
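
For context, the general shape that commit message describes (threaded cache write, in-memory singleton to limit disk reads) might look roughly like the following; this is a sketch with illustrative names only, not the actual code:

    import json
    import threading

    class StatusCache(object):
        """One shared instance keeps the last server status in memory
        (so the cache file is read at most once) and writes updates to
        disk from a background thread."""

        _instance = None
        CACHE_FILE = "/var/lib/rhsm/cache/status.json"   # illustrative path

        @classmethod
        def get_instance(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

        def __init__(self):
            self._status = None

        def read(self):
            # hit the disk only if nothing is in memory yet
            if self._status is None:
                try:
                    f = open(self.CACHE_FILE, "r")
                    try:
                        self._status = json.load(f)
                    finally:
                        f.close()
                except (IOError, ValueError):
                    self._status = None
            return self._status

        def update(self, status):
            # keep the in-memory copy current and push the disk write
            # onto a daemon thread so callers do not block on it
            self._status = status
            t = threading.Thread(target=self._write, args=(status,))
            t.setDaemon(True)
            t.start()

        def _write(self, status):
            try:
                f = open(self.CACHE_FILE, "w")
                try:
                    json.dump(status, f)
                finally:
                    f.close()
            except IOError:
                pass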
Comment 2 Carter Kozak 2013-07-15 12:10:54 EDT
This is an optimization bug/fix; if it is implemented properly, QA tests should not detect any difference.
Comment 3 John Sefler 2013-07-17 14:36:34 EDT
I do not have any explicit test coverage to confirm that the problem described in comment 0 existed, nor to assert that it is now fixed.
I also do not have any metrics to confirm that performance has improved with this bug fix.

As suggested in comment 2, I can say that our automated tests have not revealed a regression in compliance-status behavior that has not already been addressed in other bugzillas. Therefore, I am flipping this bug to VERIFIED, with no adverse regression observed.

[root@jsefler-5 ~]# rpm -q subscription-manager
subscription-manager-1.8.13-1.el5

Additional Info:
No comment on that pycallgraph attachment.
