Bug 1369082
| Summary: | Large virt-who json may cause performance issue | |||
|---|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Justin Sherrill <jsherril> | |
| Component: | Content Management | Assignee: | Justin Sherrill <jsherril> | |
| Status: | CLOSED ERRATA | QA Contact: | Perry Gagne <pgagne> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | medium | |||
| Version: | 6.2.0 | CC: | aperotti, bbuckingham, bkearney, cdonnell, chrobert, daniele, hsun, inecas, jcallaha, jsherril, ktordeur, mmccune, mmello, nshaik, oshtaier, pgagne, pmoravec, shihliu, zhunting | |
| Target Milestone: | Unspecified | Keywords: | Triaged | |
| Target Release: | Unused | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| URL: | http://projects.theforeman.org/issues/16228 | |||
| Fixed In Version: | tfm-rubygem-katello-3.0.0.82-1 | |||
| Cloned As: | 1385860, 1389103, 1389104 | Environment: | |
| Last Closed: | 2016-11-10 08:23:06 UTC | Type: | Bug | |
| Bug Blocks: | 1353215, 1385860, 1389103, 1389104 | |||
Description
Justin Sherrill
2016-08-22 12:55:28 UTC
Created redmine issue http://projects.theforeman.org/issues/16228 from this bug.

Upstream bug assigned to jsherril. Upstream bug component is Content Management.

Moving this bug to POST for triage into Satellite 6, since the upstream issue http://projects.theforeman.org/issues/16228 has been resolved.

Could you help provide more details?

1. virt-who package version?
2. virt-who configuration options?

==== for virt-who-0.17 ====

1) If the VIRTWHO_DEBUG option is disabled, the JSON info should not be printed out.
2) Regarding the check-in interval: if VIRTWHO_INTERVAL <= 0, the default check-in interval of 60s is used; if VIRTWHO_INTERVAL > 60, virt-who checks in at the configured interval. (A sample configuration is sketched after the attachment note below.)

We are seeing cases where the Hypervisors task in Katello consumes a *huge* amount of space in the dynflow_actions table, because we apparently store all of the JSON from the virt-who transaction for every single API call (a query to gauge that table's size is sketched later in this report).

HOTFIX INSTRUCTIONS

I've produced hotfix packages for the customer's installed version of Satellite (6.2.2.1 on el7) and tested them on a similar Satellite. With this fix you will no longer receive the whole JSON output from virt-who in the task.

Install instructions:

1) Download the rpm attached to this Bugzilla to your Satellite server.
2) Stop Katello services: katello-service stop
3) Install the package: yum localinstall tfm-rubygem-katello-3.0.0.80-2.bz1369082_1357878.el7sat.noarch.rpm
4) Start Katello services: katello-service start

This should properly start Katello. If you check the Hypervisors task now, it should contain only a small amount of data recording that virt-who has checked in.

Created attachment 1211777 [details]
tfm-rubygem-katello-3.0.0.80-2.bz1369082_1357878.el7sat.noarch.rpm

> --- Comment #15 from jcallaha ---
> It looks like there are two issues here.
>
> 1. Large virt-who check-in performance issues.
>
> 2. Repeated check-ins when actions are performed on the virt backend.

Actually, just (1) is what we are resolving in this bug. The frequency of the actual tasks isn't at issue; it is the size of the task that is stored in the DB.

> If that is correct, then I have a few questions.
>
> 1. Can we use a large json input to simulate the large check-in?
> 1.a. If not, then does anyone have access to a large live system?
> 1.b. This might be unnecessary anyway as long as we don't see the full json in
> the task, correct?

Yup, will get one attached to this BZ that you can use.

> 2. Would the repeated checkin tasks be seen in the foreman-tasks page?
> 2.a. If so, then what exactly should we be looking for (no tasks, minimal
> tasks)?
> 2.b. If not, then where should we be looking to make sure the behavior is what
> we are expecting?

IMO (and Justin can also comment): what we would want to do is check the size of the task object when running an API call on the hypervisors task. Before this fix, a customer saw a 10MB+ JSON extract on one task when running:

# curl -k -u admin:changeme "https://localhost/foreman_tasks/api/tasks?page=1&per_page=1"

Before the fix this was a very large file, even for one task; after the fix it should be KB, not MB, in size.
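To verify, you can measure the size of that single-task API response directly. A minimal sketch, assuming the default admin credentials and a Satellite reachable on localhost; adjust user, password, and host for your environment:

```bash
# Fetch one task from the foreman_tasks API and report the response size in
# bytes; before the fix this could be tens of MB, afterwards it should be KB.
curl -sk -u admin:changeme \
  "https://localhost/foreman_tasks/api/tasks?page=1&per_page=1" | wc -c
```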
Created attachment 1212914 [details]
tfm-rubygem-katello-3.0.0.80-2.1369082_1357878.el6sat.noarch.rpm
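As background for the VIRTWHO_DEBUG and VIRTWHO_INTERVAL discussion earlier in this report, here is a minimal sketch of the relevant part of /etc/sysconfig/virt-who as it looks for virt-who 0.17; the values shown are illustrative, not recommendations:

```bash
# /etc/sysconfig/virt-who (sketch, illustrative values)

# When disabled (0), virt-who should not print the host/guest JSON to its log.
VIRTWHO_DEBUG=0

# Check-in interval in seconds: values <= 0 fall back to the 60s default,
# values > 60 are honored as configured.
VIRTWHO_INTERVAL=3600
```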
HOTFIX INSTRUCTIONS

I've produced hotfix packages for the customer's installed version of Satellite (6.2.3 on el7) and tested them on a similar Satellite. With this fix you will no longer receive the whole JSON output from virt-who in the task.

Install instructions:

1) Download the following rpms attached to this Bugzilla to your Satellite server:
   tfm-rubygem-katello-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm
   tfm-rubygem-katello_ostree-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm
2) Stop Katello services: katello-service stop
3) Install the packages: yum localinstall tfm-rubygem-katello-3.0.0.81-2.BZ_1368746_1365952.noarch.rpm tfm-rubygem-katello_ostree-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm
4) Start Katello services: katello-service start

This should properly start Katello. If you check the Hypervisors task now, it should contain only a small amount of data recording that virt-who has checked in. Remember, this hotfix is for Satellite 6.2.3; if you need it for another version, please contact us.
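The same procedure as a shell transcript; a minimal sketch assuming both rpms have been downloaded to the current directory on the Satellite server (the full filenames from step 1, including the el7sat tag, are used here):

```bash
# Stop all Katello-related services before swapping packages.
katello-service stop

# Install both hotfix rpms from the local directory in one transaction.
yum localinstall \
  tfm-rubygem-katello-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm \
  tfm-rubygem-katello_ostree-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm

# Bring the services back up.
katello-service start
```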
Created attachment 1217424 [details]
tfm-rubygem-katello-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm
Created attachment 1217426 [details]
tfm-rubygem-katello_ostree-3.0.0.81-2.BZ_1368746_1365952.el7sat.noarch.rpm
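To gauge how much of the dynflow_actions table the stored virt-who JSON occupies (the bloat described earlier in this report), you can ask PostgreSQL for the table's total on-disk size. A minimal sketch, assuming the default Satellite database name of foreman, run on the Satellite server:

```bash
# Report the total size of dynflow_actions, including indexes and TOAST
# storage (where large JSON payloads end up).
sudo -u postgres psql foreman \
  -c "SELECT pg_size_pretty(pg_total_relation_size('dynflow_actions'));"
```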
I see that tfm-rubygem-katello_ostree has been added to the list of patched packages, and that the Bugzilla numbers referenced in the file names differ. Can you please explain the difference between the two patches, for 6.2.2.1 and for 6.2.3?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:2700

The two posted rpms were hotfixes, not the final released rpms; that is how the normal hotfix process goes. 6.2.3 includes many more fixes that are not available with just the hotfix, but the patch for this particular issue is the same.
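To tell whether a hotfix build or the released erratum build is installed, query the installed package versions. A minimal sketch; per this bug's metadata the fixed version is tfm-rubygem-katello-3.0.0.82-1, while the hotfix builds carry a bz/BZ suffix in the release field:

```bash
# Show the installed katello rubygems; hotfix builds embed the BZ number(s)
# in the release tag, errata builds do not.
rpm -q tfm-rubygem-katello tfm-rubygem-katello_ostree
```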