Bug 1279547
Summary: | cinder-api consumes around 10GB RAM when running a "snapshot-list" command on a large set of data | ||
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Eric Harney <eharney> |
Component: | openstack-cinder | Assignee: | Gorka Eguileor <geguileo> |
Status: | CLOSED ERRATA | QA Contact: | lkuchlan <lkuchlan> |
Severity: | high | Docs Contact: | |
Priority: | urgent | ||
Version: | 7.0 (Kilo) | CC: | adahms, eharney, fpercoco, geguileo, jobernar, jschluet, nlevinki, scohen, sgotliv, srevivo |
Target Milestone: | async | Keywords: | ZStream |
Target Release: | 7.0 (Kilo) | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | openstack-cinder-2015.1.3-4.el7ost | Doc Type: | Bug Fix |
Doc Text: |
Previously, Cinder calculated filters, limits, and offset locally instead of in the database. This would result in large amounts of memory being required to perform these calculations because all non-deleted entries in the database would need to be retrieved. With this update, these calculations are now performed in the database, and only the data required for listing the output is retrieved.
|
Story Points: | --- |
Clone Of: | 1278576 | Environment: | |
Last Closed: | 2016-04-26 15:29:37 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
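The Doc Text above describes moving filter, limit, and offset calculations out of the API process and into the database. A minimal sketch of the difference, using Python's built-in sqlite3 in place of Cinder's real database layer (the function names and schema here are illustrative, not Cinder's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (id INTEGER, deleted INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO snapshots VALUES (?, 0, 'available')",
    ((i,) for i in range(10000)),
)

def list_snapshots_old(limit, offset):
    # Old behavior: every non-deleted row is pulled into the API process,
    # then limit/offset are applied in Python. Memory use grows with the
    # size of the table, not the size of the requested page.
    rows = conn.execute(
        "SELECT id FROM snapshots WHERE deleted = 0 ORDER BY id"
    ).fetchall()
    return rows[offset:offset + limit]

def list_snapshots_new(limit, offset):
    # Fixed behavior: the database applies LIMIT/OFFSET, so only the
    # requested page of rows is ever materialized in the API process.
    return conn.execute(
        "SELECT id FROM snapshots WHERE deleted = 0 ORDER BY id "
        "LIMIT ? OFFSET ?",
        (limit, offset),
    ).fetchall()

assert list_snapshots_old(5, 100) == list_snapshots_new(5, 100)
```

Both functions return the same page; only the amount of data fetched from the database differs, which is where the ~10GB of resident memory was going before the fix.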
Description
Eric Harney
2015-11-09 17:13:11 UTC
*** Bug 1279562 has been marked as a duplicate of this bug. ***

This fix is not working correctly; it causes the problem seen in bug 1287621.

Tested using:

```
openstack-cinder-2015.1.2-5.el7ost.noarch
python-cinder-2015.1.2-5.el7ost.noarch
python-cinderclient-1.2.1-1.el7ost.noarch
```

Verification flow:

* Create a volume:

```
[root@cougar01 ~(keystone_admin)]# cinder create 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2016-04-03T09:36:00.148098           |
| display_description | None                                 |
| display_name        | None                                 |
| encrypted           | False                                |
| id                  | f784b4bb-5f92-4016-8cc8-7dbae227e462 |
| metadata            | {}                                   |
| multiattach         | false                                |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
```

* Insert a large number of snapshots related to that specific volume into the snapshots table in the cinder database:

```
[root@cougar01 ~(keystone_admin)]# for i in {1..10000}; do mysql cinder -e "INSERT INTO snapshots(id,deleted,status,volume_id,user_id,project_id) values($i,0,'available','f784b4bb-5f92-4016-8cc8-7dbae227e462','635bff15a7d744458779a181f451bbdf','fdb3fb93b0474349afae4bcffe7855c1');"; done
```

* Run the `cinder snapshot-list` command for that volume_id:

```
[root@cougar01 ~(keystone_admin)]# cinder snapshot-list
+-------+--------------------------------------+-----------+--------------+------+
|   ID  |              Volume ID               |   Status  | Display Name | Size |
+-------+--------------------------------------+-----------+--------------+------+
|   1   | f784b4bb-5f92-4016-8cc8-7dbae227e462 | available |      -       |  -   |
|   10  | f784b4bb-5f92-4016-8cc8-7dbae227e462 | available |      -       |  -   |
|  100  | f784b4bb-5f92-4016-8cc8-7dbae227e462 | available |      -       |  -   |
|  1000 | f784b4bb-5f92-4016-8cc8-7dbae227e462 | available |      -       |  -   |
| 10000 | f784b4bb-5f92-4016-8cc8-7dbae227e462 | available |      -       |  -   |
. . .
```

* cinder-api memory usage while listing:

```
[root@cougar01 ~]# top | grep cinder-api
12277 cinder    20   0  449884  63836   7580 S  5.3  0.1   6:43.27 cinder-api
12277 cinder    20   0  449884  63836   7580 S  1.7  0.1   6:43.32 cinder-api
12277 cinder    20   0  449884  63836   7580 S  2.0  0.1   6:43.38 cinder-api
12277 cinder    20   0  449884  63836   7580 S  2.0  0.1   6:43.44 cinder-api
12277 cinder    20   0  449884  63836   7580 S  1.7  0.1   6:43.49 cinder-api
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0688.html
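The verification loop above forks one mysql client process per row, which is slow for 10,000 inserts. A hypothetical Python helper (not part of the verification flow; column names and values copied from the loop above) that builds a single multi-row INSERT statement instead, which can then be piped into one `mysql cinder` invocation:

```python
def bulk_snapshot_insert(volume_id, user_id, project_id, count):
    # Build one INSERT with `count` value tuples rather than `count`
    # separate statements; MySQL accepts multi-row VALUES lists.
    values = ",".join(
        "(%d,0,'available','%s','%s','%s')" % (i, volume_id, user_id, project_id)
        for i in range(1, count + 1)
    )
    return (
        "INSERT INTO snapshots(id,deleted,status,volume_id,user_id,project_id) "
        "VALUES " + values + ";"
    )

sql = bulk_snapshot_insert(
    "f784b4bb-5f92-4016-8cc8-7dbae227e462",
    "635bff15a7d744458779a181f451bbdf",
    "fdb3fb93b0474349afae4bcffe7855c1",
    10000,
)
# e.g. write `sql` to a file and run: mysql cinder < populate.sql
```

This is purely a convenience for reproducing the bug faster; the resulting table contents are identical to those produced by the shell loop.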