Bug 1323323 - Need to reclaim space to disk after deleting large number of ceilometer meters
Summary: Need to reclaim space to disk after deleting large number of ceilometer meters
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: mongodb
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: async
Assignee: Flavio Percoco
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-01 21:26 UTC by Jeremy
Modified: 2019-10-10 11:45 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-04 17:51:53 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2215701 0 None None None 2016-04-04 20:22:07 UTC
Red Hat Knowledge Base (Solution) 2219091 0 None None None 2016-04-04 20:22:42 UTC

Description Jeremy 2016-04-01 21:26:36 UTC
Description of problem: Need to determine how to reclaim disk space after deleting a large number of records from the meter database. The meter database contained ~400GB of data. We deleted all records older than 15 days by timestamp, so it now contains about 60GB. We now need to figure out how to reclaim that space on disk. This is a 3-node cluster in an OpenStack environment.


Version-Release number of selected component (if applicable):
[heat-admin@tpacpucctrl1 ~]$ rpm -qa | grep mongo                                                                                                             
mongodb-server-2.6.9-1.el7ost.x86_64
python-pymongo-2.5.2-2.el7ost.x86_64
mongodb-2.6.9-1.el7ost.x86_64


Additional Info:

tripleo:PRIMARY> db.stats()
{
        "db" : "ceilometer",
        "collections" : 7,
        "objects" : 36097501,
        "avgObjSize" : 1693.7871190584633,
        "dataSize" : 61141482224,
        "storageSize" : 346381305376,
        "numExtents" : 197,
        "indexes" : 11,
        "indexSize" : 23036738480,
        "fileSize" : 485028265984,
        "nsSizeMB" : 16,
        "dataFileVersion" : {
                "major" : 4,
                "minor" : 5
        },
        "extentFreeList" : {
                "num" : 0,
                "totalSize" : 0
        },
        "ok" : 1
}
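The gap between the file size on disk and the live data in the db.stats() output above is what the reporter is trying to recover; with the MMAPv1 storage engine used by MongoDB 2.6, deleting documents does not shrink the data files, so df keeps reporting the old usage. A minimal sketch (plain Python, with the byte counts copied from this report) of how much space is actually reclaimable:

```python
# Rough estimate of reclaimable disk space, using the db.stats() numbers
# from this bug report (MongoDB 2.6 / MMAPv1 field names).
stats = {
    "dataSize": 61141482224,    # bytes of live documents
    "indexSize": 23036738480,   # bytes of indexes
    "fileSize": 485028265984,   # bytes allocated in data files on disk
}

def reclaimable_bytes(s):
    """Space held by the data files but not used by documents or indexes."""
    return s["fileSize"] - s["dataSize"] - s["indexSize"]

gib = reclaimable_bytes(stats) / 2**30
print(f"~{gib:.0f} GiB reclaimable")  # → prints "~373 GiB reclaimable"
```

This matches the symptom below: /var/lib/mongodb still holds 482G even though only ~60GB of meter data remains.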

[root@tpacpucctrl2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       556G  523G   34G  94% /
devtmpfs         63G     0   63G   0% /dev
tmpfs            63G   39M   63G   1% /dev/shm
tmpfs            63G  1.4M   63G   1% /run
tmpfs            63G     0   63G   0% /sys/fs/cgroup
tmpfs            13G     0   13G   0% /run/user/994
tmpfs            13G     0   13G   0% /run/user/993
tmpfs            13G     0   13G   0% /run/user/1000
tmpfs            13G     0   13G   0% /run/user/166

[root@tpacpucctrl2 ~]# du -ch /var/lib/mongodb/
1.5G    /var/lib/mongodb/journal
0       /var/lib/mongodb/_tmp
482G    /var/lib/mongodb/
482G    total

Comment 1 Flavio Percoco 2016-04-04 14:18:07 UTC
As mentioned on IRC, running `compact` on newer versions or `repairDatabase` on older ones is probably the best way to do so when the data needs to be preserved. Otherwise, if the data can be deleted, dropping the database should be enough to reclaim the allocated space.
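For reference, the two options above can be invoked roughly as follows. This is a sketch only: the database name ("ceilometer") and collection name ("meter") are taken from this report, the commands need a live mongod, and both should be run on a secondary or in a maintenance window. Note that on MMAPv1 (MongoDB 2.6) `compact` defragments within the existing data files but may not return file space to the OS, while `repairDatabase` does return space but needs free disk roughly equal to the current data set plus 2 GB.

```shell
# Per-collection compaction (blocks the database while running):
mongo ceilometer --eval 'printjson(db.runCommand({ compact: "meter" }))'

# Full rewrite of all data files; releases space back to the OS:
mongo ceilometer --eval 'printjson(db.repairDatabase())'
```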

Comment 2 Jeremy 2016-04-04 17:51:53 UTC
Dropped the DB with the following steps:

1. pcs resource disable openstack-ceilometer* (stop the Ceilometer services writing to the database).
2. Log into mongo and run db.dropDatabase().
3. Check that /var/lib/mongodb/ceilometer* is gone.
4. If needed, restart mongodb with pcs resource disable mongod-clone and then re-enable it.
5. Run "use ceilometer" and then db.stats() to verify the data is gone.
6. pcs resource enable openstack-ceilometer*
7. The collections are now recreated and meters begin to fill the database again.
8. ll -h /var/lib/mongodb to see the new ceilometer db files.
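The steps above can be sketched as one shell sequence. This assumes the Pacemaker resource names and paths shown in this comment and a live cluster; it is an operational outline, not a drop-in script.

```shell
# Stop the Ceilometer services writing to MongoDB
pcs resource disable openstack-ceilometer*

# Drop the ceilometer database
mongo ceilometer --eval 'printjson(db.dropDatabase())'

# Verify the data files are gone
ls /var/lib/mongodb/ceilometer*   # should report "No such file or directory"

# If the files linger, bounce mongod via the cluster
pcs resource disable mongod-clone
pcs resource enable mongod-clone

# Confirm the database is empty, then restart the writers
mongo ceilometer --eval 'printjson(db.stats())'
pcs resource enable openstack-ceilometer*
ls -lh /var/lib/mongodb           # new, small ceilometer files appear
```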

Now the environment is stable, with root filesystem usage at only 12% instead of 95%.

