Bug 1323323

Summary: Need to reclaim disk space after deleting a large number of ceilometer meters

Product: Red Hat OpenStack
Component: mongodb
Version: 7.0 (Kilo)
Reporter: Jeremy <jmelvin>
Assignee: Flavio Percoco <fpercoco>
QA Contact: yeylon <yeylon>
CC: fpercoco, scorcora, srevivo, yeylon
Status: CLOSED NOTABUG
Priority: high
Target Milestone: async
Keywords: Unconfirmed
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-04-04 17:51:53 UTC

Description Jeremy 2016-04-01 21:26:36 UTC
Description of problem: We need to determine how to reclaim disk space after deleting a large number of records from the meter database. The meter database contained ~400GB of data; we deleted all records older than 15 days, so it now holds about 60GB. Now we need to figure out how to return that space to disk. This is a 3-node cluster in an OpenStack environment.


Version-Release number of selected component (if applicable):
[heat-admin@tpacpucctrl1 ~]$ rpm -qa | grep mongo                                                                                                             
mongodb-server-2.6.9-1.el7ost.x86_64
python-pymongo-2.5.2-2.el7ost.x86_64
mongodb-2.6.9-1.el7ost.x86_64


Additional Info:

tripleo:PRIMARY> db.stats()
{
        "db" : "ceilometer",
        "collections" : 7,
        "objects" : 36097501,
        "avgObjSize" : 1693.7871190584633,
        "dataSize" : 61141482224,
        "storageSize" : 346381305376,
        "numExtents" : 197,
        "indexes" : 11,
        "indexSize" : 23036738480,
        "fileSize" : 485028265984,
        "nsSizeMB" : 16,
        "dataFileVersion" : {
                "major" : 4,
                "minor" : 5
        },
        "extentFreeList" : {
                "num" : 0,
                "totalSize" : 0
        },
        "ok" : 1
}
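A quick way to read the db.stats() output above: with the MMAPv1 storage engine used by mongodb 2.6, deleting documents does not shrink the data files, so fileSize stays at the high-water mark reached before the deletes. A minimal sketch, using the exact byte counts pasted above:

```shell
#!/bin/sh
# Sketch: interpret the db.stats() figures above (MMAPv1 / mongodb 2.6).
# MMAPv1 reuses freed extents internally but never returns them to the OS,
# which is why fileSize (~485 GB) dwarfs the live data (~61 GB).
dataSize=61141482224      # live documents, ~61 GB
indexSize=23036738480     # indexes, ~23 GB
fileSize=485028265984     # preallocated data files on disk, ~485 GB

# Disk held by the data files but no longer backing live data or indexes:
reclaimable=$((fileSize - dataSize - indexSize))
echo "reclaimable: ${reclaimable} bytes (~$((reclaimable / 1000000000)) GB)"
```

This matches the df/du output below: ~400GB of the 482GB under /var/lib/mongodb is dead space that only a rewrite of the data files (or dropping the database) can give back.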

[root@tpacpucctrl2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       556G  523G   34G  94% /
devtmpfs         63G     0   63G   0% /dev
tmpfs            63G   39M   63G   1% /dev/shm
tmpfs            63G  1.4M   63G   1% /run
tmpfs            63G     0   63G   0% /sys/fs/cgroup
tmpfs            13G     0   13G   0% /run/user/994
tmpfs            13G     0   13G   0% /run/user/993
tmpfs            13G     0   13G   0% /run/user/1000
tmpfs            13G     0   13G   0% /run/user/166

[root@tpacpucctrl2 ~]# du -ch /var/lib/mongodb/
1.5G    /var/lib/mongodb/journal
0       /var/lib/mongodb/_tmp
482G    /var/lib/mongodb/
482G    total

Comment 1 Flavio Percoco 2016-04-04 14:18:07 UTC
As mentioned on IRC, using `compact` on newer versions or `repairDatabase` on older ones is probably the best way to do this when the data needs to be preserved. Otherwise, in cases where the data can simply be deleted, dropping the database should be enough to return the allocated space to the OS.
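A sketch of the two options mentioned, as they would be issued from the mongo shell. The collection name "meter" is an assumption based on the ceilometer schema and may differ; both operations block the database, and `repairDatabase` needs free disk space of roughly the current data set size plus 2GB, which matters here since / is already 94% full. The invocations are left commented so nothing runs against a live cluster:

```shell
#!/bin/sh
# Hypothetical commands only; verify collection names with `show collections`.
# Option 1 (newer storage engines): defragment one collection in place.
COMPACT='db.runCommand({ compact: "meter" })'
# Option 2 (MMAPv1 / 2.6): rewrite the data files, returning freed space
# to the OS. Requires free space >= current data set size + 2GB.
REPAIR='db.repairDatabase()'

# Uncomment to run on the cluster:
# mongo ceilometer --eval "$COMPACT"
# mongo ceilometer --eval "$REPAIR"
echo "compact: $COMPACT"
echo "repair:  $REPAIR"
```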

Comment 2 Jeremy 2016-04-04 17:51:53 UTC
Dropped the DB with the following steps:

1. pcs resource disable openstack-ceilometer*
2. Log into mongo and run db.dropDatabase().
3. Check that /var/lib/mongodb/ceilometer* is gone.
4. You may have to restart mongodb: pcs resource disable mongod-clone, then re-enable it.
5. Run `use ceilometer` then db.stats() to confirm the data is gone.
6. pcs resource enable openstack-ceilometer*
7. You will now see that the tables are back and meters begin to fill the database once again.
8. ll -h /var/lib/mongodb to see the new ceilometer db files being created.
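The steps above can be consolidated into a small runbook. This is a sketch, not a verified procedure: the resource names (openstack-ceilometer*, mongod-clone) are copied from the comment and may differ per deployment, and the script defaults to a dry run that only prints each command. Set DRY_RUN=0 on the actual controller to execute.

```shell
#!/bin/sh
# Dry-run by default: print the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run pcs resource disable 'openstack-ceilometer*'   # 1. stop the writers
run mongo ceilometer --eval 'db.dropDatabase()'    # 2. drop the database
run ls -lh /var/lib/mongodb/                       # 3. ceilometer* files gone?
run pcs resource disable mongod-clone              # 4. bounce mongod if needed
run pcs resource enable mongod-clone
run mongo ceilometer --eval 'db.stats()'           # 5. confirm the data is gone
run pcs resource enable 'openstack-ceilometer*'    # 6. restart the meters
```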

Now the environment is stable, with / at only 12% use instead of 95%.