Bug 1292845
| Summary: | Cleaning up docker space | | |
| --- | --- | --- | --- |
| Product: | OpenShift Container Platform | Reporter: | Miheer Salunke <misalunk> |
| Component: | Containers | Assignee: | Michal Fojtik <mfojtik> |
| Status: | CLOSED NOTABUG | QA Contact: | weiwei jiang <wjiang> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.1.0 | CC: | aos-bugs, bchilds, dmcphers, dmoessne, eminguez, erich, erjones, fgrosjea, geliu, haowang, jfoots, jokerman, jolee, jsafrane, knakayam, mfojtik, miminar, misalunk, mmccomas, mturansk, nschuetz, rhowe, vgoyal, wmeng |
| Target Milestone: | --- | Keywords: | Performance |
| Target Release: | 3.7.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-12 21:22:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1292964 | | |
| Bug Blocks: | | | |
Description
Miheer Salunke
2015-12-18 13:44:48 UTC
What about containers? Are there any containers in the system (`docker ps -a`)?

Containers were deleted. The customer ran `docker rm $(docker ps -a -q)` to delete them all. However, the `docker info` output still seems to report 9 containers:

```
[root@ose-node2 ~]# docker info
Containers: 9
Images: 10
```

It would be a good idea to check the output of `docker ps` with the customer again and make sure all containers have really been deleted. Maybe the OSE garbage collection documentation will be helpful: https://docs.openshift.com/enterprise/3.1/admin_guide/garbage_collection.html

Can anybody explain how deleting volumes on the rootfs can reclaim space on the thin pool? Sure, we can add periodic removal of orphaned volumes to the garbage collector and remove containers with `-v`, but I don't think it solves the original issue. I'll try to reproduce on RHEL.

With docker-1.9.1-10.el7 and thin-pool storage I created around 400 containers with volumes. Deleting the containers freed some of the space. There were 165 orphaned volumes left; deleting them freed space on the rootfs without any impact on the thin pool. After deleting all images, the free space on the thin pool was back to its original value. Shortened `docker info`:

```
$ docker info
Containers: 0
Images: 0
Server Version: 1.9.1-el7
Storage Driver: devicemapper
 Pool Name: vgdocker-tp
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 53.74 MB
 Data Space Total: 32.14 GB
 Data Space Available: 32.09 GB
 Metadata Space Used: 188.4 kB
 Metadata Space Total: 33.55 MB
 Metadata Space Available: 33.37 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.3 Beta (Maipo)
```

I'm not sure what needs to be fixed here.
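Whether the thin pool is actually shrinking can be read off the `Data Space` / `Metadata Space` lines of `docker info`, as in the output above. A minimal sketch that filters and aligns those lines (the `pool_usage` helper name is made up for illustration; the sample text is embedded so the script runs standalone, but on a live host you would pipe `docker info` into it):

```shell
#!/bin/sh
# Sketch: summarize devicemapper thin-pool usage from `docker info` output.
# On a real host:  docker info | pool_usage

pool_usage() {
    # Keep only the Data/Metadata Space lines (real output may indent
    # them with one leading space) and align the values.
    awk -F': ' '/^ ?(Data|Metadata) Space/ {
        sub(/^ /, "", $1)
        printf "%-26s %s\n", $1, $2
    }'
}

# Sample taken from the docker info output in the comment above.
sample=' Data Space Used: 53.74 MB
 Data Space Total: 32.14 GB
 Data Space Available: 32.09 GB
 Metadata Space Used: 188.4 kB
 Metadata Space Total: 33.55 MB
 Metadata Space Available: 33.37 MB'

printf '%s\n' "$sample" | pool_usage
```

Comparing these numbers before and after removing containers, volumes, and images shows which step actually returned space to the pool.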
Shall we investigate further why devicemapper is not freeing the space after images are removed (which I failed to reproduce on the first try with just Docker), or shall we make sure that all orphaned volumes are deleted? Miheer or Alexander, any thoughts?

Michal, agreed that volumes are on the rootfs and removing volumes does not free space in the thin pool. I think in the example above there were still some containers in the system, which implies there must have been some images. I suspect that these containers might have written a lot of data of their own and might be consuming significant space in the thin pool.

Setting as upcoming release, as there were no significant improvements in pruning in 3.4; however, you can now use scheduled jobs (alpha) to automatically prune the images. Documentation PR: https://github.com/openshift/origin/pull/11317

@mfojtik, this bug's original target is the docker volume space reclaim issue, and the last fix is the cronjob to auto-prune images. Could you give more clues about this issue and how to verify it? Many thanks in advance!

Closing as not a bug. Documentation for OpenShift shows how to configure garbage collection: https://docs.openshift.com/container-platform/3.7/admin_guide/garbage_collection.html

Docker storage is meant to be ephemeral; one can manually wipe and recreate the docker storage, or delete all images and containers with docker commands.

Delete all containers:

```
# docker rm -f $(docker ps -aq)
```

Delete all images:

```
# docker rmi $(docker images -q)
```

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days
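The two commands above remove containers and images, but on the Docker versions discussed in this bug, volumes are not removed with their containers unless `docker rm -v` is used, so orphaned volumes can accumulate on the rootfs. A hedged sketch of a manual volume cleanup, assuming Docker 1.9 or later for the `dangling=true` filter (requires a running docker daemon, so it cannot be demonstrated inline):

```shell
# List volumes no longer referenced by any container (Docker >= 1.9).
docker volume ls -f dangling=true

# Remove them; review the list above first, as this is destructive.
docker volume rm $(docker volume ls -q -f dangling=true)

# On Docker >= 1.13, a single command covers the same case:
# docker volume prune
```

Note that, as established in the comments above, this reclaims space on the rootfs only; space in the devicemapper thin pool is returned by removing containers and images, not volumes.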