Description of problem:
OpenShift people are looking for availability of thin_ls on RHEL. It is useful for figuring out the COW-layer storage usage of a container. Update the device-mapper-persistent-data package to the latest version for 7.3.
Here is the upstream discussion asking for thin_ls: https://github.com/google/cadvisor/issues/959#issuecomment-191326562
This is needed for both Kubernetes and OpenShift. thin_ls gives us the ability to determine the amount of storage that a container has used in its COW layer.
Currently thin_ls does not operate on live metadata:

$ ./thin_ls -o DEV,EXCLUSIVE /dev/mapper/fedora-docker--pool_tmeta
syscall 'open' failed: Device or resource busy
Note: you cannot run this tool with these options on live metadata.

This is a problem for the cAdvisor use case. Strangely, it seems to work when docker is using the loopback thin device rather than a real device. Still looking into this.
In order for thin_ls to work on a live thin pool, a metadata snapshot must be taken:

dmsetup message /dev/mapper/fedora-docker--pool 0 "reserve_metadata_snap"

Then thin_ls will work. After thin_ls is run, the snapshot needs to be released:

dmsetup message /dev/mapper/fedora-docker--pool 0 "release_metadata_snap"
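The reserve/run/release sequence above can be sketched as a small wrapper. This is a hypothetical helper, not part of the package; the function name and the example device paths are assumptions for illustration, and it requires root plus an existing thin pool:

```shell
# Hypothetical wrapper: run thin_ls against a metadata snapshot of a live
# thin pool, releasing the snapshot afterwards even if thin_ls fails.
thin_ls_snap() {
    local pool="$1"     # pool device, e.g. /dev/mapper/fedora-docker--pool
    local tmeta="$2"    # metadata device, e.g. /dev/mapper/fedora-docker--pool_tmeta
    shift 2             # remaining args are passed through to thin_ls

    # Reserve a metadata snapshot so thin_ls can read consistent metadata
    # while the pool is live.
    dmsetup message "$pool" 0 reserve_metadata_snap || return 1

    thin_ls --metadata-snap "$@" "$tmeta"
    local rc=$?

    # Always release the snapshot, even if thin_ls errored out.
    dmsetup message "$pool" 0 release_metadata_snap
    return $rc
}
```

A caller could then do, for example, `thin_ls_snap /dev/mapper/fedora-docker--pool /dev/mapper/fedora-docker--pool_tmeta -o DEV,EXCLUSIVE` to get per-device exclusive usage without leaving a stale metadata snapshot behind.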
# rpm -q device-mapper-persistent-data
device-mapper-persistent-data-0.6.2-0.1.rc8.el7.x86_64
# rpm -ql device-mapper-persistent-data | grep thin_ls
/usr/sbin/thin_ls
/usr/share/man/man8/thin_ls.8.gz
# lvcreate -L 200M -T vgtest/mythinpool -V1G -n thin1
# lvcreate -T vgtest/mythinpool -V1G -n thin2
# dmsetup message /dev/mapper/vgtest-mythinpool-tpool 0 reserve_metadata_snap
# thin_ls --metadata-snap /dev/mapper/vgtest-mythinpool_tmeta
DEV MAPPED CREATE_TIME SNAP_TIME
  1      0           0         0
  2      0           0         0
# dmsetup message /dev/mapper/vgtest-mythinpool-tpool 0 release_metadata_snap
*** Bug 1355797 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2211.html