Bug 1273365
**Summary:** Can't unmount a container image previously mounted with atomic mount

| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Alex Jia <ajia> |
| Component: | atomic | Assignee: | Brent Baude <bbaude> |
| Status: | CLOSED ERRATA | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.2 | CC: | atomic-bugs, dwalsh, lsm5, ovasik, walters |
| Target Milestone: | rc | Keywords: | Extras, Regression |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | atomic-1.6-1.gitca1e384.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1274518 (view as bug list) | Environment: | |
| Last Closed: | 2016-03-31 23:25:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
**Description** (Alex Jia, 2015-10-20 09:45:30 UTC)
---

Daniel Walsh (comment #2):

Is the backend the same, both devmapper? Or is this on overlayfs?

---

(In reply to Daniel Walsh from comment #2)
> Is the backend the same, both devmapper? Or is this on overlayfs?

The storage backend is devicemapper; overlayfs is not in use:

```
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ rpm -q kernel
kernel-3.10.0-229.14.1.el7.x86_64
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ lsmod | grep -i overlay
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ dmesg | grep -i overlay
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ sudo docker info | grep -i -A3 storage
Storage Driver: devicemapper
 Pool Name: atomicos-docker--pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
```

---

Daniel Walsh (comment #4):

Brent, does the latest fix for atomic unmount fix this problem?

---

(In reply to Daniel Walsh from comment #4)
> Brent, does the latest fix for atomic unmount fix this problem?

Yes, I believe it will. I also just submitted a new PR to fix a bug related to atomic mount and XFS ... which is the default on RHEL/Atomic. It would be prudent to pick that one up too.

---

Fixed in atomic-1.6.

---

This issue has been verified on atomic-1.6-1.gitca1e384.el7.x86_64, so the bug is moved to VERIFIED status; the test details are as follows.

```
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ sudo atomic host status
  TIMESTAMP (UTC)         VERSION            ID         OSNAME            REFSPEC
* 2015-10-23 07:19:49     7.2.internal.0.47  4bd90ff2fd rhel-atomic-host  rhelah-autobuild:rhel-atomic-host/7.2/x86_64/autobrew/buildmaster
  2015-10-15 09:13:14     7.1.6              9f139ac644 rhel-atomic-host  rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ rpm -q atomic
atomic-1.6-1.gitca1e384.el7.x86_64
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ sudo atomic images
  REPOSITORY          TAG      IMAGE ID       CREATED            VIRTUAL SIZE
  docker.io/busybox   latest   0064fda8c45d   2015-10-14 19:36   1.11 MB
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ sudo atomic mount busybox /mnt
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ df -hT
Filesystem                Type      Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root xfs       3.0G  2.1G  866M  72% /
devtmpfs                  devtmpfs  902M     0  902M   0% /dev
tmpfs                     tmpfs     921M     0  921M   0% /dev/shm
tmpfs                     tmpfs     921M  304K  920M   1% /run
tmpfs                     tmpfs     921M     0  921M   0% /sys/fs/cgroup
/dev/vda1                 xfs       297M  139M  159M  47% /boot
tmpfs                     tmpfs     185M     0  185M   0% /run/user/1000
/dev/dm-4                 xfs        10G  1.9M   10G   1% /var/mnt
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ sudo atomic unmount /mnt
[cloud-user@plat-infra-382a4f35-aa37-447b-a7f6-c8b8181165fd ~]$ df -hT
Filesystem                Type      Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root xfs       3.0G  2.1G  866M  72% /
devtmpfs                  devtmpfs  902M     0  902M   0% /dev
tmpfs                     tmpfs     921M     0  921M   0% /dev/shm
tmpfs                     tmpfs     921M  300K  920M   1% /run
tmpfs                     tmpfs     921M     0  921M   0% /sys/fs/cgroup
/dev/vda1                 xfs       297M  139M  159M  47% /boot
tmpfs                     tmpfs     185M     0  185M   0% /run/user/1000
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0527.html
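For anyone re-running the verification above, a minimal pass/fail sketch of the same mount/unmount round trip follows. The image name and mount point mirror the transcript; the findmnt assertions are an addition for automation and were not part of the original QA steps.

```bash
#!/bin/bash
# Sketch of a regression check for this bug: mount an image with `atomic mount`,
# confirm it appears in the mount table, then confirm `atomic unmount` removes it.
set -euo pipefail

IMAGE=busybox   # image used in the verification transcript above
MNT=/mnt        # on Atomic Host this resolves to /var/mnt, as in the df output

sudo atomic mount "$IMAGE" "$MNT"

# findmnt canonicalizes paths, so the /mnt -> /var/mnt symlink is handled.
if ! findmnt "$MNT" > /dev/null; then
    echo "FAIL: $MNT is not mounted after 'atomic mount'" >&2
    exit 1
fi

sudo atomic unmount "$MNT"

# With the fix in atomic-1.6, the unmount must succeed and the entry must be gone.
if findmnt "$MNT" > /dev/null; then
    echo "FAIL: $MNT is still mounted after 'atomic unmount'" >&2
    exit 1
fi

echo "PASS: atomic mount/unmount round trip succeeded"
```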
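On the XFS issue mentioned in the discussion: mounting a device-mapper snapshot of an XFS-backed image by hand ordinarily fails because the snapshot carries the same filesystem UUID as an already-mounted volume, and XFS rejects duplicate UUIDs by default. The usual workaround is the nouuid mount option, which is presumably (an assumption, not confirmed by this report) what the atomic mount PR adds for XFS. Illustrative only, with the device path taken from the df output above:

```bash
# Illustrative manual mount of an XFS-backed image snapshot.
# Without nouuid, XFS refuses to mount a filesystem whose UUID duplicates
# one that is already mounted. /dev/dm-4 is the device from the df output
# above; it will differ on other systems.
sudo mount -o ro,nouuid /dev/dm-4 /mnt
```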