Bug 753105
| Field | Value |
|---|---|
| Summary: | [regression] 'lvremove' fails sometimes to remove snapshot volumes |
| Product: | Fedora |
| Component: | lvm2 |
| Status: | CLOSED NEXTRELEASE |
| Severity: | high |
| Priority: | unspecified |
| Version: | 16 |
| Hardware: | i686 |
| OS: | Linux |
| Reporter: | Artur Lipowski <alipowski> |
| Assignee: | Peter Rajnoha <prajnoha> |
| QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| CC: | agk, bmarzins, bmr, bugzilla, bugzilla-redhat, centaur, cyberrider, davidz, dwysocha, e_rhbugzilla, frank, heinzm, herrold, iand, john, jonathan, kay, kparal, kueda, liko, llowrey, lvm-team, mark, mbroz, msnitzer, ndevos, nls1729, non7top, panyongzhi, prajnoha, prockai, pza, rh-bugzilla, robertn, walter.haidinger, wolfgang.pichler, zart, zkabelac |
| Doc Type: | Bug Fix |
| Clone Of: | 712100 |
| Last Closed: | 2013-01-16 08:48:29 UTC |
Description
Artur Lipowski
2011-11-11 11:03:13 UTC
(In reply to comment #0)
> +++ This bug was initially created as a clone of Bug #712100 +++
>
> Still present in F15

The original report is already for F15. I assume you meant F16, so I'm changing the version.

There are patches upstream for this (a new "retry_deactivation = 1" option in lvm.conf). This appears in lvm2 v2.02.89. However, this version is still not released, since it includes a lot of other (and more important) changes that still require some review. We'd like to test all these changes in Fedora rawhide first to avoid any other regressions. Once this is tested in rawhide, we'll backport this patch to the other Fedora releases. I'm sorry for any inconvenience.

(In reply to comment #1)
« snip - snip »
> There are patches upstream for this (a new "retry_deactivation = 1" option in
> lvm.conf). This appears in lvm2 v2.02.89. However, this version is still not
« snip - snip »

Peter, I presume that "retry_deactivation" is a boolean parameter to "retry the volume deactivation process" and not a count of "maximum deactivate retries"? The parameter name *is* open to interpretation. Can you point us in the direction of a "NEW_FEATURES" document of the other you-beaut stuff coming in the later versions of lvm2? (I've found the wiki and other doco, but it doesn't seem current; last updated in 2009 and/or 2010.) Cheers!

It is nice to know what is coming soon. But how can I remove a volume under Fedora 16 in a reliable way?

(In reply to comment #3)
> It is nice to know what is coming soon. But how can I remove a volume under
> Fedora 16 in a reliable way?
For snapshot volumes, I've been performing the following, which has worked reliably for me:

```shell
### I have to use "dmsetup remove" to deactivate the snapshots first.
### Volume list for dmsetup looks like "vg-vol1 vg-vol2 vg-vol3" etc,
### ie dmsetup uses hyphens to separate the VG component from the LV.
for SNAPVOL in ${DM_VOLUME_LIST}; do
    printf "Deactivating snapshot volume %s\n" ${SNAPVOL}
    dmsetup remove ${SNAPVOL}
    dmsetup remove ${SNAPVOL}-cow
    ## For some reason, the copy-on-write devices aren't cleaned up
    ## auto-magically, so I have to remove them auto-manually.
done

## Okay - now we can remove the snapshot logical volumes.
## Volume list for lvremove looks like "vg/vol1 vg/vol2 vg/vol3" etc,
## ie lvremove uses slashes to separate the VG component from the LV.
lvremove -f ${LV_VOLUME_LIST}
```

The above is taken from a working script I use to snapshot cyrus-imap file systems (after quiescing cyrus first) so that they can be backed up while still letting cyrus-imap operate. I hope this gives you some ideas for your own needs.

I use

```shell
run_lvremove() {
    $DMSETUP remove "/dev/$1" || :
    $DMSETUP remove "/dev/$1-cow" 2>/dev/null || :
    /sbin/udevadm control --stop-exec-queue || :
    $LVM lvchange $NOUDEVSYNC --quiet -an "$1" || :
    /sbin/udevadm control --start-exec-queue || :
    $LVM lvremove --quiet -f "$1" && sleep 5
}
```

but still get

```
Can't change snapshot logical volume ".nfs4.backup"
LV vg01/.nfs4.backup in use: not deactivating
Unable to deactivate logical volume ".nfs4.backup"
```

The leftover from this is not marked as a snapshot anymore:

```
# lvscan
  ACTIVE            '/dev/vg01/nfs4' [4,00 MiB] inherit
  ACTIVE   Original '/dev/vg01/data' [135,00 GiB] inherit
  ...
  ACTIVE            '/dev/vg01/.nfs4.backup' [1,00 GiB] inherit
  ACTIVE            '/dev/vg01/.virt.backup' [1,00 GiB] inherit
  ACTIVE   Snapshot '/dev/vg01/.data.backup' [5,00 GiB] inherit
```

And somehow, subsequent 'mount' operations fail with obscure errors like:

```
mount: unknown filesystem type 'DM_snapshot_cow'
```

F16 has rendered working with LVM nearly impossible :(

IMHO the examples here are not valid - lvm2 commands and dmsetup commands can't be mixed together; they are incompatible in terms of udev synchronization. So e.g. the example in comment 5 isn't really a good idea at all - what is it supposed to be doing?

While saying this - I have a patch proposal for upstream inclusion. The current retry code only understands mounted ext4 and fuse filesystems and will not try the retry mechanism for other filesystems; my patch proposal is a bit smarter, as it goes through the /proc/mounts entries. It's not smart enough yet, I think, but it should give much better results.

Created attachment 573282 [details]
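The two naming schemes mentioned above (lvremove takes `vg/lv`, while dmsetup uses `vg-lv` with any literal hyphen inside the VG or LV name doubled, so `Home-backup` becomes `Home--backup` under /dev/mapper) can be bridged with a small helper. A minimal sketch only; `dm_name` is a hypothetical function of my own, not an lvm2 command:

```shell
# Convert a "vg/lv" pair to the device-mapper node name used under
# /dev/mapper: double every '-' inside each component, then join the
# two components with a single '-'. (dm_name is a hypothetical helper.)
dm_name() {
    vg=${1%%/*}
    lv=${1#*/}
    printf '%s-%s\n' "$(printf '%s' "$vg" | sed 's/-/--/g')" \
                     "$(printf '%s' "$lv" | sed 's/-/--/g')"
}

dm_name Backup/Home-backup   # → Backup-Home--backup
dm_name vg00/snap_varlog     # → vg00-snap_varlog
```

This matches the `/dev/mapper/Backup-Home--backup` and `vg00-snap_varlog` names that appear later in this thread.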
Patch for scanning /proc/mounts
Patch proposal to check /proc/mounts entries to find out whether the device is mounted.
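The check the patch performs (is this device mounted?) can be illustrated with a small shell sketch. This is my own illustration, not the attached patch; it reads /proc/self/mountinfo rather than /proc/mounts, since the third field of each mountinfo record carries the mounted device's major:minor, which is safer than matching on path names:

```shell
# Print the major:minor of every mounted device recorded in a
# mountinfo-format stream (field 3 of each record; see proc(5)).
# Defaults to the live /proc/self/mountinfo when no file is given.
mounted_devnos() {
    awk '{ print $3 }' "${1:-/proc/self/mountinfo}"
}

# Example with a single mountinfo-format record; a real check would
# compare the target LV's major:minor against this list.
printf '36 35 253:2 / /mnt rw,relatime - ext4 /dev/mapper/vg-snap rw\n' \
    > /tmp/mountinfo.sample
mounted_devnos /tmp/mountinfo.sample   # → 253:2
```

Comparing device numbers sidesteps the stat()-on-a-name pitfalls raised in the next comment.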
The file /proc/self/mountinfo contains the dev_t; it's generally safer to use. Also be careful with stat(): it takes a 'name', not a 'device'. It's not likely to cause problems, but it's possible in theory, since the name can contain anything; it is copied verbatim from the mount() system call. It is recommended to use lstat() here, to avoid triggering the automounter in case you run over a path name.

I have read this thread since its creation, when it was F13. None of the techniques worked for me, running F16, even once. If I just create the snapshot, I can remove it. But if I just mount it for a second and then unmount it, I cannot remove it. It does not appear in /proc/mounts after unmounting. When I try dmsetup remove, I just get:

```
# dmsetup remove vg_corsair-ss
device-mapper: remove ioctl failed: Device or resource busy
Command failed
```

I have not seen anyone report this when they tried dmsetup.

uname -a and installed packages:

```
Linux localhost 3.3.0-4.fc16.x86_64 #1 SMP Tue Mar 20 18:05:40 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
lvm2-2.02.86-6.fc16.x86_64
system-config-lvm-1.1.16-4.fc16.noarch
lvm2-libs-2.02.86-6.fc16.x86_64
llvm-libs-2.9-6.fc16.x86_64
```

Let me know if you need any more diagnostics from me.

With F16 I also saw the new "not deactivating" error and had to work around it by repeating the lvremove if it failed the first time. I put up to 5 retries on the lvremove in a script, and so far it has only had to be repeated once, every few days. I also have the KERNEL=="dm-*", OPTIONS+="watch" line commented out in /lib/udev/rules.d/80-udisks.rules, which I had to add for F15 (or F14)...

I have a test machine running FC16 (x86_64), and I've run a number of snapshot tests without failure. I'm going to attach a copy of my test script along with its output for people to review. There have been no local changes to udev rules, ie the environment is essentially a bog-standard install, fully patched. I will make one comment here, which may or may not have an impact.
I absolutely *HATE* gnome3 and have uninstalled the GNOME Desktop Environment because I find it inherently unusable. There are some GNOME libraries for apps that need them, but the GNOME VFS package is NOT present. Whether this has an impact with udev etc, and whether the absence of GNOME VFS is what allows LVM to work correctly here, I don't know.

Anyway, the LVM & kernel packages for my system are shown here:

```
[root@central ~]# rpm -qa | egrep -ie '(lvm)|(kernel)' | sort -fu
abrt-addon-kerneloops-2.0.7-2.fc16.x86_64
kernel-3.2.10-3.fc16.x86_64
kernel-3.2.9-1.fc16.x86_64
kernel-3.2.9-2.fc16.x86_64
kernel-3.3.0-4.fc16.x86_64
kernel-3.3.0-8.fc16.x86_64
kernel-headers-3.3.0-8.fc16.x86_64
libreport-plugin-kerneloops-2.0.8-4.fc16.x86_64
llvm-libs-2.9-9.fc16.i686
llvm-libs-2.9-9.fc16.x86_64
lvm2-2.02.86-6.fc16.x86_64
lvm2-libs-2.02.86-6.fc16.x86_64
[root@central ~]# uname -a
Linux central.treetops 3.3.0-8.fc16.x86_64 #1 SMP Thu Mar 29 18:37:19 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```

Created attachment 574829 [details]
Quick & Dirty test snapshot script
Created attachment 574830 [details]
Output from the snapshot test script
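The attached test script and its output are not reproduced here. As an illustration only, a create/mount/unmount/remove cycle of the kind such a test script exercises might look like the following; all volume and mount-point names (vg00, varlog, snap_varlog, /mnt/snap) are hypothetical, and setting RUN=echo gives a dry run that just prints the commands:

```shell
# Dry-runnable sketch of one snapshot test cycle. Names are
# hypothetical; set RUN=echo to print commands instead of running them.
RUN=${RUN:-}

snapshot_cycle() {
    $RUN lvcreate -s -L 1G -n snap_varlog /dev/vg00/varlog || return 1
    $RUN mount -r /dev/vg00/snap_varlog /mnt/snap || return 1
    $RUN umount /mnt/snap || return 1
    # the step this bug is about: removal may need a retry on F16
    $RUN lvremove -f vg00/snap_varlog
}
```

Usage: `RUN=echo snapshot_cycle` to inspect the commands, or run it for real as root on a scratch VG.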
Here is an extract from /var/log/messages showing the logged events pertaining to the last 4 runs of my script:

```
Apr  3 21:05:07 central kernel: [176887.592380] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:07 central kernel: [176887.592389] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:07 central kernel: [176887.592395] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:08 central lvm[18439]: Monitoring snapshot vg00-snap_varlog
Apr  3 21:05:08 central kernel: [176888.183588] EXT4-fs (dm-15): mounted filesystem with ordered data mode. Opts: (null)
Apr  3 21:05:26 central lvm[18439]: No longer monitoring snapshot vg00-snap_varlog
Apr  3 21:05:56 central kernel: [176936.489527] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:56 central kernel: [176936.489536] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:56 central kernel: [176936.489542] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:05:56 central lvm[18439]: Monitoring snapshot vg00-snap_varlog
Apr  3 21:05:57 central kernel: [176937.026962] EXT4-fs (dm-15): mounted filesystem with ordered data mode. Opts: (null)
Apr  3 21:06:15 central lvm[18439]: No longer monitoring snapshot vg00-snap_varlog
Apr  3 21:17:27 central kernel: [177627.082392] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:17:27 central kernel: [177627.082401] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:17:27 central kernel: [177627.082408] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:17:27 central lvm[18439]: Monitoring snapshot vg00-snap_varlog
Apr  3 21:17:28 central kernel: [177627.966626] EXT4-fs (dm-15): mounted filesystem with ordered data mode. Opts: (null)
Apr  3 21:17:47 central lvm[18439]: Extension of snapshot vg00/snap_varlog finished successfully.
Apr  3 21:17:51 central lvm[18439]: No longer monitoring snapshot vg00-snap_varlog
Apr  3 21:18:21 central kernel: [177681.486416] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:18:21 central kernel: [177681.486421] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:18:21 central kernel: [177681.486424] lvcreate: sending ioctl 1261 to a partition!
Apr  3 21:18:21 central lvm[18439]: Monitoring snapshot vg00-snap_varlog
Apr  3 21:18:22 central kernel: [177682.067903] EXT4-fs (dm-15): mounted filesystem with ordered data mode. Opts: (null)
Apr  3 21:18:42 central lvm[18439]: Extension of snapshot vg00/snap_varlog finished successfully.
Apr  3 21:18:45 central lvm[18439]: No longer monitoring snapshot vg00-snap_varlog
```

Commenting out the rule does not help me at all. Retrying 1000 times does not help either. The only way I can remove the snapshot is by rebooting first.

Can't say I've used the dmsetup commands; have you tried using the lvm commands instead?

I normally only use the lvm commands. lvremove doesn't work without a reboot, either. I use snapshots to create consistent backups, so my backup is a bit crippled right now on this computer. Works great on another computer where I launch from Ubuntu 10.

This is an excerpt of what I use for backups using lvm snapshots... works fine on FC16. We retain a few GB of free space in the LVM storage for the snapshot, which generally suffices on most systems unless there are a lot of updates happening during backups, which is rare on the systems we use this on.
```shell
DTYPE=$1
# NB: level contains level and 'u' option (eg: 0u)
LEVEL=$2
TAPE=$3
FS=$4
RHOST=$5
STATUS=0

case "${TAPE}${RHOST}" in
--)
    # dumping to stdout
    RTAPE="-"
    ;;
*)
    # using rsh/rmt(8)
    RTAPE="${RHOST}:${TAPE}"
    ;;
esac

LVCREATE=""
if [ -x /sbin/lvcreate ]
then
    LVCREATE="/sbin/lvcreate"
    LVREMOVE="/sbin/lvremove"
    LVDISPLAY="/sbin/lvdisplay"
elif [ -x /usr/sbin/lvcreate ]
then
    LVCREATE="/usr/sbin/lvcreate"
    LVREMOVE="/usr/sbin/lvremove"
    LVDISPLAY="/usr/sbin/lvdisplay"
fi

if [ "`df $FS | grep /dev/mapper`" -a "$LVCREATE" != "" ] ; then
    DUMPDEV=`df $FS | grep mapper | cut -d/ -f4 | cut -d' ' -f1 | tr - /`
    VOL=`echo $DUMPDEV | cut -d/ -f1`
    SNAPVOL=`echo $DUMPDEV | cut -d/ -f2`-snap
    SNAPVOL2=`echo $DUMPDEV | cut -d/ -f2`--snap
    SNAPDEV=/dev/$VOL/$SNAPVOL
    SNAPRDEV=/dev/mapper/$VOL-$SNAPVOL2
    echo DUMPDEV=$DUMPDEV 1>&2
    echo VOL=$VOL 1>&2
    echo SNAPVOL=$SNAPVOL 1>&2
    echo SNAPVOL2=$SNAPVOL2 1>&2
    echo SNAPDEV=$SNAPDEV 1>&2
    echo SNAPRDEV=$SNAPRDEV 1>&2

    # cleanup from last backup
    $LVREMOVE -f $SNAPDEV >/dev/null 2>&1

    echo `date` starting snapshot 1>&2
    $LVCREATE -l 100%FREE -s -n $SNAPVOL /dev/$DUMPDEV 1>&2

    echo `date` starting backup 1>&2
    dump "${LEVEL}bfL" 32 "$RTAPE" "$FS" $SNAPRDEV

    # workaround FC16 bug; delay before clearing
    sleep 5
    echo `date` clearing snapshot 1>&2
    $LVREMOVE -f $SNAPDEV 1>&2

    # workaround FC16 bug; do it again if needed
    for i in 1 2 3 4 5
    do
        $LVDISPLAY | grep $SNAPDEV >/dev/null
        if [ $? = 0 ]
        then
            # it's still there!
            sleep 5
            echo `date` clearing snapshot again 1>&2
            $LVREMOVE -f $SNAPDEV 1>&2
        else
            break
        fi
    done

    $LVDISPLAY | grep $SNAPDEV >/dev/null
    if [ $? = 0 ]
    then
        echo `date` gave up clearing snapshot - manual intervention required 1>&2
        STATUS=1
    fi
fi
exit $STATUS
```

(The excerpt as posted was missing the closing `fi` of the outer `if` and an initial `STATUS=0`; both have been restored here.)

I'm experiencing the same problem; looping the lvremove statement doesn't succeed, and the snapshot can only be removed after a reboot. I also notice a similar problem with fsck on logical volumes - once the volume has been mounted and unmounted, it cannot be checked with fsck (e2fsck).
```
# fsck /dev/mapper/Backup-Home--backup
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext4: Device or resource busy while trying to open /dev/mapper/Backup-Home--backup
Filesystem mounted or opened exclusively by another program?
# mount | grep Backup | wc -l
0
```

Could this be a clue?

What happens if you use fsck -n (read-only open)? Also see bug 809188 - it seems there is some more generic bug, not related to lvm.

(In reply to comment #20)
> what happens if you use fsck -n (read-only open) ?

```
# fsck -n /dev/mapper/Backup-Home--backup
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext4: Device or resource busy while trying to open /dev/mapper/Backup-Home--backup
Filesystem mounted or opened exclusively by another program?
```

In reply to comment #21, ref bug 809188: I do use gnome-shell, but my backup script runs overnight, when there usually isn't a user logged in. Sorry, that should have been ref bug 808795.

I added comment #1 to bug 808795. It may be of interest to followers of this bug.

I can't see the problem on my FC16 servers, all of which use md mirroring and lvm2. However, they were all upgraded via yum distro-sync from FC15 and FC14, and were all fresh installed as FC13, from memory. Perhaps that has some bearing on it. Below is a bunch of tests I just ran on one of them: fsck r/o, fsck r/w, mount r/o, mount r/w; all good.
```
# df
Filesystem                       1K-blocks      Used  Available Use% Mounted on
rootfs                           27354712K  4680956K  21302416K  19% /
devtmpfs                           504932K        4K    504928K   1% /dev
tmpfs                              513108K        0K    513108K   0% /dev/shm
/dev/mapper/VolGroup00-LogVol01  27354712K  4680956K  21302416K  19% /
tmpfs                              513108K    40504K    472604K   8% /run
tmpfs                              513108K        0K    513108K   0% /sys/fs/cgroup
tmpfs                              513108K        0K    513108K   0% /media
/dev/md0                           196877K    73096K    113542K  40% /boot

# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Fri Sep 16 08:30:27 2011
     Raid Level : raid1
     Array Size : 35838908 (34.18 GiB 36.70 GB)
  Used Dev Size : 35838908 (34.18 GiB 36.70 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Wed Apr 11 01:48:37 2012
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : hostname.domain:1 (local to host hostname.domain)
           UUID : a2b3dacc:8163523e:20813db4:2b122e3d
         Events : 7261
    Number  Major  Minor  RaidDevice  State
       0      8      1       0        active sync  /dev/sda1
       1      8     17       1        active sync  /dev/sdb1

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VolGroup00
  PV Size               34.18 GiB / not usable 22.93 MiB
  Allocatable           yes
  PE Size               32.00 MiB
  Total PE              1093
  Free PE               128
  Allocated PE          965
  PV UUID               d2CXHD-Aumo-shdQ-p8Bm-ZpQz-7i1T-5ZqlM1

# lvdisplay
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01
  VG Name               VolGroup00
  LV UUID               pHt4NT-V3fk-VyPf-dMqu-yIwC-nJHJ-TLKeKu
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               26.16 GiB
  Current LE            837
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol00
  VG Name               VolGroup00
  LV UUID               cR4csQ-C2vd-NBuq-mzG2-VYrp-NapC-M85fmb
  LV Write Access       read/write
  LV Status             available
  # open                2
  LV Size               4.00 GiB
  Current LE            128
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1

# lvcreate -l 100%FREE -s -n /dev/VolGroup00/LogVol01-snap /dev/VolGroup00/LogVol01
  Logical volume "LogVol01-snap" created

# lvdisplay
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01
  VG Name               VolGroup00
  LV UUID               pHt4NT-V3fk-VyPf-dMqu-yIwC-nJHJ-TLKeKu
  LV Write Access       read/write
  LV snapshot status    source of /dev/VolGroup00/LogVol01-snap [active]
  LV Status             available
  # open                1
  LV Size               26.16 GiB
  Current LE            837
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol00
  VG Name               VolGroup00
  LV UUID               cR4csQ-C2vd-NBuq-mzG2-VYrp-NapC-M85fmb
  LV Write Access       read/write
  LV Status             available
  # open                2
  LV Size               4.00 GiB
  Current LE            128
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01-snap
  VG Name               VolGroup00
  LV UUID               KjW4nK-EZIk-u80Y-vNbJ-0WoG-HfWS-fmBq3X
  LV Write Access       read/write
  LV snapshot status    active destination for /dev/VolGroup00/LogVol01
  LV Status             available
  # open                0
  LV Size               26.16 GiB
  Current LE            837
  COW-table size        4.00 GiB
  COW-table LE          128
  Allocated to snapshot 0.00%
  Snapshot chunk size   4.00 KiB
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

# fsck -n /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166035/1716960 files, 1187687/6856704 blocks
# fsck -n /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166035/1716960 files, 1187687/6856704 blocks
# fsck -n /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166035/1716960 files, 1187687/6856704 blocks

# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
Clearing orphaned inode 926467 (uid=0, gid=0, mode=0100755, size=64684)
Clearing orphaned inode 924630 (uid=0, gid=0, mode=0100755, size=36784)
Clearing orphaned inode 1048220 (uid=0, gid=0, mode=0100755, size=195416)
Clearing orphaned inode 1180996 (uid=0, gid=0, mode=0100755, size=368404)
Clearing orphaned inode 920626 (uid=0, gid=0, mode=0100755, size=934428)
Clearing orphaned inode 1047692 (uid=0, gid=0, mode=0100755, size=11400)
Clearing orphaned inode 924628 (uid=0, gid=0, mode=0100755, size=34328)
Clearing orphaned inode 1047726 (uid=0, gid=0, mode=0100755, size=64448)
Clearing orphaned inode 1047669 (uid=0, gid=0, mode=0100755, size=113736)
Clearing orphaned inode 1047691 (uid=0, gid=0, mode=0100755, size=56588)
Clearing orphaned inode 1047718 (uid=0, gid=0, mode=0100755, size=95800)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166024/1716960 files, 1187201/6856704 blocks
# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166024/1716960 files, 1187201/6856704 blocks
# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166024/1716960 files, 1187201/6856704 blocks

# mkdir /j
# mount -r /dev/VolGroup00/LogVol01-snap /j
# ll /j
total 164K
dr-xr-xr-x. 26 root root  4096 Mar 25 22:26 .
dr-xr-xr-x. 27 root root  4096 Apr 11 00:03 ..
-rw-r--r--.  1 root root 22345 Mar 25 22:26 .readahead
dr-xr-xr-x.  2 root root  4096 Mar 26 03:25 bin
drwxr-xr-x.  2 root root  4096 Sep 16  2011 boot
drwxr-xr-x.  2 root root  4096 Mar  3  2011 cgroup
drwxr-xr-x.  2 root root  4096 Sep 16  2011 dev
drwxr-xr-x. 96 root root 12288 Apr  4 03:31 etc
drwxr-xr-x. 11 root root  4096 Jul 29  2011 home
drwxr-xr-x.  2 root root  4096 Sep 16  2011 import
dr-xr-xr-x. 19 root root 12288 Mar 26 03:25 lib
drwx------.  2 root root 16384 Sep 16  2011 lost+found
drwxr-xr-x.  2 root root  4096 May 18  2011 media
drwxr-xr-x.  2 root root  4096 Jul 29  2011 mnt
drwxr-xr-x.  4 root root  4096 Jul 29  2011 opt
drwxr-xr-x.  2 root root  4096 Sep 16  2011 proc
dr-xr-x---.  3 root root  4096 Jul 29  2011 root
drwxr-xr-x. 19 root root  4096 Sep 16  2011 run
dr-xr-xr-x.  2 root root 12288 Mar 26 03:25 sbin
drwxr-xr-x.  2 root root  4096 Sep 16  2011 selinux
drwxr-xr-x.  2 root root  4096 Jul 29  2011 srv
drwxr-xr-x.  2 root root  4096 Sep 16  2011 sys
drwxrwxrwt. 15 root root  4096 Apr 11 00:01 tmp
drwxr-xr-x.  2 root root  4096 Sep 16  2011 tmp-build
drwxr-xr-x. 12 root root  4096 Mar 19 03:55 usr
drwxr-xr-x. 16 root root  4096 Mar 19 03:55 var
# touch /j/kkk
touch: cannot touch `/j/kkk': Read-only file system
# umount /j

# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166024/1716960 files, 1187201/6856704 blocks

# lvdisplay
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01
  VG Name               VolGroup00
  LV UUID               pHt4NT-V3fk-VyPf-dMqu-yIwC-nJHJ-TLKeKu
  LV Write Access       read/write
  LV snapshot status    source of /dev/VolGroup00/LogVol01-snap [active]
  LV Status             available
  # open                1
  LV Size               26.16 GiB
  Current LE            837
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol00
  VG Name               VolGroup00
  LV UUID               cR4csQ-C2vd-NBuq-mzG2-VYrp-NapC-M85fmb
  LV Write Access       read/write
  LV Status             available
  # open                2
  LV Size               4.00 GiB
  Current LE            128
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01-snap
  VG Name               VolGroup00
  LV UUID               KjW4nK-EZIk-u80Y-vNbJ-0WoG-HfWS-fmBq3X
  LV Write Access       read/write
  LV snapshot status    active destination for /dev/VolGroup00/LogVol01
  LV Status             available
  # open                0
  LV Size               26.16 GiB
  Current LE            837
  COW-table size        4.00 GiB
  COW-table LE          128
  Allocated to snapshot 0.03%
  Snapshot chunk size   4.00 KiB
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

# lvremove -f /dev/VolGroup00/LogVol01-snap
  Logical volume "LogVol01-snap" successfully removed
# lvremove -f /dev/VolGroup00/LogVol01-snap
  One or more specified logical volume(s) not found.

# lvdisplay
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol01
  VG Name               VolGroup00
  LV UUID               pHt4NT-V3fk-VyPf-dMqu-yIwC-nJHJ-TLKeKu
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               26.16 GiB
  Current LE            837
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0
  --- Logical volume ---
  LV Name               /dev/VolGroup00/LogVol00
  VG Name               VolGroup00
  LV UUID               cR4csQ-C2vd-NBuq-mzG2-VYrp-NapC-M85fmb
  LV Write Access       read/write
  LV Status             available
  # open                2
  LV Size               4.00 GiB
  Current LE            128
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1

# lvcreate -l 100%FREE -s -n /dev/VolGroup00/LogVol01-snap /dev/VolGroup00/LogVol01
  Logical volume "LogVol01-snap" created
# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
Clearing orphaned inode 926467 (uid=0, gid=0, mode=0100755, size=64684)
Clearing orphaned inode 924630 (uid=0, gid=0, mode=0100755, size=36784)
Clearing orphaned inode 1048220 (uid=0, gid=0, mode=0100755, size=195416)
Clearing orphaned inode 1180996 (uid=0, gid=0, mode=0100755, size=368404)
Clearing orphaned inode 920626 (uid=0, gid=0, mode=0100755, size=934428)
Clearing orphaned inode 1047692 (uid=0, gid=0, mode=0100755, size=11400)
Clearing orphaned inode 924628 (uid=0, gid=0, mode=0100755, size=34328)
Clearing orphaned inode 1047726 (uid=0, gid=0, mode=0100755, size=64448)
Clearing orphaned inode 1047669 (uid=0, gid=0, mode=0100755, size=113736)
Clearing orphaned inode 1047691 (uid=0, gid=0, mode=0100755, size=56588)
Clearing orphaned inode 1047718 (uid=0, gid=0, mode=0100755, size=95800)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166027/1716960 files, 1187211/6856704 blocks
# fsck /dev/VolGroup00/LogVol01-snap
fsck from util-linux 2.20.1
e2fsck 1.41.14 (22-Dec-2010)
/dev/mapper/VolGroup00-LogVol01--snap: clean, 166027/1716960 files, 1187211/6856704 blocks

# mount /dev/VolGroup00/LogVol01-snap /j
# ll /j
total 168K
dr-xr-xr-x. 27 root root  4096 Apr 11 00:05 .
dr-xr-xr-x. 27 root root  4096 Apr 11 00:05 ..
-rw-r--r--.  1 root root 22345 Mar 25 22:26 .readahead
dr-xr-xr-x.  2 root root  4096 Mar 26 03:25 bin
drwxr-xr-x.  2 root root  4096 Sep 16  2011 boot
drwxr-xr-x.  2 root root  4096 Mar  3  2011 cgroup
drwxr-xr-x.  2 root root  4096 Sep 16  2011 dev
drwxr-xr-x. 96 root root 12288 Apr  4 03:31 etc
drwxr-xr-x. 11 root root  4096 Jul 29  2011 home
drwxr-xr-x.  2 root root  4096 Sep 16  2011 import
drwxr-xr-x.  2 root root  4096 Apr 11 00:03 j
dr-xr-xr-x. 19 root root 12288 Mar 26 03:25 lib
drwx------.  2 root root 16384 Sep 16  2011 lost+found
drwxr-xr-x.  2 root root  4096 May 18  2011 media
drwxr-xr-x.  2 root root  4096 Jul 29  2011 mnt
drwxr-xr-x.  4 root root  4096 Jul 29  2011 opt
drwxr-xr-x.  2 root root  4096 Sep 16  2011 proc
dr-xr-x---.  3 root root  4096 Jul 29  2011 root
drwxr-xr-x. 19 root root  4096 Sep 16  2011 run
dr-xr-xr-x.  2 root root 12288 Mar 26 03:25 sbin
drwxr-xr-x.  2 root root  4096 Sep 16  2011 selinux
drwxr-xr-x.  2 root root  4096 Jul 29  2011 srv
drwxr-xr-x.  2 root root  4096 Sep 16  2011 sys
drwxrwxrwt. 15 root root  4096 Apr 11 00:07 tmp
drwxr-xr-x.  2 root root  4096 Sep 16  2011 tmp-build
drwxr-xr-x. 12 root root  4096 Mar 19 03:55 usr
drwxr-xr-x. 16 root root  4096 Mar 19 03:55 var
# touch /j/kkk
# ll /j/kkk
-rw-r--r--. 1 root root 0 Apr 11 00:08 /j/kkk
# umount /j
# mount /dev/VolGroup00/LogVol01-snap /j
# ll /j/kkk
-rw-r--r--. 1 root root 0 Apr 11 00:08 /j/kkk
# umount /j
# rmdir /j

# lvremove -f /dev/VolGroup00/LogVol01-snap
  Logical volume "LogVol01-snap" successfully removed
# !!
lvremove -f /dev/VolGroup00/LogVol01-snap
  One or more specified logical volume(s) not found.

# rpm -q lvm2
lvm2-2.02.86-6.fc16.i686
# rpm -q kernel-PAE
kernel-PAE-2.6.42.9-1.fc15.i686
kernel-PAE-3.2.10-3.fc16.i686
kernel-PAE-3.3.0-4.fc16.i686
# uname -r
3.3.0-4.fc16.i686.PAE
```

Just FYI - you perhaps need to disable the sandbox service and reboot if you still see the problem; see https://bugzilla.redhat.com/show_bug.cgi?id=808795#c31

Interesting. This is probably why it doesn't affect me, as all the boxes I use LVM snapshots on either don't have sandbox enabled (probably related to them not having it enabled in prior Fedoras and being upgraded) or are servers running in state 3. (Sandbox only seems to be enabled in state 5 on freshly installed F16 boxes.)

The "retry_deactivation" lvm.conf option is included in lvm2 v2.02.89 and later. If set, it causes LVM to retry volume removal several times if it is not successful (this option is enabled by default). The same logic works for the dmsetup remove command; you can use the "--retry" option there. I'm closing this bug with NEXTRELEASE, as lvm2 version >= 2.02.89 is part of newer Fedora releases only (Fedora >= 17).
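On lvm2 versions older than 2.02.89, where the built-in retry described in the closing comment ("retry_deactivation = 1" in lvm.conf, or dmsetup remove --retry) is not available, several commenters above wrapped lvremove in a retry loop. A minimal generic sketch of such a wrapper; the `retry` function name is my own, not an lvm2 or dmsetup feature:

```shell
# Run a command up to a given number of times, sleeping between
# attempts; succeed on the first attempt that succeeds, and return
# failure once the attempts are exhausted.
# Usage: retry <attempts> <delay-seconds> <command...>
retry() {
    attempts=$1; delay=$2; shift 2
    i=1
    while :; do
        "$@" && return 0
        [ "$i" -ge "$attempts" ] && return 1
        i=$((i + 1))
        sleep "$delay"
    done
}

# e.g. retry the removal a few times before giving up (volume name
# taken from the report above):
#   retry 5 5 lvremove -f vg01/.nfs4.backup
# With lvm2 >= 2.02.89 the equivalent is built in:
#   dmsetup remove --retry vg01-.nfs4.backup
```

This mirrors the "up to 5 retries" workaround reported earlier in the thread, without mixing dmsetup and lvm2 commands.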