Bug 1091553 - "Internal error: Performing unsafe table load" error during snapshot of cache origin volume removal
Summary: "Internal error: Performing unsafe table load" error during snapshot of cache...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Duplicates: 1105732 (view as bug list)
Depends On:
Blocks: BrassowRHEL7Bugs 1114068 1119326 1185916
 
Reported: 2014-04-25 21:00 UTC by Corey Marthaler
Modified: 2021-09-03 12:40 UTC
CC List: 7 users

Fixed In Version: lvm2-2.02.111-1.el7
Doc Type: Bug Fix
Doc Text:
No Documentation Needed. Snapshots of cache logical volumes will be supported in a future release.
Clone Of:
Clones: 1114068 (view as bug list)
Environment:
Last Closed: 2015-03-05 13:08:31 UTC
Target Upstream Version:
Embargoed:


Attachments
-vvvv of the snap lvremove (225.08 KB, text/plain)
2014-04-25 21:29 UTC, Corey Marthaler


Links
Red Hat Product Errata RHBA-2015:0513   Priority: normal   Status: SHIPPED_LIVE   Summary: lvm2 bug fix and enhancement update   Last Updated: 2015-03-05 16:14:41 UTC

Description Corey Marthaler 2014-04-25 21:00:14 UTC
Description of problem:
A simple attempt to remove snapshots of a cache origin volume triggers the internal error.

*** Cache info for this scenario ***
*  origin (slow):  /dev/sdb1
*  pool (fast):    /dev/sdb2
************************************

Create cache volume and then do differing block io operations
Create origin (slow) volume
lvcreate -L 45G -n block_io_origin cache_sanity /dev/sdb1

#### MODE: writethrough
Create cache data and cache metadata (fast) volumes
lvcreate -L 4G -n block_cache cache_sanity /dev/sdb2
lvcreate -L 8M -n block_cache_meta cache_sanity /dev/sdb2

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --type cache-pool --cachemode writethrough --poolmetadata cache_sanity/block_cache_meta cache_sanity/block_cache
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --type cache --cachepool cache_sanity/block_cache cache_sanity/block_io_origin

Making snapshot block_snap16 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 16 -n block_snap16 -L 3G
Making snapshot block_snap32 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 32 -n block_snap32 -L 3G
Making snapshot block_snap64 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 64 -n block_snap64 -L 3G
Making snapshot block_snap128 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 128 -n block_snap128 -L 3G
Making snapshot block_snap256 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 256 -n block_snap256 -L 3G
Making snapshot block_snap512 of origin volume
lvcreate -s /dev/cache_sanity/block_io_origin -c 512 -n block_snap512 -L 3G
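
To confirm the cache stack was assembled as intended before snapshotting, the device-mapper tables can be inspected directly; a minimal sketch, assuming LVM's standard <vg>-<lv> naming in device-mapper:

# Dump the DM tables for every cache_sanity volume; the cached origin's line
# uses the "cache" target and lists its metadata, data, and origin sub-devices
dmsetup table | grep '^cache_sanity'
# Just the cached origin volume
dmsetup table cache_sanity-block_io_origin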


[root@harding-03 ~]# lvs -a -o +devices
  LV                      Attr       LSize  Pool        Origin                  Data%  Devices                 
  block_cache             Cwi-a-C---  4.00g                                            block_cache_cdata(0)    
  [block_cache_cdata]     Cwi-aoC---  4.00g                                            /dev/sdb2(0)            
  [block_cache_cmeta]     ewi-aoC---  8.00m                                            /dev/sdb2(1024)         
  block_io_origin         owi-a-C--- 45.00g block_cache [block_io_origin_corig]        block_io_origin_corig(0)
  [block_io_origin_corig] -wi-ao---- 45.00g                                            /dev/sdb1(0)            
  block_snap128           swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(3332)         
  block_snap16            swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(1028)         
  block_snap256           swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(4100)         
  block_snap32            swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(1796)         
  block_snap512           swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(4868)         
  block_snap64            swi-a-s---  3.00g             block_io_origin           0.00 /dev/sdb2(2564)         
  [lvol0_pmspare]         ewi-------  8.00m                                            /dev/sdb2(1026)         

[root@harding-03 ~]# lvremove -f cache_sanity/block_snap16
  Internal error: Performing unsafe table load while 12 device(s) are known to be suspended:  (253:9) 
  Logical volume "block_snap16" successfully removed

[root@harding-03 ~]# lvremove -f cache_sanity/block_snap32
  Internal error: Performing unsafe table load while 10 device(s) are known to be suspended:  (253:9) 
  Logical volume "block_snap32" successfully removed

Apr 25 15:50:21 harding-03 lvm[4105]: No longer monitoring snapshot cache_sanity-block_snap32
Apr 25 15:50:21 harding-03 lvm[4105]: No longer monitoring snapshot cache_sanity-block_snap64
Apr 25 15:50:21 harding-03 lvm[4105]: No longer monitoring snapshot cache_sanity-block_snap128
Apr 25 15:50:21 harding-03 lvm[4105]: No longer monitoring snapshot cache_sanity-block_snap256
Apr 25 15:50:21 harding-03 lvm[4105]: No longer monitoring snapshot cache_sanity-block_snap512
Apr 25 15:50:23 harding-03 kernel: [352762.976132] quiet_error: 146 callbacks suppressed
Apr 25 15:50:23 harding-03 kernel: [352762.981511] Buffer I/O error on device dm-4, logical block 1048560
Apr 25 15:50:23 harding-03 lvm[4105]: Monitoring snapshot cache_sanity-block_snap64
Apr 25 15:50:23 harding-03 lvm[4105]: Monitoring snapshot cache_sanity-block_snap128
Apr 25 15:50:23 harding-03 lvm[4105]: Monitoring snapshot cache_sanity-block_snap256
Apr 25 15:50:23 harding-03 kernel: [352762.988540] Buffer I/O error on device dm-4, logical block 1048560
Apr 25 15:50:23 harding-03 kernel: [352762.997130] Buffer I/O error on device dm-4, logical block 1048574
Apr 25 15:50:23 harding-03 lvm[4105]: Monitoring snapshot cache_sanity-block_snap512
Apr 25 15:50:23 harding-03 kernel: [352763.005666] Buffer I/O error on device dm-4, logical block 1048574
Apr 25 15:50:23 harding-03 kernel: [352763.014214] Buffer I/O error on device dm-4, logical block 0
Apr 25 15:50:23 harding-03 kernel: [352763.022175] Buffer I/O error on device dm-4, logical block 0
Apr 25 15:50:23 harding-03 kernel: [352763.030155] Buffer I/O error on device dm-4, logical block 1
Apr 25 15:50:23 harding-03 kernel: [352763.038139] Buffer I/O error on device dm-4, logical block 1048575
Apr 25 15:50:23 harding-03 kernel: [352763.046675] Buffer I/O error on device dm-4, logical block 1048575
Apr 25 15:50:23 harding-03 kernel: [352763.055235] Buffer I/O error on device dm-4, logical block 1048575
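
The kernel messages name the failing device only as dm-4; a quick sketch for mapping a dm-N node back to its LVM name:

# sysfs exposes the friendly device-mapper name for each dm-N node
cat /sys/block/dm-4/dm/name
# /dev/mapper entries are symlinks to the underlying /dev/dm-N nodes
ls -l /dev/mapper/ | grep 'dm-4$'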


Version-Release number of selected component (if applicable):
3.10.0-110.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.2.8-4.el7    BUILT: Fri Jan 24 14:28:55 CST 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014


How reproducible:
Every time

Comment 1 Corey Marthaler 2014-04-25 21:29:26 UTC
Created attachment 889912 [details]
-vvvv of the snap lvremove

Comment 3 Corey Marthaler 2014-06-04 23:11:03 UTC
Here's the attempted removal of a snapshot of a RAID cache pool.

Test Output:
lvremove -f /dev/cache_sanity/snap1

couldn't remove volume snap1
Internal error: Performing unsafe table load while 4 device(s) are known to be suspended:  (253:24) 
   device-mapper: resume ioctl on  failed: Invalid argument
   Unable to resume cache_sanity-rename_orig_A-real (253:24)
   Failed to resume rename_orig_A.
   libdevmapper exiting with 4 device(s) still suspended.
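
Here the removal aborts with devices left suspended, which blocks further I/O to them until they are resumed. A cautious recovery sketch, assuming the stuck devices have first been confirmed with dmsetup info (the device name is the one from the output above):

# List any devices still reported as suspended
dmsetup info | awk '/^Name:/{n=$2} /^State:/ && /SUSPEND/{print n}'
# Resume the stuck device so queued I/O can proceed again
dmsetup resume cache_sanity-rename_orig_A-real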




Jun  4 17:46:10 host-073 qarshd[25149]: Running cmdline: lvremove -f /dev/cache_sanity/snap1
Jun  4 17:46:10 host-073 lvm[3121]: No longer monitoring snapshot cache_sanity-snap1
Jun  4 17:46:10 host-073 lvm[3121]: No longer monitoring snapshot cache_sanity-snap2
Jun  4 17:46:10 host-073 lvm[3121]: No longer monitoring RAID device cache_sanity-rename_pool_A_cdata for events.
Jun  4 17:46:10 host-073 lvm[3121]: No longer monitoring RAID device cache_sanity-rename_pool_A_cmeta for events.
Jun  4 17:46:10 host-073 systemd-udevd: inotify_add_watch(7, /dev/dm-25, 10) failed: No such file or directory
Jun  4 17:46:11 host-073 kernel: device-mapper: block manager: validator mismatch (old=array vs new=btree_node) for block 140
Jun  4 17:46:11 host-073 kernel: device-mapper: cache: could not load origin discards
Jun  4 17:46:11 host-073 kernel: device-mapper: table: 253:24: cache: preresume failed, error = -22

Comment 4 Zdenek Kabelac 2014-09-25 08:55:07 UTC
Support for snapshots of cached volumes is disabled for now as unsupported.
Please retest.

Comment 6 Jonathan Earl Brassow 2014-09-30 15:03:49 UTC
This "works" now.  We will need to add a new feature bug to allow snapshots of cache LVs.

[root@bp-01 ~]# lvcreate -s -L 500M -n snap vg/lv
  Snapshot of cache LV is not yet supported.

Comment 7 Jonathan Earl Brassow 2014-09-30 15:08:48 UTC
*** Bug 1105732 has been marked as a duplicate of this bug. ***

Comment 8 Corey Marthaler 2014-12-11 23:02:33 UTC
Marking "Verified" in the latest rpms. Snaps of cache volumes are no longer allowed.

3.10.0-215.el7.x86_64
lvm2-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-libs-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-cluster-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014


[root@host-115 ~]# lvcreate -s /dev/cache_sanity/corigin -c 64 -n merge -L 500M
  Snapshots of cache type volume cache_sanity/corigin is not supported.
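
For anyone checking whether a given system already carries this change, the installed packages can be compared against the Fixed In Version above; a simple sketch using the NVR from this bug's metadata:

# The restriction landed in lvm2-2.02.111-1.el7; the verified rpms above are newer
rpm -q lvm2 lvm2-libs device-mapper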

Comment 9 Jonathan Earl Brassow 2015-01-26 15:58:20 UTC
*** Bug 1105732 has been marked as a duplicate of this bug. ***

Comment 11 errata-xmlrpc 2015-03-05 13:08:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0513.html

