Bug 1179970 - "attempt to access beyond end of device" when attempting a raid cache pool on small extent sized VGs
Summary: "attempt to access beyond end of device" when attempting a raid cache pool on...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1366036
Blocks: 1216214 1295577 1313485 1438583 1445812 1522983
 
Reported: 2015-01-07 23:08 UTC by Corey Marthaler
Modified: 2023-03-08 07:27 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.175-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1216214
Environment:
Last Closed: 2018-04-10 15:16:02 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 15:17:48 UTC)

Description Corey Marthaler 2015-01-07 23:08:56 UTC
Description of problem:
This appears to be related to bugs 1168434/1108361, which should have been fixed in kernel -216.

Also, this only occurs when using raid volumes; non-raid volumes on 1K extent VGs convert to cache pools just fine.
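For contrast, a non-raid control along the lines of the scenario below (a sketch, not commands from the actual test run; the LV names are illustrative). With the same 1K extent cache_sanity VG in place, plain linear volumes are expected to convert to a cache pool without error:

  lvcreate -L 80k -n lin_origin cache_sanity /dev/sdd1        # sketch only, not from the original run
  lvcreate -L 50k -n lin_cache cache_sanity /dev/sde2         # linear cache data
  lvcreate -L 12M -n lin_cache_meta cache_sanity /dev/sde2    # linear cache metadata
  lvconvert --yes --type cache-pool --cachemode writeback -c 32 --poolmetadata cache_sanity/lin_cache_meta cache_sanity/lin_cache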


SCENARIO - [create_cache_on_1Kextent_vg]

*** Cache info for this scenario ***
*  origin (slow):  /dev/sdd1 /dev/sdf1
*  pool (fast):    /dev/sde2 /dev/sdf2
************************************

Create a small cache on a VG with an extent size of only 1K
Recreating VG with smaller (1K) extent size
pvcreate --setphysicalvolumesize 500M /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
vgcreate -s 1K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2

Creating small cache origin and cache pool
Create origin (slow) volume
lvcreate --type raid1 -m 1 -L 80k -n 1K_origin cache_sanity /dev/sdd1 /dev/sdf1
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )

Create cache data and cache metadata (fast) volumes
lvcreate --type raid1 -m 1 -L 50k -n 1K_cache cache_sanity /dev/sde2 /dev/sdf2
lvcreate --type raid1 -m 1 -L 12M -n 1K_cache_meta cache_sanity /dev/sde2 /dev/sdf2
Waiting until all mirror|raid volumes become fully syncd...
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writeback -c 32 --poolmetadata cache_sanity/1K_cache_meta cache_sanity/1K_cache
  WARNING: Converting logical volume cache_sanity/1K_cache and cache_sanity/1K_cache_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  device-mapper: reload ioctl on  failed: Invalid argument
  Aborting. Failed to activate metadata lv.
couldn't create combined cache pool volume


Jan  7 16:41:35 host-109 qarshd[3686]: Running cmdline: lvconvert --yes --type cache-pool --cachemode writeback -c 32 --poolmetadata cache_sanity/1K_cache_meta cache_sanity/1K_cache
Jan  7 16:41:35 host-109 lvm[3441]: device-mapper: waitevent ioctl on  failed: Interrupted system call
Jan  7 16:41:35 host-109 lvm[3441]: No longer monitoring RAID device cache_sanity-1K_cache_meta for events.
Jan  7 16:41:35 host-109 kernel: attempt to access beyond end of device
Jan  7 16:41:35 host-109 kernel: dm-12: rw=7185, want=9, limit=2
Jan  7 16:41:35 host-109 kernel: md: super_written gets error=-5, uptodate=0
Jan  7 16:41:35 host-109 multipathd: dm-16: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-16: devmap not registered, can't remove
Jan  7 16:41:35 host-109 multipathd: dm-16: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-15: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-15: devmap not registered, can't remove
Jan  7 16:41:35 host-109 multipathd: dm-14: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-14: devmap not registered, can't remove
Jan  7 16:41:35 host-109 multipathd: dm-13: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-13: devmap not registered, can't remove
Jan  7 16:41:35 host-109 multipathd: dm-12: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-12: devmap not registered, can't remove
Jan  7 16:41:35 host-109 multipathd: dm-15: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-14: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-13: remove map (uevent)
Jan  7 16:41:35 host-109 multipathd: dm-12: remove map (uevent)
Jan  7 16:41:35 host-109 kernel: device-mapper: raid: New device injected into existing array without 'rebuild' parameter specified
Jan  7 16:41:35 host-109 kernel: device-mapper: table: 253:16: raid: Unable to assemble array: Invalid superblocks
Jan  7 16:41:35 host-109 kernel: device-mapper: ioctl: error adding target to table
Jan  7 16:41:36 host-109 multipathd: dm-16: remove map (uevent)
Jan  7 16:41:36 host-109 multipathd: dm-16: remove map (uevent)


Version-Release number of selected component (if applicable):
3.10.0-219.el7.x86_64   BUILT: Thu 18 Dec 2014 12:53:43 AM CST
lvm2-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
lvm2-libs-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
lvm2-cluster-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-libs-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-event-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-event-libs-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015


How reproducible:
Every time

Comment 5 Heinz Mauelshagen 2016-07-06 12:36:41 UTC
Code to reject creating raids on such a small extent size has been added.

Comment 7 Roman Bednář 2016-07-18 07:13:04 UTC
LVM now rejects raid creation on VGs with a small extent size. However, the error message is not very informative; would it be possible to change it?

# vgdisplay cache_sanity | grep 'PE Size'
  PE Size               1.00 KiB

# lvcreate --type raid1 -m 1 -L10M -n 1K_origin cache_sanity 
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume cache_sanity-1K_origin (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.

Comment 8 Corey Marthaler 2016-08-01 21:36:01 UTC
The new messages are no better than the original lvconvert messages in comment #0. Moving back to assigned.


[root@host-075 ~]# pvcreate --setphysicalvolumesize 500M /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Physical volume "/dev/sdd1" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sde2" successfully created.
  Physical volume "/dev/sdf2" successfully created.
[root@host-075 ~]# vgcreate -s 1K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Volume group "cache_sanity" successfully created

[root@host-075 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin cache_sanity /dev/sdd1 /dev/sdf1
  Using reduced mirror region size of 32 sectors.
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume cache_sanity-1K_origin (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.
[root@host-075 ~]# lvcreate --type raid1 -m 1 -L 50k -n 1K_cache cache_sanity /dev/sde2 /dev/sdf2
  Using reduced mirror region size of 4 sectors.
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume cache_sanity-1K_cache (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.
[root@host-075 ~]# lvcreate --type raid1 -m 1 -L 12M -n 1K_cache_meta cache_sanity /dev/sde2 /dev/sdf2
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume cache_sanity-1K_cache_meta (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.

Aug  1 16:32:44 host-075 kernel: md: super_written gets error=-5, uptodate=0
Aug  1 16:32:44 host-075 kernel: md/raid1:mdX: Disk failure on dm-3, disabling device.#012md/raid1:mdX: Operation continuing on 1 devices.
Aug  1 16:32:44 host-075 kernel: attempt to access beyond end of device
Aug  1 16:32:44 host-075 kernel: dm-4: rw=7185, want=9, limit=2
Aug  1 16:32:44 host-075 kernel: md: super_written gets error=-5, uptodate=0
Aug  1 16:32:44 host-075 kernel: attempt to access beyond end of device
Aug  1 16:32:44 host-075 kernel: dm-4: rw=7185, want=9, limit=2
Aug  1 16:32:44 host-075 kernel: md: super_written gets error=-5, uptodate=0
Aug  1 16:32:44 host-075 kernel: mdX: bitmap file is out of date, doing full recovery
Aug  1 16:32:44 host-075 kernel: attempt to access beyond end of device
Aug  1 16:32:44 host-075 kernel: dm-4: rw=16, want=9, limit=2
Aug  1 16:32:44 host-075 kernel: mdX: bitmap initialisation failed: -5
Aug  1 16:32:44 host-075 kernel: device-mapper: raid: Failed to load bitmap
Aug  1 16:32:44 host-075 kernel: device-mapper: table: 253:6: raid: preresume failed, error = -5


3.10.0-480.el7.x86_64

lvm2-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016

Comment 9 Heinz Mauelshagen 2016-08-16 11:40:12 UTC
Failure is now caught early on "lvcreate --type raid* ..." with small extent sizes, providing the error message
"Unable to create RAID LV: requires minimum VG extent size 4.00 KiB".

Pushed upstream.

Comment 10 Corey Marthaler 2016-09-07 21:45:49 UTC
This problem has now just been pushed off to VGs with a 4K physical extent size.

If less than 4K, it's not allowed:
[root@host-127 ~]# vgcreate -s 1K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Volume group "cache_sanity" successfully created

[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin cache_sanity /dev/sdd1 /dev/sdf1
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 50k -n 1K_cache cache_sanity /dev/sde2 /dev/sdf2
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 12M -n 1K_cache_meta cache_sanity /dev/sde2 /dev/sdf2
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB



If 4K, then the exact same problem:
[root@host-127 ~]# vgcreate -s 4K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Volume group "cache_sanity" successfully created
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin cache_sanity /dev/sdd1 /dev/sdf1
  Using reduced mirror region size of 32 sectors.
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume cache_sanity-1K_origin (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.

Sep  7 16:38:15 host-127 kernel: dm-2: rw=7185, want=9, limit=8
Sep  7 16:38:15 host-127 kernel: md: super_written gets error=-5, uptodate=0
Sep  7 16:38:15 host-127 kernel: md/raid1:mdX: Disk failure on dm-3, disabling device.#012md/raid1:mdX: Operation continuing on 1 devices.
Sep  7 16:38:15 host-127 kernel: attempt to access beyond end of device
Sep  7 16:38:15 host-127 kernel: dm-4: rw=7185, want=9, limit=8
Sep  7 16:38:15 host-127 kernel: md: super_written gets error=-5, uptodate=0
Sep  7 16:38:15 host-127 kernel: attempt to access beyond end of device
Sep  7 16:38:15 host-127 kernel: dm-4: rw=7185, want=9, limit=8
Sep  7 16:38:15 host-127 kernel: md: super_written gets error=-5, uptodate=0
Sep  7 16:38:15 host-127 kernel: mdX: bitmap file is out of date, doing full recovery
Sep  7 16:38:15 host-127 kernel: attempt to access beyond end of device
Sep  7 16:38:15 host-127 kernel: dm-4: rw=16, want=9, limit=8
Sep  7 16:38:15 host-127 kernel: mdX: bitmap initialisation failed: -5
Sep  7 16:38:15 host-127 kernel: device-mapper: raid: Failed to load bitmap
Sep  7 16:38:15 host-127 kernel: device-mapper: table: 253:6: raid: preresume failed, error = -5



3.10.0-501.el7.x86_64
lvm2-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016

Comment 11 Corey Marthaler 2016-09-07 21:52:44 UTC
Looks like the check needs to require a minimum of 8k, as 8k and 16k appear to work.



[root@host-127 ~]# vgcreate -s 8K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Volume group "cache_sanity" successfully created
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 12k -n 1K_origin1 cache_sanity /dev/sdd1 /dev/sdf1
  Rounding up size to full physical extent 16.00 KiB
  Using reduced mirror region size of 32 sectors.
  Logical volume "1K_origin1" created.
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 2k -n 1K_origin2 cache_sanity /dev/sdd1 /dev/sdf1
  Rounding up size to full physical extent 8.00 KiB
  Using reduced mirror region size of 16 sectors.
  Logical volume "1K_origin2" created.
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin2 cache_sanity /dev/sdd1 /dev/sdf1
  Logical Volume "1K_origin2" already exists in volume group "cache_sanity"
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin3 cache_sanity /dev/sdd1 /dev/sdf1
  Using reduced mirror region size of 32 sectors.
  Logical volume "1K_origin3" created.




[root@host-127 ~]# vgcreate -s 16K cache_sanity /dev/sdd1 /dev/sdf1 /dev/sde2 /dev/sdf2
  Volume group "cache_sanity" successfully created
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 12k -n 1K_origin1 cache_sanity /dev/sdd1 /dev/sdf1
  Rounding up size to full physical extent 16.00 KiB
  Using reduced mirror region size of 32 sectors.
  Logical volume "1K_origin1" created.
[root@host-127 ~]# lvcreate --type raid1 -m 1 -L 80k -n 1K_origin2 cache_sanity /dev/sdd1 /dev/sdf1
  Using reduced mirror region size of 32 sectors.
  Logical volume "1K_origin2" created.

Comment 16 Joseph Kachuck 2016-09-22 13:26:07 UTC
Hello,
This BZ did not make RHEL 7.3. This is now requested for RHEL 7.4.
This BZ should be only for:
One extent is being allocated for the RAID MetaLVs, which is too small to hold the MD bitmap, causing the failure in comment #10.
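As a rough cross-check against the kernel messages above (the sector arithmetic below is inferred from the logs, not stated in the report), a one-extent MetaLV provides:

  1 KiB extent = 1024/512 = 2 sectors    ("want=9, limit=2" in comment #0)
  4 KiB extent = 4096/512 = 8 sectors    ("want=9, limit=8" in comment #10)
  8 KiB extent = 8192/512 = 16 sectors   (an access ending at sector 9 fits, matching comment #11)

so the superblock/bitmap access that reaches sector 9 only fits once the VG extent size is at least 8 KiB.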

The BZ for dynamic MetaLV resizing is 1191935.

Thank You
Joe Kachuck

Comment 17 Jonathan Earl Brassow 2017-05-10 15:21:05 UTC
This minor issue never gets a chance to be worked on... will have to move to 7.5.

Comment 19 Jonathan Earl Brassow 2017-07-27 16:44:10 UTC
Seems to work now; please retest in the context of RHEL 7.5. I'll put this bug in POST to ensure the upstream changes make it into a 7.5 build.

[root@bp-01 ~]# vgcreate -s 4K vg /dev/sd[bcdefghi]1
  Volume group "vg" successfully created
[root@bp-01 ~]# !lvcre
lvcreate --type raid1 -L 80k -n raid1 vg
  Using reduced mirror region size of 16.00 KiB
  Logical volume "raid1" created.
[root@bp-01 ~]# lvcreate --type raid1 -L 50k -n 4k_cachepool vg
  Rounding up size to full physical extent 52.00 KiB
  Using reduced mirror region size of 4.00 KiB
  Logical volume "4k_cachepool" created.
[root@bp-01 ~]# lvcreate --type raid1 -L 12M -n 4k_cachepool_meta vg
  Logical volume "4k_cachepool_meta" created.
[root@bp-01 ~]# lvconvert --yes --type cache-pool vg/4k_cachepool --poolmetadata vg/4k_cachepool_meta --cachemode writeback -c 32
  WARNING: Converting logical volume vg/4k_cachepool and vg/4k_cachepool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted vg/4k_cachepool_cdata to cache pool.
[root@bp-01 ~]# lvconvert --type cache vg/raid1 --cachepool vg/4k_cachepool
Do you want wipe existing metadata of cache pool vg/4k_cachepool? [y/n]: y
  WARNING: Data redundancy is lost with writeback caching of raid logical volume!
  Logical volume vg/raid1 is now cached.


N.B. Not sure what the 'WARNING' is there for on that last convert... will file a bug for that.

Here is the resulting stack of devices as printed by 'lvs':
lvs -o name,segtype,attr,raidsyncaction,syncpercent,devices -a vg
  LV                            Type       Attr       SyncAction Cpy%Sync Devices
  [4k_cachepool]                cache-pool Cwi---C---            0.00     4k_cachepool_cdata(0)
  [4k_cachepool_cdata]          raid1      Cwi-aor--- idle       100.00   4k_cachepool_cdata_rimage_0(0),4k_cachepool_cdata_rimage_1(0)
  [4k_cachepool_cdata_rimage_0] linear     iwi-aor---                     /dev/sdb1(26)
  [4k_cachepool_cdata_rimage_1] linear     iwi-aor---                     /dev/sdc1(26)
  [4k_cachepool_cdata_rmeta_0]  linear     ewi-aor---                     /dev/sdb1(23)
  [4k_cachepool_cdata_rmeta_1]  linear     ewi-aor---                     /dev/sdc1(23)
  [4k_cachepool_cmeta]          raid1      ewi-aor--- idle       100.00   4k_cachepool_cmeta_rimage_0(0),4k_cachepool_cmeta_rimage_1(0)
  [4k_cachepool_cmeta_rimage_0] linear     iwi-aor---                     /dev/sdb1(42)
  [4k_cachepool_cmeta_rimage_1] linear     iwi-aor---                     /dev/sdc1(42)
  [4k_cachepool_cmeta_rmeta_0]  linear     ewi-aor---                     /dev/sdb1(39)
  [4k_cachepool_cmeta_rmeta_1]  linear     ewi-aor---                     /dev/sdc1(39)
  [lvol0_pmspare]               linear     ewi-------                     /dev/sdb1(3114)
  raid1                         cache      Cwi-a-C---            0.00     raid1_corig(0)
  [raid1_corig]                 raid1      rwi-aoC--- idle       100.00   raid1_corig_rimage_0(0),raid1_corig_rimage_1(0)
  [raid1_corig_rimage_0]        linear     iwi-aor---                     /dev/sdb1(3)
  [raid1_corig_rimage_1]        linear     iwi-aor---                     /dev/sdc1(3)
  [raid1_corig_rmeta_0]         linear     ewi-aor---                     /dev/sdb1(0)
  [raid1_corig_rmeta_1]         linear     ewi-aor---                     /dev/sdc1(0)

Comment 20 Jonathan Earl Brassow 2017-07-27 16:53:56 UTC
(In reply to Jonathan Earl Brassow from comment #19)

> [root@bp-01 ~]# lvconvert --type cache vg/raid1 --cachepool vg/4k_cachepool
> Do you want wipe existing metadata of cache pool vg/4k_cachepool? [y/n]: y
>   WARNING: Data redundancy is lost with writeback caching of raid logical
> volume!
>   Logical volume vg/raid1 is now cached.
> 
> 
> N.B. Not sure what the 'WARNING' is there for on that last convert... will
> file a bug for that.

bug 1475975

Comment 24 Corey Marthaler 2017-12-08 23:52:59 UTC
Marking verified in the latest rpms.

3.10.0-811.el7.x86_64
lvm2-2.02.176-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
lvm2-libs-2.02.176-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
lvm2-cluster-2.02.176-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
lvm2-lockd-2.02.176-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
lvm2-python-boom-0.8.1-5.el7    BUILT: Wed Dec  6 04:15:40 CST 2017
cmirror-2.02.176-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
device-mapper-1.02.145-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
device-mapper-libs-1.02.145-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
device-mapper-event-1.02.145-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
device-mapper-event-libs-1.02.145-5.el7    BUILT: Wed Dec  6 04:13:07 CST 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017



++++ vgcreate -s 1K cache_sanity /dev/sdd1 /dev/sda1 /dev/sdf1 /dev/sdg1 ++++
Adding "slow" and "fast" tags to corresponding pvs
raid creation on 1K extent VGs is not supported (BZ 1179970)
Create origin (slow) volume
lvcreate  --type raid1 -m 1 -L 80k -n smallK_origin cache_sanity @slow
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB
couldn't create raid cache origin (slow) volume


++++ vgcreate -s 4K cache_sanity /dev/sdd1 /dev/sda1 /dev/sdf1 /dev/sdg1 ++++
Adding "slow" and "fast" tags to corresponding pvs
Creating small cache origin and cache pool
Create origin (slow) volume
lvcreate  --type raid1 -m 1 -L 80k -R 64k -n smallK_origin cache_sanity @slow

Create cache data and cache metadata (fast) volumes
lvcreate  --type raid1 -m 1 -L 50k -R 32k -n smallK_cache cache_sanity @fast
lvcreate  --type raid1 -m 1 -L 12M -n smallK_cache_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: smq  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writethrough -c 64 --poolmetadata cache_sanity/smallK_cache_meta cache_sanity/smallK_cache
  Size of cache-pool data volume cannot be smaller than chunk size 64.00 KiB.
couldn't create combined cache pool volume
Create cache data and cache metadata (fast) volumes
lvcreate  --type raid1 -m 1 -L 50k -R 32k -n smallK_cache cache_sanity @fast
lvcreate  --type raid1 -m 1 -L 12M -n smallK_cache_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/smallK_cache_meta cache_sanity/smallK_cache
  WARNING: Converting cache_sanity/smallK_cache and cache_sanity/smallK_cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/smallK_cache cache_sanity/smallK_origin
dmsetup status | grep cache_sanity-smallK_origin | grep writethrough | grep -w mq
                                                                                                                                                                                                                             
Separating cache pool (lvconvert --splitcache) cache_sanity/smallK_origin from cache origin
Removing cache pool cache_sanity/smallK_cache

Removing cache origin volume cache_sanity/smallK_origin
lvremove -f /dev/cache_sanity/smallK_origin


++++ vgcreate -s 8K cache_sanity /dev/sdd1 /dev/sda1 /dev/sdf1 /dev/sdg1 ++++
Adding "slow" and "fast" tags to corresponding pvs
Creating small cache origin and cache pool
Create origin (slow) volume
lvcreate  --type raid1 -m 1 -L 80k -R 64k -n smallK_origin cache_sanity @slow

Create cache data and cache metadata (fast) volumes
lvcreate  --type raid1 -m 1 -L 50k -R 32k -n smallK_cache cache_sanity @fast
lvcreate  --type raid1 -m 1 -L 12M -n smallK_cache_meta cache_sanity @fast
                                                                                                                                                                                                                                                                  
Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: cleaner  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy cleaner --cachemode writethrough -c 64 --poolmetadata cache_sanity/smallK_cache_meta cache_sanity/smallK_cache
  Size of cache-pool data volume cannot be smaller than chunk size 64.00 KiB.
couldn't create combined cache pool volume
Create cache data and cache metadata (fast) volumes
lvcreate  --type raid1 -m 1 -L 50k -R 32k -n smallK_cache cache_sanity @fast
lvcreate  --type raid1 -m 1 -L 12M -n smallK_cache_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: cleaner  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy cleaner --cachemode writethrough -c 32 --poolmetadata cache_sanity/smallK_cache_meta cache_sanity/smallK_cache
  WARNING: Converting cache_sanity/smallK_cache and cache_sanity/smallK_cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/smallK_cache cache_sanity/smallK_origin
dmsetup status | grep cache_sanity-smallK_origin | grep writethrough | grep -w cleaner

Separating cache pool (lvconvert --splitcache) cache_sanity/smallK_origin from cache origin
Removing cache pool cache_sanity/smallK_cache

Removing cache origin volume cache_sanity/smallK_origin
lvremove -f /dev/cache_sanity/smallK_origin

Comment 27 errata-xmlrpc 2018-04-10 15:16:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

