Bug 1396309 - Prevent activation of RAID10 with kernels not supporting proper mapping
Summary: Prevent activation of RAID10 with kernels not supporting proper mapping
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1463740
 
Reported: 2016-11-17 23:54 UTC by Corey Marthaler
Modified: 2021-09-03 12:41 UTC
CC List: 8 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1463740 (view as bug list)
Environment:
Last Closed: 2017-06-21 15:47:57 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2016-11-17 23:54:02 UTC
Description of problem:
This is the raid10-specific version of bug 1388962.

## Shared storage 7.2 machine, 3.10.0-327.el7, lvm2-2.02.130-5.el7
[root@host-088 ~]# dmsetup targets|grep raid
raid             v1.7.0

[root@host-088 ~]# lvcreate -i 3 --type raid10 -n raid10 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size (25 extents) up to stripe boundary size (27 extents).
  Logical volume "raid10" created.
[root@host-088 ~]# lvs -a -o +devices
  LV                VG  Attr       LSize   Cpy%Sync Devices
  raid10            VG  rwi-a-r--- 108.00m 100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] VG  iwi-aor---  36.00m          /dev/sda1(1)
  [raid10_rimage_1] VG  iwi-aor---  36.00m          /dev/sdb1(1)
  [raid10_rimage_2] VG  iwi-aor---  36.00m          /dev/sdc1(1)
  [raid10_rimage_3] VG  iwi-aor---  36.00m          /dev/sdd1(1)
  [raid10_rimage_4] VG  iwi-aor---  36.00m          /dev/sde1(1)
  [raid10_rimage_5] VG  iwi-aor---  36.00m          /dev/sdf1(1)
  [raid10_rmeta_0]  VG  ewi-aor---   4.00m          /dev/sda1(0)
  [raid10_rmeta_1]  VG  ewi-aor---   4.00m          /dev/sdb1(0)
  [raid10_rmeta_2]  VG  ewi-aor---   4.00m          /dev/sdc1(0)
  [raid10_rmeta_3]  VG  ewi-aor---   4.00m          /dev/sdd1(0)
  [raid10_rmeta_4]  VG  ewi-aor---   4.00m          /dev/sde1(0)
  [raid10_rmeta_5]  VG  ewi-aor---   4.00m          /dev/sdf1(0)

[root@host-088 ~]# vgchange -an VG
  0 logical volume(s) in volume group "VG" now active


## Shared storage 7.3 machine w/ the lvm fix for bug 1395563, 3.10.0-514.el7, lvm2-2.02.166-1.el7_3.2
[root@host-090 ~]# dmsetup targets|grep raid
raid             v1.9.0

[root@host-090 ~]# pvscan --cache
[root@host-090 ~]# lvs -a -o +devices
  LV                VG  Attr       LSize    Devices
  raid10            VG  rwi---r--- 108.00m  raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] VG  Iwi---r---  36.00m  /dev/sdb1(1)
  [raid10_rimage_1] VG  Iwi---r---  36.00m  /dev/sde1(1)
  [raid10_rimage_2] VG  Iwi---r---  36.00m  /dev/sda1(1)
  [raid10_rimage_3] VG  Iwi---r---  36.00m  /dev/sdd1(1)
  [raid10_rimage_4] VG  Iwi---r---  36.00m  /dev/sdc1(1)
  [raid10_rimage_5] VG  Iwi---r---  36.00m  /dev/sdf1(1)
  [raid10_rmeta_0]  VG  ewi---r---   4.00m  /dev/sdb1(0)
  [raid10_rmeta_1]  VG  ewi---r---   4.00m  /dev/sde1(0)
  [raid10_rmeta_2]  VG  ewi---r---   4.00m  /dev/sda1(0)
  [raid10_rmeta_3]  VG  ewi---r---   4.00m  /dev/sdd1(0)
  [raid10_rmeta_4]  VG  ewi---r---   4.00m  /dev/sdc1(0)
  [raid10_rmeta_5]  VG  ewi---r---   4.00m  /dev/sdf1(0)

[root@host-090 ~]# lvchange -ay VG/raid10
  device-mapper: reload ioctl on (253:14) failed: Invalid argument

Nov 17 17:49:00 host-090 kernel: device-mapper: raid: Reshaping raid sets not yet supported. (raid layout change)
Nov 17 17:49:00 host-090 kernel: device-mapper: raid: #011 0x102 vs 0x0
Nov 17 17:49:00 host-090 kernel: device-mapper: raid: #011 Old layout: near w/ 2 copies
Nov 17 17:49:00 host-090 kernel: ------------[ cut here ]------------
Nov 17 17:49:00 host-090 kernel: WARNING: at drivers/md/dm-raid.c:508 raid10_md_layout_to_format+0x50/0x60 [dm_raid]()
Nov 17 17:49:00 host-090 kernel: Modules linked in: dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx sd_mod crc_t10dif crct10dif_generic crct10dif_common sg iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iptable_filter ppdev pcspkr i6300esb virtio_balloon parport_pc parport i2c_piix4 i2c_core dm_multipath nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c ata_generic pata_acpi virtio_blk virtio_net ata_piix serio_raw libata virtio_pci virtio_ring virtio floppy dm_mirror dm_region_hash dm_log dm_mod
Nov 17 17:49:00 host-090 kernel: CPU: 0 PID: 2982 Comm: lvchange Not tainted 3.10.0-514.el7.x86_64 #1
Nov 17 17:49:00 host-090 kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
Nov 17 17:49:00 host-090 kernel: 0000000000000000 00000000efc84f1e ffff88003d573ab8 ffffffff81685eac
Nov 17 17:49:00 host-090 kernel: ffff88003d573af0 ffffffff81085820 0000000000000000 ffff88002e600498
Nov 17 17:49:00 host-090 kernel: ffff88002e600000 ffff88003b877000 0000000000000000 ffff88003d573b00
Nov 17 17:49:00 host-090 kernel: Call Trace:
Nov 17 17:49:00 host-090 kernel: [<ffffffff81685eac>] dump_stack+0x19/0x1b
Nov 17 17:49:00 host-090 kernel: [<ffffffff81085820>] warn_slowpath_common+0x70/0xb0
Nov 17 17:49:00 host-090 kernel: [<ffffffff8108596a>] warn_slowpath_null+0x1a/0x20
Nov 17 17:49:00 host-090 kernel: [<ffffffffa03b8c20>] raid10_md_layout_to_format+0x50/0x60 [dm_raid]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa03b98fe>] super_validate.part.26+0x78e/0x7b0 [dm_raid]
Nov 17 17:49:00 host-090 kernel: [<ffffffff81237edd>] ? bio_put+0x7d/0xa0
Nov 17 17:49:00 host-090 kernel: [<ffffffff814fc62c>] ? sync_page_io+0x8c/0x110
Nov 17 17:49:00 host-090 kernel: [<ffffffffa03bb371>] raid_ctr+0xd41/0x1680 [dm_raid]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa00050c1>] ? realloc_argv+0x31/0x80 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa00056f7>] dm_table_add_target+0x177/0x460 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa0008e37>] table_load+0x157/0x390 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa0008ce0>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa0009a35>] ctl_ioctl+0x1e5/0x500 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffffa0009d63>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Nov 17 17:49:00 host-090 kernel: [<ffffffff81211ed5>] do_vfs_ioctl+0x2d5/0x4b0
Nov 17 17:49:00 host-090 kernel: [<ffffffff812aea5e>] ? file_has_perm+0xae/0xc0
Nov 17 17:49:00 host-090 kernel: [<ffffffff81292e01>] ? unmerge_queues+0x61/0x70
Nov 17 17:49:00 host-090 kernel: [<ffffffff81212151>] SyS_ioctl+0xa1/0xc0
Nov 17 17:49:00 host-090 kernel: [<ffffffff816964c9>] system_call_fastpath+0x16/0x1b
Nov 17 17:49:00 host-090 kernel: ---[ end trace be565ba4c2478c63 ]---
Nov 17 17:49:00 host-090 kernel: device-mapper: raid: #011 New layout: far w/ 0 copies
Nov 17 17:49:00 host-090 kernel: device-mapper: table: 253:14: raid: Unable to assemble array: Invalid superblocks
Nov 17 17:49:00 host-090 kernel: device-mapper: ioctl: error adding target to table
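
For context, the "0x102 vs 0x0" line above compares the raid10 layout word read from the existing superblock with the one computed for the table being loaded. A minimal sketch of how MD encodes that word (illustrative C only, not the dm-raid source):

/* Decode an MD raid10 layout word as referenced by "0x102 vs 0x0" above.
 * Field positions follow MD's raid10 convention; sketch for clarity only. */
#include <stdio.h>

static void decode_raid10_layout(unsigned int layout)
{
    unsigned int near_copies = layout & 0xff;        /* low byte            */
    unsigned int far_copies  = (layout >> 8) & 0xff; /* next byte           */
    unsigned int use_offset  = layout & 0x10000;     /* bit 16: offset mode */

    printf("layout 0x%x: near=%u far=%u offset=%s\n",
           layout, near_copies, far_copies, use_offset ? "yes" : "no");
}

int main(void)
{
    decode_raid10_layout(0x102); /* old superblock: "near w/ 2 copies"          */
    decode_raid10_layout(0x0);   /* value in the new table, per the log above   */
    return 0;
}

0x102 decodes to "near" with 2 copies, while 0x0 carries no copy information, which the v1.9.0 target then reports as the confusing "far w/ 0 copies".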


Version-Release number of selected component (if applicable):
3.10.0-514.el7.x86_64

lvm2-2.02.166-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
lvm2-libs-2.02.166-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
lvm2-cluster-2.02.166-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
device-mapper-1.02.135-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
device-mapper-libs-1.02.135-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
device-mapper-event-1.02.135-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
device-mapper-event-libs-1.02.135-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.166-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.166-1.el7_3.2    BUILT: Wed Nov 16 04:11:32 CST 2016


How reproducible:
Every time

Comment 3 Heinz Mauelshagen 2017-03-08 13:09:01 UTC
Upstream commit e118b65d651d

Comment 5 Roman Bednář 2017-05-16 13:24:34 UTC
The patch does not seem to fix the problem. Using the same procedure, I was able to verify the raid4 version of essentially the same bug (BZ #1388962), but not raid10.

=============================================

Initial packages (RHEL7.2):
	3.10.0-327.el7.x86_64
	lvm2-2.02.130-5.el7.x86_64

# dmsetup targets|grep raid
raid             v1.7.0


# lvs -a -o lv_name,segtype,devices -S vg_name=vg
  LV                Type   Devices                                                                                                                                                                                      
  raid10            raid10 raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0),raid10_rimage_6(0),raid10_rimage_7(0),raid10_rimage_8(0),raid10_rimage_9(0)
  [raid10_rimage_0] linear /dev/sdi(1)                                                                                                                                                                                  
  [raid10_rimage_1] linear /dev/sdg(1)                                                                                                                                                                                  
  [raid10_rimage_2] linear /dev/sdf(1)                                                                                                                                                                                  
  [raid10_rimage_3] linear /dev/sdb(1)                                                                                                                                                                                  
  [raid10_rimage_4] linear /dev/sdh(1)                                                                                                                                                                                  
  [raid10_rimage_5] linear /dev/sda(1)                                                                                                                                                                                  
  [raid10_rimage_6] linear /dev/sdc(1)                                                                                                                                                                                  
  [raid10_rimage_7] linear /dev/sdd(1)                                                                                                                                                                                  
  [raid10_rimage_8] linear /dev/sde(1)                                                                                                                                                                                  
  [raid10_rimage_9] linear /dev/sdj(1)                                                                                                                                                                                  
  [raid10_rmeta_0]  linear /dev/sdi(0)                                                                                                                                                                                  
  [raid10_rmeta_1]  linear /dev/sdg(0)                                                                                                                                                                                  
  [raid10_rmeta_2]  linear /dev/sdf(0)                                                                                                                                                                                  
  [raid10_rmeta_3]  linear /dev/sdb(0)                                                                                                                                                                                  
  [raid10_rmeta_4]  linear /dev/sdh(0)                                                                                                                                                                                  
  [raid10_rmeta_5]  linear /dev/sda(0)                                                                                                                                                                                  
  [raid10_rmeta_6]  linear /dev/sdc(0)                                                                                                                                                                                  
  [raid10_rmeta_7]  linear /dev/sdd(0)                                                                                                                                                                                  
  [raid10_rmeta_8]  linear /dev/sde(0)                                                                                                                                                                                  
  [raid10_rmeta_9]  linear /dev/sdj(0)  

# vgchange -an vg

BEFORE PATCH:
Upgrade to 3.10.0-514.el7 and lvm2-2.02.166-1.el7

# dmsetup targets|grep raid
raid             v1.9.0

# pvscan --cache

# pvscan
  PV /dev/vda2   VG rhel_virt-369   lvm2 [7.51 GiB / 40.00 MiB free]
  PV /dev/sdi    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdg    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdf    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdb    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdh    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sda    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdc    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdd    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sde    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdj    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  Total: 11 [407.43 GiB] / in use: 11 [407.43 GiB] / in no VG: 0 [0   ]


# lvchange -ay vg/raid10
  device-mapper: reload ioctl on (253:22) failed: Invalid argument


May 16 14:58:45 virt-369 kernel: device-mapper: raid: #011 New layout: far w/ 0 copies
May 16 14:58:45 virt-369 kernel: device-mapper: table: 253:22: raid: Unable to assemble array: Invalid superblocks
May 16 14:58:45 virt-369 kernel: device-mapper: ioctl: error adding target to table
May 16 14:58:45 virt-369 multipathd: dm-22: remove map (uevent)
May 16 14:58:45 virt-369 multipathd: dm-22: remove map (uevent)


AFTER PATCH:
Upgrade to lvm2-2.02.171-1.el7.x86_64

# dmsetup targets|grep raid
raid             v1.9.0

# uname -r
3.10.0-514.el7.x86_64

# pvscan --cache

# pvscan
  PV /dev/sdi    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdg    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdf    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdb    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdh    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sda    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdc    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdd    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sde    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdj    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/vda2   VG rhel_virt-369   lvm2 [7.51 GiB / 40.00 MiB free]
  Total: 11 [407.43 GiB] / in use: 11 [407.43 GiB] / in no VG: 0 [0   ]

# rpm -q lvm2
lvm2-2.02.171-1.el7.x86_64

# lvchange -ay vg/raid10
  device-mapper: reload ioctl on  (253:22) failed: Invalid argument

May 16 15:06:22 virt-369 kernel: device-mapper: raid: Reshaping raid sets not yet supported. (raid layout change)
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 0x102 vs 0x0
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 Old layout: near w/ 2 copies
May 16 15:06:22 virt-369 kernel: ------------[ cut here ]------------
May 16 15:06:22 virt-369 kernel: WARNING: at drivers/md/dm-raid.c:508 raid10_md_layout_to_format+0x50/0x60 [dm_raid]()
May 16 15:06:22 virt-369 kernel: Modules linked in: dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx sd_mod crc_t10dif crct10dif_generic crct10dif_common sg iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iptable_filter ppdev pcspkr i6300esb virtio_balloon i2c_piix4 i2c_core parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c ata_generic pata_acpi virtio_net virtio_blk ata_piix serio_raw virtio_pci virtio_ring virtio libata floppy dm_mirror dm_region_hash dm_log dm_mod
May 16 15:06:22 virt-369 kernel: CPU: 0 PID: 3211 Comm: lvchange Tainted: G        W      ------------   3.10.0-514.el7.x86_64 #1
May 16 15:06:22 virt-369 kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
May 16 15:06:22 virt-369 kernel: 0000000000000000 0000000040aeb462 ffff88003ce43ab8 ffffffff81685eac
May 16 15:06:22 virt-369 kernel: ffff88003ce43af0 ffffffff81085820 0000000000000000 ffff88003cae4498
May 16 15:06:22 virt-369 kernel: ffff88003cae4000 ffff880036de3000 0000000000000000 ffff88003ce43b00
May 16 15:06:22 virt-369 kernel: Call Trace:
May 16 15:06:22 virt-369 kernel: [<ffffffff81685eac>] dump_stack+0x19/0x1b
May 16 15:06:22 virt-369 kernel: [<ffffffff81085820>] warn_slowpath_common+0x70/0xb0
May 16 15:06:22 virt-369 kernel: [<ffffffff8108596a>] warn_slowpath_null+0x1a/0x20
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bcc20>] raid10_md_layout_to_format+0x50/0x60 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bd8fe>] super_validate.part.26+0x78e/0x7b0 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffff81237edd>] ? bio_put+0x7d/0xa0
May 16 15:06:22 virt-369 kernel: [<ffffffff814fc62c>] ? sync_page_io+0x8c/0x110
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bf371>] raid_ctr+0xd41/0x1680 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffffa00050c1>] ? realloc_argv+0x31/0x80 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa00056f7>] dm_table_add_target+0x177/0x460 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0008e37>] table_load+0x157/0x390 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0008ce0>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0009a35>] ctl_ioctl+0x1e5/0x500 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0009d63>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffff81211ed5>] do_vfs_ioctl+0x2d5/0x4b0
May 16 15:06:22 virt-369 kernel: [<ffffffff812aea5e>] ? file_has_perm+0xae/0xc0
May 16 15:06:22 virt-369 kernel: [<ffffffff81212151>] SyS_ioctl+0xa1/0xc0
May 16 15:06:22 virt-369 kernel: [<ffffffff816964c9>] system_call_fastpath+0x16/0x1b
May 16 15:06:22 virt-369 kernel: ---[ end trace 11faf4a83e1d08f1 ]---
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 New layout: far w/ 0 copies
May 16 15:06:22 virt-369 kernel: device-mapper: table: 253:22: raid: Unable to assemble array: Invalid superblocks
May 16 15:06:22 virt-369 kernel: device-mapper: ioctl: error adding target to table

=============================================

3.10.0-514.el7.x86_64

lvm2-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 17:15:46 CEST 2017

Comment 6 Heinz Mauelshagen 2017-06-06 18:34:22 UTC
The dm-raid target calls raid10_md_layout_to_format() with a layout of zero, thus causing the WARN_ON(). The solution could be to remove the WARN_ON() and return "unknown" for the layout, or to avoid calling the function in this case.
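
A minimal sketch of the first option (return "unknown" instead of warning), assuming a helper shaped roughly like raid10_md_layout_to_format(); this is not the actual RHEL or upstream patch:

/* Sketch: map an MD raid10 layout word to a dm-raid format string without
 * the WARN_ON(); an unrecognized/zero layout falls through to "unknown".
 * Shape assumed for illustration; not the shipped kernel code. */
static const char *raid10_md_layout_to_format(int layout)
{
    int near_copies = layout & 0xff;
    int far_copies  = (layout >> 8) & 0xff;

    if (layout & 0x10000)   /* offset mode */
        return "offset";
    if (near_copies > 1)
        return "near";
    if (far_copies > 1)
        return "far";

    return "unknown";       /* previously: WARN_ON() on an unexpected layout */
}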

Comment 7 Heinz Mauelshagen 2017-06-06 18:35:49 UTC
The kernel properly rejects activating the mapping, as it does with raid4, but the additional WARNING is confusing.

Comment 8 Heinz Mauelshagen 2017-06-19 15:22:51 UTC
Conclusion:
- the activation is properly rejected
- the kernel emits a larger message than necessary, which can be
  trimmed to avoid confusion

Moving to 7.5 to address the latter.

Comment 10 Heinz Mauelshagen 2017-06-21 15:47:57 UTC
Closing in favour of bug 1463740, which keeps track of the kernel message improvement.

