Bug 1396309
| Summary: | Prevent activation of RAID10 with kernels not supporting proper mapping | | | |
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> | |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> | |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> | |
| Status: | CLOSED NOTABUG | Docs Contact: | | |
| Severity: | urgent | | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, rbednar, zkabelac | |
| Version: | 7.3 | | | |
| Target Milestone: | rc | | | |
| Target Release: | --- | | | |
| Hardware: | x86_64 | | | |
| OS: | Linux | | | |
| Whiteboard: | | | | |
| Fixed In Version: | lvm2-2.02.169-1.el7 | Doc Type: | If docs needed, set a value | |
| Doc Text: | | Story Points: | --- | |
| Clone Of: | | | | |
| : | 1463740 (view as bug list) | Environment: | | |
| Last Closed: | 2017-06-21 15:47:57 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | | |
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | | |
| Cloudforms Team: | --- | Target Upstream Version: | | |
| Embargoed: | | | | |
| Bug Depends On: | | | | |
| Bug Blocks: | 1463740 | | | |
Description
Corey Marthaler
2016-11-17 23:54:02 UTC
Upstream commit e118b65d651d. The patch does not seem to fix the problem. I was able to verify the raid4 version of basically the same bug (BZ #1388962), but not raid10, using the same procedure.

=============================================

Initial packages (RHEL 7.2): 3.10.0-327.el7.x86_64, lvm2-2.02.130-5.el7.x86_64

```
# dmsetup targets|grep raid
raid    v1.7.0

# lvs -a -o lv_name,segtype,devices -S vg_name=vg
  LV                 Type   Devices
  raid10             raid10 raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0),raid10_rimage_6(0),raid10_rimage_7(0),raid10_rimage_8(0),raid10_rimage_9(0)
  [raid10_rimage_0]  linear /dev/sdi(1)
  [raid10_rimage_1]  linear /dev/sdg(1)
  [raid10_rimage_2]  linear /dev/sdf(1)
  [raid10_rimage_3]  linear /dev/sdb(1)
  [raid10_rimage_4]  linear /dev/sdh(1)
  [raid10_rimage_5]  linear /dev/sda(1)
  [raid10_rimage_6]  linear /dev/sdc(1)
  [raid10_rimage_7]  linear /dev/sdd(1)
  [raid10_rimage_8]  linear /dev/sde(1)
  [raid10_rimage_9]  linear /dev/sdj(1)
  [raid10_rmeta_0]   linear /dev/sdi(0)
  [raid10_rmeta_1]   linear /dev/sdg(0)
  [raid10_rmeta_2]   linear /dev/sdf(0)
  [raid10_rmeta_3]   linear /dev/sdb(0)
  [raid10_rmeta_4]   linear /dev/sdh(0)
  [raid10_rmeta_5]   linear /dev/sda(0)
  [raid10_rmeta_6]   linear /dev/sdc(0)
  [raid10_rmeta_7]   linear /dev/sdd(0)
  [raid10_rmeta_8]   linear /dev/sde(0)
  [raid10_rmeta_9]   linear /dev/sdj(0)

# vgchange -an vg
```

BEFORE PATCH: upgrade to 3.10.0-514.el7 and lvm2-2.02.166-1.el7

```
# dmsetup targets|grep raid
raid    v1.9.0

# pvscan --cache
# pvscan
  PV /dev/vda2   VG rhel_virt-369   lvm2 [7.51 GiB / 40.00 MiB free]
  PV /dev/sdi    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdg    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdf    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdb    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdh    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sda    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdc    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdd    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sde    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdj    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  Total: 11 [407.43 GiB] / in use: 11 [407.43 GiB] / in no VG: 0 [0 ]

# lvchange -ay vg/raid10
  device-mapper: reload ioctl on (253:22) failed: Invalid argument
```

Syslog:

```
May 16 14:58:45 virt-369 kernel: device-mapper: raid: #011 New layout: far w/ 0 copies
May 16 14:58:45 virt-369 kernel: device-mapper: table: 253:22: raid: Unable to assemble array: Invalid superblocks
May 16 14:58:45 virt-369 kernel: device-mapper: ioctl: error adding target to table
May 16 14:58:45 virt-369 multipathd: dm-22: remove map (uevent)
May 16 14:58:45 virt-369 multipathd: dm-22: remove map (uevent)
```

AFTER PATCH: upgrade to lvm2-2.02.171-1.el7.x86_64

```
# dmsetup targets|grep raid
raid    v1.9.0

# uname -r
3.10.0-514.el7.x86_64

# pvscan --cache
# pvscan
  PV /dev/sdi    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdg    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdf    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdb    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdh    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sda    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdc    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdd    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sde    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/sdj    VG vg              lvm2 [39.99 GiB / 39.79 GiB free]
  PV /dev/vda2   VG rhel_virt-369   lvm2 [7.51 GiB / 40.00 MiB free]
  Total: 11 [407.43 GiB] / in use: 11 [407.43 GiB] / in no VG: 0 [0 ]

# rpm -q lvm2
lvm2-2.02.171-1.el7.x86_64

# lvchange -ay vg/raid10
  device-mapper: reload ioctl on (253:22) failed: Invalid argument
```

Syslog (note the additional WARNING and stack trace):

```
May 16 15:06:22 virt-369 kernel: device-mapper: raid: Reshaping raid sets not yet supported. (raid layout change)
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 0x102 vs 0x0
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 Old layout: near w/ 2 copies
May 16 15:06:22 virt-369 kernel: ------------[ cut here ]------------
May 16 15:06:22 virt-369 kernel: WARNING: at drivers/md/dm-raid.c:508 raid10_md_layout_to_format+0x50/0x60 [dm_raid]()
May 16 15:06:22 virt-369 kernel: Modules linked in: dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx sd_mod crc_t10dif crct10dif_generic crct10dif_common sg iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iptable_filter ppdev pcspkr i6300esb virtio_balloon i2c_piix4 i2c_core parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c ata_generic pata_acpi virtio_net virtio_blk ata_piix serio_raw virtio_pci virtio_ring virtio libata floppy dm_mirror dm_region_hash dm_log dm_mod
May 16 15:06:22 virt-369 kernel: CPU: 0 PID: 3211 Comm: lvchange Tainted: G W ------------ 3.10.0-514.el7.x86_64 #1
May 16 15:06:22 virt-369 kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
May 16 15:06:22 virt-369 kernel: 0000000000000000 0000000040aeb462 ffff88003ce43ab8 ffffffff81685eac
May 16 15:06:22 virt-369 kernel: ffff88003ce43af0 ffffffff81085820 0000000000000000 ffff88003cae4498
May 16 15:06:22 virt-369 kernel: ffff88003cae4000 ffff880036de3000 0000000000000000 ffff88003ce43b00
May 16 15:06:22 virt-369 kernel: Call Trace:
May 16 15:06:22 virt-369 kernel: [<ffffffff81685eac>] dump_stack+0x19/0x1b
May 16 15:06:22 virt-369 kernel: [<ffffffff81085820>] warn_slowpath_common+0x70/0xb0
May 16 15:06:22 virt-369 kernel: [<ffffffff8108596a>] warn_slowpath_null+0x1a/0x20
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bcc20>] raid10_md_layout_to_format+0x50/0x60 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bd8fe>] super_validate.part.26+0x78e/0x7b0 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffff81237edd>] ? bio_put+0x7d/0xa0
May 16 15:06:22 virt-369 kernel: [<ffffffff814fc62c>] ? sync_page_io+0x8c/0x110
May 16 15:06:22 virt-369 kernel: [<ffffffffa03bf371>] raid_ctr+0xd41/0x1680 [dm_raid]
May 16 15:06:22 virt-369 kernel: [<ffffffffa00050c1>] ? realloc_argv+0x31/0x80 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa00056f7>] dm_table_add_target+0x177/0x460 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0008e37>] table_load+0x157/0x390 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0008ce0>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0009a35>] ctl_ioctl+0x1e5/0x500 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffffa0009d63>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
May 16 15:06:22 virt-369 kernel: [<ffffffff81211ed5>] do_vfs_ioctl+0x2d5/0x4b0
May 16 15:06:22 virt-369 kernel: [<ffffffff812aea5e>] ? file_has_perm+0xae/0xc0
May 16 15:06:22 virt-369 kernel: [<ffffffff81212151>] SyS_ioctl+0xa1/0xc0
May 16 15:06:22 virt-369 kernel: [<ffffffff816964c9>] system_call_fastpath+0x16/0x1b
May 16 15:06:22 virt-369 kernel: ---[ end trace 11faf4a83e1d08f1 ]---
May 16 15:06:22 virt-369 kernel: device-mapper: raid: #011 New layout: far w/ 0 copies
May 16 15:06:22 virt-369 kernel: device-mapper: table: 253:22: raid: Unable to assemble array: Invalid superblocks
May 16 15:06:22 virt-369 kernel: device-mapper: ioctl: error adding target to table
```

=============================================

```
3.10.0-514.el7.x86_64

lvm2-2.02.171-1.el7                              BUILT: Wed May  3 14:05:13 CEST 2017
lvm2-libs-2.02.171-1.el7                         BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-1.02.140-1.el7                     BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-libs-1.02.140-1.el7                BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-1.02.140-1.el7               BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-event-libs-1.02.140-1.el7          BUILT: Wed May  3 14:05:13 CEST 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7  BUILT: Mon Mar 27 17:15:46 CEST 2017
```

The dm-raid target calls raid10_md_layout_to_format() with a layout of zero, which triggers the WARN_ON(). One solution would be to remove the WARN_ON() and return "unknown" for the layout; another would be to avoid calling the function in this case. The kernel properly refuses to activate the mapping, just as it does for raid4, but the additional WARNING is confusing.

Conclusion:
- the activation is properly rejected
- the kernel emits a larger message than necessary, which can be trimmed to avoid confusion

Moving to 7.5 to address the latter. Closing in favour of 1463740, which keeps track of the kernel message improvement.