Bug 1451822
| Summary: | rhel7.3 activation issues of rhel7.4 created raid types | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | ||
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, zkabelac |
| Version: | 7.4 | ||
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | lvm2-2.02.171-5.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-08-01 21:54:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Corey Marthaler
2017-05-17 15:09:21 UTC
This is a 'different' case. You get an 'unknown' segtype, so this case is working fine. Something is still missing from the new metadata to identify it as not-possible-to-activate on the old systems. Once again, all the cases need to be identified, and then the metadata changed. We also need to check that the new userspace code checks the running kernel version correctly (i.e. 7.4 userspace booted with a 7.3 kernel should not show this problem).

Upstream commits:
1c916ec5ffd37cfb7be2101b93a2dc91aa2ef7f0
14d563accc7692dfd827a4db91912c9ab498ca1f

It looks like we still have raid4 issues on 7.3 with 7.4-created raid volumes. Everything (except for raid0, obviously) works on 7.2 though. Are we fine with raid4 errors remaining for verification?

```
# 7.4 system (with today's latest test kernel)
3.10.0-686.el7.bz1464274.x86_64
lvm2-2.02.171-7.el7             BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7        BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7     BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017

[root@host-127 ~]# vgcreate VG /dev/sd[abcdefgh]1
  Volume group "VG" successfully created

[root@host-127 ~]# lvcreate --activate ey --type raid1 -m 1 -n raid1 -L 100M VG
  Logical volume "raid1" created.

[root@host-127 ~]# lvcreate --activate ey --type raid4 -i 3 -n raid4 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raid4" created.

[root@host-127 ~]# lvcreate --activate ey --type raid5 -i 3 -n raid5 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raid5" created.

[root@host-127 ~]# lvcreate --activate ey --type raid6 -i 3 -n raid6 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raid6" created.

[root@host-127 ~]# lvcreate --activate ey --type raid10 -i 3 -n raid10 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raid10" created.

[root@host-127 ~]# lvcreate --activate ey --type raid0 -i 3 -n raid0 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raid0" created.
```
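Not part of the original transcript, but as a hedged illustration of the "check the running kernel version" point above: lvm2 decides what it can activate by querying the kernel's registered device-mapper target versions (the same data `dmsetup targets` prints), so comparing that output on the two hosts shows which dm-raid target each kernel provides. A minimal sketch using only standard device-mapper commands; the comments describe the behaviour seen in this report, not guaranteed output:

```
# Load the dm-raid module (a no-op if raid LVs are already active), then list
# the registered target versions; the line starting with "raid" is the dm-raid
# target the running kernel provides.
modprobe dm-raid
dmsetup targets | grep -w raid
# On the hosts in this report, the 7.4 kernel registers a newer "raid" target
# than the 7.3 kernel; the raid4 layout written under the newer target is the
# one the older target rejects with "raid: takeover not possible".
```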
```
# 7.4 system (continued)
[root@host-127 ~]# lvs -o +segtype
  LV     VG  Attr       LSize   Cpy%Sync Type
  raid0  VG  rwi-a-r--- 108.00m          raid0
  raid1  VG  rwi-a-r--- 100.00m 100.00   raid1
  raid10 VG  rwi-a-r--- 108.00m 100.00   raid10
  raid4  VG  rwi-a-r--- 108.00m 100.00   raid4
  raid5  VG  rwi-a-r--- 108.00m 100.00   raid5
  raid6  VG  rwi-a-r--- 108.00m 100.00   raid6

[root@host-127 ~]# vgchange -an VG
  0 logical volume(s) in volume group "VG" now active

# 7.3 system
3.10.0-514.el7.x86_64

[root@host-130 ~]# pvscan --cache
[root@host-130 ~]# lvs -o +segtype
  LV     VG  Attr       LSize   Cpy%Sync Type
  raid0  VG  rwi---r--- 108.00m          raid0
  raid1  VG  rwi---r--- 100.00m          raid1
  raid10 VG  rwi---r--- 108.00m          raid10
  raid4  VG  rwi---r--- 108.00m          raid4
  raid5  VG  rwi---r--- 108.00m          raid5
  raid6  VG  rwi---r--- 108.00m          raid6

[root@host-130 ~]# lvchange -ay VG/raid0
[root@host-130 ~]# lvchange -ay VG/raid1
[root@host-130 ~]# lvchange -ay VG/raid10
[root@host-130 ~]# lvchange -ay VG/raid4
  device-mapper: reload ioctl on (253:32) failed: Invalid argument

Jun 23 17:21:26 host-130 kernel: device-mapper: table: 253:32: raid: takeover not possible
Jun 23 17:21:26 host-130 kernel: device-mapper: ioctl: error adding target to table

[root@host-130 ~]# lvchange -ay VG/raid5
[root@host-130 ~]# lvchange -ay VG/raid6
[root@host-130 ~]# vgchange -an VG
  0 logical volume(s) in volume group "VG" now active

# 7.2 system
3.10.0-327.el7.x86_64

[root@host-132 ~]# pvscan --cache
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid1
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid10
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid4
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid5
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid6
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid0
  WARNING: Unrecognised segment type raid0
  Refusing activation of LV raid0 containing an unrecognised segment.

[root@host-132 ~]# lvs -o +segtype
  WARNING: Unrecognised segment type raid0
  LV     VG  Attr       LSize   Cpy%Sync Type
  raid0  VG  vwi---u--- 108.00m          raid0
  raid1  VG  rwi-a-r--- 100.00m 100.00   raid1
  raid10 VG  rwi-a-r--- 108.00m 100.00   raid10
  raid4  VG  rwi-a-r--- 108.00m 100.00   raid4
  raid5  VG  rwi-a-r--- 108.00m 100.00   raid5
  raid6  VG  rwi-a-r--- 108.00m 100.00   raid6
```

Marking verified in the latest rpms/kernel with the caveat/behavior listed in comment #7.

```
3.10.0-688.el7.x86_64
lvm2-2.02.171-7.el7                              BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7                         BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7                      BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7                     BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-libs-1.02.140-7.el7                BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-1.02.140-7.el7               BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-libs-1.02.140-7.el7          BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7  BUILT: Mon Mar 27 10:15:46 CDT 2017
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222
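For anyone re-running this scenario, the steps above condense to the following sketch; the host names (host-127, host-130), devices, VG name, and sizes are those used in this report and are assumptions anywhere else, and both hosts are assumed to see the same shared PVs:

```
# On the 7.4 host (host-127): create one LV per raid segment type, then
# deactivate the VG so the older host can attempt activation.
vgcreate VG /dev/sd[abcdefgh]1
lvcreate --activate ey --type raid1 -m 1 -n raid1 -L 100M VG
for t in raid4 raid5 raid6 raid10 raid0; do
    lvcreate --activate ey --type "$t" -i 3 -n "$t" -L 100M VG
done
vgchange -an VG

# On the 7.3 host (host-130): rescan and try to activate each LV. With the
# fixed packages everything except raid4 activates; raid4 still fails with
# "raid: takeover not possible", the caveat noted at verification time.
pvscan --cache
for t in raid0 raid1 raid10 raid4 raid5 raid6; do
    lvchange -ay "VG/$t"
done
vgchange -an VG
```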