Bug 1451822 - rhel7.3 activation issues of rhel7.4 created raid types
Summary: rhel7.3 activation issues of rhel7.4 created raid types
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-17 15:09 UTC by Corey Marthaler
Modified: 2021-09-03 12:49 UTC
CC List: 7 users

Fixed In Version: lvm2-2.02.171-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 21:54:18 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2017:2222
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2017-08-01 18:42:41 UTC

Description Corey Marthaler 2017-05-17 15:09:21 UTC
Description of problem:
This does not appear to affect raid1; all other raid types are affected, however.

The following types (raid4, 5, 6, 10, 0) were created on a rhel7.4 machine, deactivated, then scanned on the rhel7.3 machine, where activation was attempted.
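
For reference, a minimal sketch of that reproduction flow, assuming a VG named "activator1" on shared storage visible to both hosts (the VG/LV names here are illustrative, not taken from the original run):

# on the rhel7.4 host: create one of the affected raid types, then deactivate the VG
lvcreate --activate ey --type raid4 -i 3 -n lvol0 -L 100M activator1
vgchange -an activator1

# on the rhel7.3 host: rescan devices and attempt activation
pvscan --cache
lvchange -ay activator1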


# rhel7.3

# raid4 and raid5 issues all looked like this:
[root@harding-02 ~]# pvscan --cache
[root@harding-02 ~]# lvchange -ay activator1
  device-mapper: reload ioctl on (253:46) failed: Invalid argument

May 17 09:41:51 harding-02 kernel: device-mapper: table: 253:46: raid: takeover not possible
May 17 09:41:51 harding-02 kernel: device-mapper: ioctl: error adding target to table



# raid0, raid6 and raid10 issues all looked like this:
[root@harding-02 ~]# pvscan --cache
[root@harding-02 ~]# lvchange -ay activator1
  device-mapper: create ioctl on activator1-lvol0_rmeta_0LVM-rk2rZRu5Mgu6E1tdM5EVbCur3AqKTgllds9gwmRiK556cN8hD7WMV08fotasb6hn failed: Device or resource busy
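
The "Device or resource busy" failure on the _rmeta_ sub-LV suggests a stale device-mapper mapping may be left behind by the failed activation. A possible way to inspect and clean that up by hand (an assumption about the cause, not a confirmed diagnosis; the mapping name below is illustrative):

dmsetup info -c | grep rmeta                 # list any leftover *_rmeta_* mappings
dmsetup remove activator1-lvol0_rmeta_0      # remove a stale mapping by name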




Version-Release number of selected component (if applicable):
7.3 machine:

3.10.0-514.el7.x86_64
lvm2-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016


7.4 machine:

3.10.0-651.el7.x86_64

lvm2-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017

Comment 2 Zdenek Kabelac 2017-05-17 15:27:46 UTC
This is a 'different' case.

You get an 'unknown' segtype, so this case is working fine.

Comment 3 Alasdair Kergon 2017-05-18 14:46:32 UTC
So something is missing from the new metadata to identify it as not-possible-to-activate on the old systems.

Once again, all the cases need to be identified and then the metadata changed. We also need to check that the new userspace code checks the running kernel version correctly (i.e. 7.4 userspace booted with a 7.3 kernel should not show this problem).
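
As a rough illustration of such a kernel-side check (a sketch only, not necessarily how lvm2 implements it): the running kernel advertises the version of its dm-raid target, which is what determines whether features like raid takeover are available, and it can be inspected with dmsetup:

dmsetup targets | grep raid    # prints the dm-raid target and its version on the running kernel;
                               # a 7.3 kernel reports an older dm-raid version than a 7.4 kernel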

Comment 5 Heinz Mauelshagen 2017-06-14 15:20:51 UTC
Upstream commits 
1c916ec5ffd37cfb7be2101b93a2dc91aa2ef7f0
14d563accc7692dfd827a4db91912c9ab498ca1f

Comment 7 Corey Marthaler 2017-06-23 22:37:58 UTC
Looks like we still have raid4 issues on 7.3 with 7.4-created raid volumes. Everything (except for raid0, obviously) works on 7.2 though. Are we fine with the raid4 errors remaining for verification?


# 7.4 system (with today's latest test kernel)                                                                                                                                                                                   
3.10.0-686.el7.bz1464274.x86_64
lvm2-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
                                                                                                                                                                       
                                                                                                                                                                                                             
[root@host-127 ~]# vgcreate VG /dev/sd[abcdefgh]1                                                                                                                                                            
  Volume group "VG" successfully created                                                                                                                                                                                 
                                                                                                                                                                                                                                           
[root@host-127 ~]# lvcreate --activate ey --type raid1 -m 1 -n raid1 -L 100M VG                                                                                                                                                            
  Logical volume "raid1" created.                                                                                                                                                                                                          
[root@host-127 ~]# lvcreate --activate ey --type raid4 -i 3 -n raid4 -L 100M VG                                                                                                                                                                   
  Using default stripesize 64.00 KiB.                                                                                                                                                                                                                     
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).                                                                                                                                                                     
  Logical volume "raid4" created.                                                                                                                                                                                                                                    
[root@host-127 ~]# lvcreate --activate ey --type raid5 -i 3 -n raid5 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "raid5" created.
[root@host-127 ~]# lvcreate --activate ey --type raid6 -i 3 -n raid6 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "raid6" created.
[root@host-127 ~]# lvcreate --activate ey --type raid10 -i 3 -n raid10 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "raid10" created.
[root@host-127 ~]# lvcreate --activate ey --type raid0 -i 3 -n raid0 -L 100M VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "raid0" created.
[root@host-127 ~]# lvs -o +segtype
  LV     VG Attr       LSize   Cpy%Sync Type  
  raid0  VG rwi-a-r--- 108.00m          raid0 
  raid1  VG rwi-a-r--- 100.00m 100.00   raid1 
  raid10 VG rwi-a-r--- 108.00m 100.00   raid10
  raid4  VG rwi-a-r--- 108.00m 100.00   raid4 
  raid5  VG rwi-a-r--- 108.00m 100.00   raid5 
  raid6  VG rwi-a-r--- 108.00m 100.00   raid6 
[root@host-127 ~]# vgchange -an VG
  0 logical volume(s) in volume group "VG" now active




# 7.3 system
3.10.0-514.el7.x86_64

[root@host-130 ~]# pvscan --cache
[root@host-130 ~]# lvs -o +segtype
  LV     VG Attr       LSize   Cpy%Sync Type  
  raid0  VG rwi---r--- 108.00m          raid0 
  raid1  VG rwi---r--- 100.00m          raid1 
  raid10 VG rwi---r--- 108.00m          raid10
  raid4  VG rwi---r--- 108.00m          raid4 
  raid5  VG rwi---r--- 108.00m          raid5 
  raid6  VG rwi---r--- 108.00m          raid6 
[root@host-130 ~]# 
[root@host-130 ~]# lvchange -ay VG/raid0
[root@host-130 ~]# lvchange -ay VG/raid1
[root@host-130 ~]# lvchange -ay VG/raid10
[root@host-130 ~]# lvchange -ay VG/raid4
  device-mapper: reload ioctl on (253:32) failed: Invalid argument

Jun 23 17:21:26 host-130 kernel: device-mapper: table: 253:32: raid: takeover not possible
Jun 23 17:21:26 host-130 kernel: device-mapper: ioctl: error adding target to table

[root@host-130 ~]# lvchange -ay VG/raid5
[root@host-130 ~]# lvchange -ay VG/raid6
[root@host-130 ~]# vgchange -an VG
  0 logical volume(s) in volume group "VG" now active




# 7.2 system
3.10.0-327.el7.x86_64

[root@host-132 ~]# pvscan --cache
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid1
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid10
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid4
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid5
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid6
  WARNING: Unrecognised segment type raid0
[root@host-132 ~]# lvchange -ay VG/raid0
  WARNING: Unrecognised segment type raid0
  Refusing activation of LV raid0 containing an unrecognised segment.
[root@host-132 ~]# lvs -o +segtype
  WARNING: Unrecognised segment type raid0
  LV     VG Attr       LSize   Cpy%Sync Type  
  raid0  VG vwi---u--- 108.00m          raid0 
  raid1  VG rwi-a-r--- 100.00m 100.00   raid1 
  raid10 VG rwi-a-r--- 108.00m 100.00   raid10
  raid4  VG rwi-a-r--- 108.00m 100.00   raid4 
  raid5  VG rwi-a-r--- 108.00m 100.00   raid5 
  raid6  VG rwi-a-r--- 108.00m 100.00   raid6

Comment 8 Corey Marthaler 2017-06-27 20:36:15 UTC
Marking this verified in the latest rpms/kernel, with the caveat/behavior listed in comment #7.

3.10.0-688.el7.x86_64

lvm2-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-libs-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
lvm2-cluster-2.02.171-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-event-libs-1.02.140-7.el7    BUILT: Thu Jun 22 08:35:15 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017

Comment 9 errata-xmlrpc 2017-08-01 21:54:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

