Bug 1326426 - raid10 leg repair size calculation is wrong
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-12 16:04 UTC by Corey Marthaler
Modified: 2016-11-04 04:20 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 04:20:15 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1445 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2016-04-12 16:04:58 UTC
Description of problem:

### Current RHEL6.8 rpms                                                                                                                                                                                         
                                                                                                                                                                                                    
[root@host-113 ~]# lvcreate -L 1G -i 3 --type raid10 -n raid10 vg                                                                                                                                   
  Using default stripesize 64.00 KiB.                                                                                                                                                               
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.01 GiB (258 extents).                                                                                                                                     
  Logical volume "raid10" created.                                                                                                                                                                                            
[root@host-113 ~]#                                                                                                                                                                                                            
[root@host-113 ~]# lvs -a -o +devices                                                                                                                                                                                         
  LV                VG  Attr       LSize    Cpy%Sync Devices                                                                                                                                                                  
  raid10            vg  rwi-a-r---   1.01g  100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)                                                                              
  [raid10_rimage_0] vg  iwi-aor--- 344.00m           /dev/sda1(1)                                                                                                                                                                                   
  [raid10_rimage_1] vg  iwi-aor--- 344.00m           /dev/sdb1(1)                                                                                                                                                                                   
  [raid10_rimage_2] vg  iwi-aor--- 344.00m           /dev/sdc1(1)                                                                                                                                                                                   
  [raid10_rimage_3] vg  iwi-aor--- 344.00m           /dev/sdd1(1)                                                                                                                                                                                   
  [raid10_rimage_4] vg  iwi-aor--- 344.00m           /dev/sde1(1)                                                                                                                                                                                             
  [raid10_rimage_5] vg  iwi-aor--- 344.00m           /dev/sdf1(1)                                                                                                                                                                                             
  [raid10_rmeta_0]  vg  ewi-aor---   4.00m           /dev/sda1(0)                                                                                                                                                                                             
  [raid10_rmeta_1]  vg  ewi-aor---   4.00m           /dev/sdb1(0)                                                                                                                                                                                                 
  [raid10_rmeta_2]  vg  ewi-aor---   4.00m           /dev/sdc1(0)                                                                                                                                                                                                 
  [raid10_rmeta_3]  vg  ewi-aor---   4.00m           /dev/sdd1(0)                                                                                                                                                                                                     
  [raid10_rmeta_4]  vg  ewi-aor---   4.00m           /dev/sde1(0)
  [raid10_rmeta_5]  vg  ewi-aor---   4.00m           /dev/sdf1(0)

[root@host-113 ~]# echo offline > /sys/block/sda/device/state
[root@host-113 ~]#  dd if=/dev/urandom of=/dev/vg/raid10
^C
3793+0 records in
3793+0 records out
1942016 bytes (1.9 MB) copied, 0.560893 s, 3.5 MB/s

Apr 12 10:43:40 host-075 lvm[3702]: Faulty devices in vg/raid10 successfully replaced.

[root@host-113 ~]# lvs -a -o +devices
  /dev/sda1: open failed: No such device or address
  Couldn't find device with uuid pZHaZK-dCc0-n22d-FWY1-umqJ-0gER-Lx1D1g.
  LV                VG  Attr       LSize    Cpy%Sync Devices
  raid10            vg  rwi-a-r---   1.01g  100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] vg  iwi-aor--- 344.00m           /dev/sdg1(1)    <- REPLACED LEG IS 344.00M IN SIZE
  [raid10_rimage_1] vg  iwi-aor--- 344.00m           /dev/sdb1(1)
  [raid10_rimage_2] vg  iwi-aor--- 344.00m           /dev/sdc1(1)
  [raid10_rimage_3] vg  iwi-aor--- 344.00m           /dev/sdd1(1)
  [raid10_rimage_4] vg  iwi-aor--- 344.00m           /dev/sde1(1)
  [raid10_rimage_5] vg  iwi-aor--- 344.00m           /dev/sdf1(1)
  [raid10_rmeta_0]  vg  ewi-aor---   4.00m           /dev/sdg1(0)
  [raid10_rmeta_1]  vg  ewi-aor---   4.00m           /dev/sdb1(0)
  [raid10_rmeta_2]  vg  ewi-aor---   4.00m           /dev/sdc1(0)
  [raid10_rmeta_3]  vg  ewi-aor---   4.00m           /dev/sdd1(0)
  [raid10_rmeta_4]  vg  ewi-aor---   4.00m           /dev/sde1(0)
  [raid10_rmeta_5]  vg  ewi-aor---   4.00m           /dev/sdf1(0)





### RHEL7.2 rpms

[root@host-075 ~]# lvcreate -L 1G -i 3 --type raid10 -n raid10 vg
  Using default stripesize 64.00 KiB.
  Rounding size (256 extents) up to stripe boundary size (258 extents).
  Logical volume "raid10" created.
[root@host-075 ~]# lvs -a -o +devices
  LV                VG  Attr       LSize    Cpy%Sync Devices
  raid10            vg  rwi-a-r---   1.01g  100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] vg  iwi-aor--- 344.00m           /dev/sda1(1)
  [raid10_rimage_1] vg  iwi-aor--- 344.00m           /dev/sdb1(1)
  [raid10_rimage_2] vg  iwi-aor--- 344.00m           /dev/sdc1(1)
  [raid10_rimage_3] vg  iwi-aor--- 344.00m           /dev/sdd1(1)
  [raid10_rimage_4] vg  iwi-aor--- 344.00m           /dev/sde1(1)
  [raid10_rimage_5] vg  iwi-aor--- 344.00m           /dev/sdf1(1)
  [raid10_rmeta_0]  vg  ewi-aor---   4.00m           /dev/sda1(0)
  [raid10_rmeta_1]  vg  ewi-aor---   4.00m           /dev/sdb1(0)
  [raid10_rmeta_2]  vg  ewi-aor---   4.00m           /dev/sdc1(0)
  [raid10_rmeta_3]  vg  ewi-aor---   4.00m           /dev/sdd1(0)
  [raid10_rmeta_4]  vg  ewi-aor---   4.00m           /dev/sde1(0)
  [raid10_rmeta_5]  vg  ewi-aor---   4.00m           /dev/sdf1(0)

[root@host-075 ~]# echo offline > /sys/block/sda/device/state
[root@host-075 ~]# dd if=/dev/urandom of=/dev/vg/raid10
^C
5625+0 records in
5625+0 records out
2880000 bytes (2.9 MB) copied, 1.02476 s, 2.8 MB/s

Apr 12 10:43:40 host-075 lvm[3702]: Faulty devices in vg/raid10 successfully replaced.

[root@host-075 ~]# lvs -a -o +devices
  WARNING: Device for PV 0Ze5cU-TnyI-VcJD-MDYI-uwhP-E4Pd-mUFUSL not found or rejected by a filter.
  LV                VG  Attr       LSize    Cpy%Sync Devices 
  raid10            vg  rwi-a-r---   1.01g  100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] vg  iwi-aor---   1.01g           /dev/sdg1(1)    <- REPLACED LEG IS THE 1.01G TOTAL VOLUME SIZE
  [raid10_rimage_1] vg  iwi-aor--- 344.00m           /dev/sdb1(1)
  [raid10_rimage_2] vg  iwi-aor--- 344.00m           /dev/sdc1(1)
  [raid10_rimage_3] vg  iwi-aor--- 344.00m           /dev/sdd1(1)
  [raid10_rimage_4] vg  iwi-aor--- 344.00m           /dev/sde1(1)
  [raid10_rimage_5] vg  iwi-aor--- 344.00m           /dev/sdf1(1)
  [raid10_rmeta_0]  vg  ewi-aor---   4.00m           /dev/sdg1(0)
  [raid10_rmeta_1]  vg  ewi-aor---   4.00m           /dev/sdb1(0)
  [raid10_rmeta_2]  vg  ewi-aor---   4.00m           /dev/sdc1(0)
  [raid10_rmeta_3]  vg  ewi-aor---   4.00m           /dev/sdd1(0)
  [raid10_rmeta_4]  vg  ewi-aor---   4.00m           /dev/sde1(0)
  [raid10_rmeta_5]  vg  ewi-aor---   4.00m           /dev/sdf1(0)
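For reference, the expected leg size follows directly from the layout in the transcripts above: a raid10 LV built with 3 stripes and 2 mirrors has 6 rimage sub-LVs, and each image should hold the LV's data divided by the stripe count, not the full LV size. A minimal sketch of the arithmetic (extent size and extent counts are taken from the output above; the variable names are illustrative):

```python
# Expected size of each raid10 rimage sub-LV, using values from the
# transcripts above (4 MiB physical extents, 258-extent LV, 3 stripes).
EXTENT_MIB = 4          # physical extent size in this VG
lv_extents = 258        # 1.00 GiB rounded up to the 3-stripe boundary
stripes = 3             # raid10 with 3 stripes x 2 mirrors = 6 images

image_extents = lv_extents // stripes      # 86 extents per image
image_mib = image_extents * EXTENT_MIB     # 344 MiB, matching the healthy legs

print(image_mib)  # 344
```

The RHEL 6.8 repair allocates this 344 MiB per replaced leg; the RHEL 7.2 bug instead allocates the full 1.01 GiB (1032 MiB) LV size for the replacement image.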




Version-Release number of selected component (if applicable):
3.10.0-327.4.4.el7.x86_64
lvm2-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015

Comment 2 Heinz Mauelshagen 2016-07-06 13:14:36 UTC
Corey,

this is fixed in current 7.3 rpms.
Please confirm.

Comment 4 Corey Marthaler 2016-07-07 21:35:00 UTC
Fix verified in the latest rpms.

lvm2-2.02.160-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                                                              
lvm2-libs-2.02.160-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                                            
lvm2-cluster-2.02.160-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                              
device-mapper-1.02.130-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                             
device-mapper-libs-1.02.130-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                   
device-mapper-event-1.02.130-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                 
device-mapper-event-libs-1.02.130-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                         
device-mapper-persistent-data-0.6.2-0.1.rc8.el7    BUILT: Wed May  4 02:56:34 CDT 2016                                                                                                
cmirror-2.02.160-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                         
sanlock-3.3.0-1.el7    BUILT: Wed Feb 24 09:52:30 CST 2016                                                                                                                            
sanlock-lib-3.3.0-1.el7    BUILT: Wed Feb 24 09:52:30 CST 2016                                                                                                                         
lvm2-lockd-2.02.160-1.el7    BUILT: Wed Jul  6 11:16:47 CDT 2016                                                                                                                        
                                                                                                                                                                                          
                                                                                                                                                                                            


[root@host-075 ~]# lvcreate -L 1G -i 3 --type raid10 -n raid10 vg
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.01 GiB (258 extents).
  Logical volume "raid10" created.
[root@host-075 ~]# lvs -a -o +devices
  LV                VG Attr       LSize   Cpy%Sync Devices
  raid10            vg rwi-a-r---   1.01g 100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] vg iwi-aor--- 344.00m          /dev/sda1(1)
  [raid10_rimage_1] vg iwi-aor--- 344.00m          /dev/sdb1(1)
  [raid10_rimage_2] vg iwi-aor--- 344.00m          /dev/sdc1(1)
  [raid10_rimage_3] vg iwi-aor--- 344.00m          /dev/sdd1(1)
  [raid10_rimage_4] vg iwi-aor--- 344.00m          /dev/sde1(1)
  [raid10_rimage_5] vg iwi-aor--- 344.00m          /dev/sdf1(1)
  [raid10_rmeta_0]  vg ewi-aor---   4.00m          /dev/sda1(0)
  [raid10_rmeta_1]  vg ewi-aor---   4.00m          /dev/sdb1(0)
  [raid10_rmeta_2]  vg ewi-aor---   4.00m          /dev/sdc1(0)
  [raid10_rmeta_3]  vg ewi-aor---   4.00m          /dev/sdd1(0)
  [raid10_rmeta_4]  vg ewi-aor---   4.00m          /dev/sde1(0)
  [raid10_rmeta_5]  vg ewi-aor---   4.00m          /dev/sdf1(0)

[root@host-075 ~]# echo offline > /sys/block/sda/device/state
[root@host-075 ~]# dd if=/dev/urandom of=/dev/vg/raid10
^C
29873+0 records in
29873+0 records out
15294976 bytes (15 MB) copied, 7.88979 s, 1.9 MB/s

Jul  7 16:26:23 host-075 lvm[1328]: Faulty devices in vg/raid10 successfully replaced.
Jul  7 16:26:23 host-075 lvm[1328]: Device #0 of raid10 array, vg-raid10, has failed.
Jul  7 16:26:23 host-075 lvm[1328]: WARNING: Device for PV 852lfr-U0P2-wa1x-2WG5-buf3-rzvy-n4ZfMB not found or rejected by a filter.
Jul  7 16:26:23 host-075 lvm[1328]: WARNING: Device for PV 852lfr-U0P2-wa1x-2WG5-buf3-rzvy-n4ZfMB not found or rejected by a filter.
Jul  7 16:26:23 host-075 lvm[1328]: Faulty devices in vg/raid10 successfully replaced.

[root@host-075 ~]# lvs -a -o +devices
  WARNING: Device for PV 852lfr-U0P2-wa1x-2WG5-buf3-rzvy-n4ZfMB not found or rejected by a filter.
  LV                VG Attr       LSize   Cpy%Sync Devices
  raid10            vg rwi-a-r---   1.01g 100.00   raid10_rimage_0(0),raid10_rimage_1(0),raid10_rimage_2(0),raid10_rimage_3(0),raid10_rimage_4(0),raid10_rimage_5(0)
  [raid10_rimage_0] vg iwi-aor--- 344.00m          /dev/sdg1(1)
  [raid10_rimage_1] vg iwi-aor--- 344.00m          /dev/sdb1(1)
  [raid10_rimage_2] vg iwi-aor--- 344.00m          /dev/sdc1(1)
  [raid10_rimage_3] vg iwi-aor--- 344.00m          /dev/sdd1(1)
  [raid10_rimage_4] vg iwi-aor--- 344.00m          /dev/sde1(1)
  [raid10_rimage_5] vg iwi-aor--- 344.00m          /dev/sdf1(1)
  [raid10_rmeta_0]  vg ewi-aor---   4.00m          /dev/sdg1(0)
  [raid10_rmeta_1]  vg ewi-aor---   4.00m          /dev/sdb1(0)
  [raid10_rmeta_2]  vg ewi-aor---   4.00m          /dev/sdc1(0)
  [raid10_rmeta_3]  vg ewi-aor---   4.00m          /dev/sdd1(0)
  [raid10_rmeta_4]  vg ewi-aor---   4.00m          /dev/sde1(0)
  [raid10_rmeta_5]  vg ewi-aor---   4.00m          /dev/sdf1(0)

Comment 6 errata-xmlrpc 2016-11-04 04:20:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

