Bug 1851529 - raid5 and raid6 + integrity can not be extended
Summary: raid5 and raid6 + integrity can not be extended
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-26 20:32 UTC by Corey Marthaler
Modified: 2021-09-07 11:50 UTC
CC: 9 users

Fixed In Version: lvm2-2.03.09-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 02:00:38 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: None


Links:
Red Hat Product Errata RHBA-2020:4546 (last updated 2020-11-04 02:00:51 UTC)

Description Corey Marthaler 2020-06-26 20:32:42 UTC
Description of problem:
The lvmraid man page lists LV reduction as a current limitation of raid+integrity, but not LV extension. Extension appears to work with raid levels 1, 4, and 10, but not with 5 and 6.
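
For quick reference, the reproduction pattern in the transcripts below comes down to creating a raid5 or raid6 LV with integrity enabled and then extending it. A condensed sketch (VG name and sizes are taken from the runs below):

  # Condensed reproducer sketch: create a raid5 LV with integrity, then extend it.
  # VG name and sizes follow the transcripts below; adjust to your environment.
  lvcreate --raidintegrity y --type raid5 -R 256.00k -i 5 -n raid5 -l 720 centipede2
  lvextend -L +1G /dev/centipede2/raid5   # fails with "integrity: The device is too small"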
 
# RAID4 (PASSES)
[root@hayes-01 ~]# lvcreate --raidintegrity y --type raid4 -R 256.00k -i 2 -n raid4 -l 720 centipede2
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV raid4_rimage_0_imeta with size 12.00 MiB.
  Logical volume "raid4_rimage_0_imeta" created.
  Creating integrity metadata LV raid4_rimage_1_imeta with size 12.00 MiB.
  Logical volume "raid4_rimage_1_imeta" created.
  Creating integrity metadata LV raid4_rimage_2_imeta with size 12.00 MiB.
  Logical volume "raid4_rimage_2_imeta" created.
  Logical volume "raid4" created.

[root@hayes-01 ~]# lvextend -L +1G /dev/centipede2/raid4
  Using stripesize of last segment 64.00 KiB
  Size of logical volume centipede2/raid4 changed from 2.81 GiB (720 extents) to 3.81 GiB (976 extents).
  Logical volume centipede2/raid4 successfully resized.


# RAID5
[root@hayes-01 ~]# lvcreate --raidintegrity y --type raid5 -R 256.00k -i 5 -n raid5 -l 720 centipede2
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV raid5_rimage_0_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_0_imeta" created.
  Creating integrity metadata LV raid5_rimage_1_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_1_imeta" created.
  Creating integrity metadata LV raid5_rimage_2_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_2_imeta" created.
  Creating integrity metadata LV raid5_rimage_3_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_3_imeta" created.
  Creating integrity metadata LV raid5_rimage_4_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_4_imeta" created.
  Creating integrity metadata LV raid5_rimage_5_imeta with size 8.00 MiB.
  Logical volume "raid5_rimage_5_imeta" created.
  Logical volume "raid5" created.

[root@hayes-01 ~]# lvextend -L +1G /dev/centipede2/raid5
  Using stripesize of last segment 64.00 KiB
  Rounding size (976 extents) up to stripe boundary size for segment (980 extents).
  Size of logical volume centipede2/raid5 changed from 2.81 GiB (720 extents) to <3.83 GiB (980 extents).
  device-mapper: reload ioctl on  (253:61) failed: Invalid argument
  Failed to lock logical volume centipede2/raid5.

hayes-01 lvm[1404]: raid5_ls array, centipede2-raid5, is now in-sync.
Jun 26 15:29:06 hayes-01 kernel: device-mapper: table: 253:61: integrity: The device is too small
Jun 26 15:29:06 hayes-01 kernel: device-mapper: ioctl: error adding target to table


# RAID6
[root@hayes-01 ~]# lvcreate --raidintegrity y --type raid6_zr -R 256.00k -i 5 -n raid6_zr -l 720 centipede2
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV raid6_zr_rimage_0_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_0_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_1_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_1_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_2_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_2_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_3_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_3_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_4_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_4_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_5_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_5_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_6_imeta with size 8.00 MiB.
  Logical volume "raid6_zr_rimage_6_imeta" created.
  Logical volume "raid6_zr" created.

[root@hayes-01 ~]# lvextend -L +1G /dev/centipede2/raid6_zr
  Using stripesize of last segment 64.00 KiB
  Rounding size (976 extents) up to stripe boundary size for segment (980 extents).
  Size of logical volume centipede2/raid6_zr changed from 2.81 GiB (720 extents) to <3.83 GiB (980 extents).
  device-mapper: reload ioctl on  (253:27) failed: Invalid argument
  Failed to lock logical volume centipede2/raid6_zr.
Jun 26 15:30:31 hayes-01 kernel: device-mapper: table: 253:27: integrity: The device is too small
Jun 26 15:30:31 hayes-01 kernel: device-mapper: ioctl: error adding target to table


Version-Release number of selected component (if applicable):
kernel-4.18.0-215.el8    BUILT: Tue Jun 16 14:14:53 CDT 2020
lvm2-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
lvm2-libs-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
lvm2-dbusd-2.03.09-2.el8    BUILT: Fri May 29 11:32:49 CDT 2020
lvm2-lockd-2.03.09-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
boom-boot-1.2-1.el8    BUILT: Sun Jun  7 07:20:03 CDT 2020
device-mapper-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-libs-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-event-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-event-libs-1.02.171-2.el8    BUILT: Fri May 29 11:29:58 CDT 2020
device-mapper-persistent-data-0.8.5-3.el8    BUILT: Wed Nov 27 07:05:21 CST 2019


How reproducible:
Every time

Comment 2 David Teigland 2020-06-26 21:58:48 UTC
This seems to be related to the size of the LV. If I run the same commands but add 1G to an existing 1G LV, it works, and our test suite, which uses smaller sizes, works; but when I use the same sizes as above, I see the same failure.
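
A hypothetical way to confirm the size dependence would be to probe increasing creation sizes and note where the extend starts failing; the loop below is purely illustrative (the VG name and extent counts are placeholders, not values from this report):

  # Hypothetical probe for the failing size range (illustrative only).
  for extents in 240 480 720 960; do
      lvcreate -y --raidintegrity y --type raid5 -i 5 -n probe -l $extents centipede2
      lvextend -L +1G centipede2/probe || echo "extend failed at $extents extents"
      lvremove -y centipede2/probe
  done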

Comment 3 David Teigland 2020-06-29 19:50:00 UTC
When lvm calculates the necessary metadata size (4MB for each 500MB of data, plus 4MB), it does not account for "initial_sectors", which is much smaller than the number of normal metadata sectors. At certain sizes, the growth of initial_sectors can make the metadata device too small. I am currently working to confirm the ratio needed for initial_sectors.
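
As an illustration only, the sizing rule described above can be sketched as follows; the 4 MiB allowance for initial_sectors is an assumed placeholder, and lvm2's actual calculation (see the commit in comment 4) differs in its exact constants and rounding:

  # Rough sketch of the sizing rule described above; not the code from the fix.
  data_mb=976                               # per-image data size in MiB
  meta_mb=$(( (data_mb / 500) * 4 + 4 ))    # 4 MiB per 500 MiB of data, plus 4 MiB
  initial_mb=4                              # assumed extra room for dm-integrity initial_sectors
  echo "estimated imeta size: $(( meta_mb + initial_mb )) MiB"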

Comment 4 David Teigland 2020-06-30 21:54:49 UTC
Pushed to master:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=ad773511c59aea239592c014a2dab4161ed92214


(The tests below are using fewer stripes than shown above, but my tests with this many stripes still had the same problem.)


[root@null-04 ~]# lvcreate --raidintegrity y --type raid5 -R 256.00k -i 4 -n rr -l 720 test
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV rr_rimage_0_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Creating integrity metadata LV rr_rimage_2_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_2_imeta" created.
  Creating integrity metadata LV rr_rimage_3_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_3_imeta" created.
  Creating integrity metadata LV rr_rimage_4_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_4_imeta" created.
  Logical volume "rr" created.

[root@null-04 ~]# lvs -a test
  LV                  VG   Attr       LSize   Origin              Cpy%Sync 
  rr                  test rwi-a-r---   2.81g                     100.00  
  [rr_rimage_0]       test gwi-aor--- 720.00m [rr_rimage_0_iorig] 100.00  
  [rr_rimage_0_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_0_iorig] test -wi-ao---- 720.00m                             
  [rr_rimage_1]       test gwi-aor--- 720.00m [rr_rimage_1_iorig] 100.00  
  [rr_rimage_1_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_1_iorig] test -wi-ao---- 720.00m                             
  [rr_rimage_2]       test gwi-aor--- 720.00m [rr_rimage_2_iorig] 100.00  
  [rr_rimage_2_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_2_iorig] test -wi-ao---- 720.00m                             
  [rr_rimage_3]       test gwi-aor--- 720.00m [rr_rimage_3_iorig] 100.00  
  [rr_rimage_3_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_3_iorig] test -wi-ao---- 720.00m                             
  [rr_rimage_4]       test gwi-aor--- 720.00m [rr_rimage_4_iorig] 100.00  
  [rr_rimage_4_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_4_iorig] test -wi-ao---- 720.00m                             
  [rr_rmeta_0]        test ewi-aor---   4.00m                             
  [rr_rmeta_1]        test ewi-aor---   4.00m                             
  [rr_rmeta_2]        test ewi-aor---   4.00m                             
  [rr_rmeta_3]        test ewi-aor---   4.00m                             
  [rr_rmeta_4]        test ewi-aor---   4.00m       

[root@null-04 ~]# lvextend -L+1G test/rr
  Using stripesize of last segment 64.00 KiB
  Size of logical volume test/rr changed from 2.81 GiB (720 extents) to 3.81 GiB (976 extents).
  Logical volume test/rr successfully resized.

[root@null-04 ~]# lvs -a test
  LV                  VG   Attr       LSize   Origin              Cpy%Sync 
  rr                  test rwi-a-r---   3.81g                     91.09   
  [rr_rimage_0]       test gwi-aor--- 976.00m [rr_rimage_0_iorig] 100.00  
  [rr_rimage_0_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_0_iorig] test -wi-ao---- 976.00m                             
  [rr_rimage_1]       test gwi-aor--- 976.00m [rr_rimage_1_iorig] 100.00  
  [rr_rimage_1_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_1_iorig] test -wi-ao---- 976.00m                             
  [rr_rimage_2]       test gwi-aor--- 976.00m [rr_rimage_2_iorig] 100.00  
  [rr_rimage_2_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_2_iorig] test -wi-ao---- 976.00m                             
  [rr_rimage_3]       test gwi-aor--- 976.00m [rr_rimage_3_iorig] 100.00  
  [rr_rimage_3_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_3_iorig] test -wi-ao---- 976.00m                             
  [rr_rimage_4]       test gwi-aor--- 976.00m [rr_rimage_4_iorig] 100.00  
  [rr_rimage_4_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_4_iorig] test -wi-ao---- 976.00m                             
  [rr_rmeta_0]        test ewi-aor---   4.00m                             
  [rr_rmeta_1]        test ewi-aor---   4.00m                             
  [rr_rmeta_2]        test ewi-aor---   4.00m                             
  [rr_rmeta_3]        test ewi-aor---   4.00m                             
  [rr_rmeta_4]        test ewi-aor---   4.00m                         


[root@null-04 ~]# lvcreate --raidintegrity y --type raid6_zr -R 256.00k -i 3 -n rr -l 720 test
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV rr_rimage_0_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Creating integrity metadata LV rr_rimage_2_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_2_imeta" created.
  Creating integrity metadata LV rr_rimage_3_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_3_imeta" created.
  Creating integrity metadata LV rr_rimage_4_imeta with size 16.00 MiB.
  Logical volume "rr_rimage_4_imeta" created.
  Logical volume "rr" created.

[root@null-04 ~]# lvs -a test
  LV                  VG   Attr       LSize   Origin              Cpy%Sync 
  rr                  test rwi-a-r---   2.81g                     100.00  
  [rr_rimage_0]       test gwi-aor--- 960.00m [rr_rimage_0_iorig] 100.00  
  [rr_rimage_0_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_0_iorig] test -wi-ao---- 960.00m                             
  [rr_rimage_1]       test gwi-aor--- 960.00m [rr_rimage_1_iorig] 100.00  
  [rr_rimage_1_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_1_iorig] test -wi-ao---- 960.00m                             
  [rr_rimage_2]       test gwi-aor--- 960.00m [rr_rimage_2_iorig] 100.00  
  [rr_rimage_2_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_2_iorig] test -wi-ao---- 960.00m                             
  [rr_rimage_3]       test gwi-aor--- 960.00m [rr_rimage_3_iorig] 100.00  
  [rr_rimage_3_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_3_iorig] test -wi-ao---- 960.00m                             
  [rr_rimage_4]       test gwi-aor--- 960.00m [rr_rimage_4_iorig] 100.00  
  [rr_rimage_4_imeta] test ewi-ao----  16.00m                             
  [rr_rimage_4_iorig] test -wi-ao---- 960.00m                             
  [rr_rmeta_0]        test ewi-aor---   4.00m                             
  [rr_rmeta_1]        test ewi-aor---   4.00m                             
  [rr_rmeta_2]        test ewi-aor---   4.00m                             
  [rr_rmeta_3]        test ewi-aor---   4.00m                             
  [rr_rmeta_4]        test ewi-aor---   4.00m            

[root@null-04 ~]# lvextend -L+1G test/rr
  Using stripesize of last segment 64.00 KiB
  Rounding size (976 extents) up to stripe boundary size for segment (978 extents).
  Size of logical volume test/rr changed from 2.81 GiB (720 extents) to 3.82 GiB (978 extents).
  Logical volume test/rr successfully resized.

[root@null-04 ~]# lvs -a test
  LV                  VG   Attr       LSize  Origin              Cpy%Sync 
  rr                  test rwi-a-r---  3.82g                     100.00  
  [rr_rimage_0]       test gwi-aor---  1.27g [rr_rimage_0_iorig] 100.00  
  [rr_rimage_0_imeta] test ewi-ao---- 44.00m                             
  [rr_rimage_0_iorig] test -wi-ao----  1.27g                             
  [rr_rimage_1]       test gwi-aor---  1.27g [rr_rimage_1_iorig] 100.00  
  [rr_rimage_1_imeta] test ewi-ao---- 44.00m                             
  [rr_rimage_1_iorig] test -wi-ao----  1.27g                             
  [rr_rimage_2]       test gwi-aor---  1.27g [rr_rimage_2_iorig] 100.00  
  [rr_rimage_2_imeta] test ewi-ao---- 44.00m                             
  [rr_rimage_2_iorig] test -wi-ao----  1.27g                             
  [rr_rimage_3]       test gwi-aor---  1.27g [rr_rimage_3_iorig] 100.00  
  [rr_rimage_3_imeta] test ewi-ao---- 44.00m                             
  [rr_rimage_3_iorig] test -wi-ao----  1.27g                             
  [rr_rimage_4]       test gwi-aor---  1.27g [rr_rimage_4_iorig] 100.00  
  [rr_rimage_4_imeta] test ewi-ao---- 44.00m                             
  [rr_rimage_4_iorig] test -wi-ao----  1.27g                             
  [rr_rmeta_0]        test ewi-aor---  4.00m                             
  [rr_rmeta_1]        test ewi-aor---  4.00m                             
  [rr_rmeta_2]        test ewi-aor---  4.00m                             
  [rr_rmeta_3]        test ewi-aor---  4.00m                             
  [rr_rmeta_4]        test ewi-aor---  4.00m

Comment 7 Corey Marthaler 2020-08-18 03:13:11 UTC
Fix verified in the latest rpms.

kernel-4.18.0-232.el8    BUILT: Mon Aug 10 02:17:54 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-dbusd-2.03.09-5.el8    BUILT: Wed Aug 12 15:49:44 CDT 2020
lvm2-lockd-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020


[root@hayes-01 ~]# lvcreate --raidintegrity y --type raid5 -R 256.00k -i 5 -n raid5 -l 720 centipede2
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV raid5_rimage_0_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_0_imeta" created.
  Creating integrity metadata LV raid5_rimage_1_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_1_imeta" created.
  Creating integrity metadata LV raid5_rimage_2_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_2_imeta" created.
  Creating integrity metadata LV raid5_rimage_3_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_3_imeta" created.
  Creating integrity metadata LV raid5_rimage_4_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_4_imeta" created.
  Creating integrity metadata LV raid5_rimage_5_imeta with size 16.00 MiB.
  Logical volume "raid5_rimage_5_imeta" created.
  Logical volume "raid5" created.
[root@hayes-01 ~]# lvextend -L +1G /dev/centipede2/raid5
  Using stripesize of last segment 64.00 KiB
  Rounding size (976 extents) up to stripe boundary size for segment (980 extents).
  Size of logical volume centipede2/raid5 changed from 2.81 GiB (720 extents) to <3.83 GiB (980 extents).
  Logical volume centipede2/raid5 successfully resized.

[root@hayes-01 ~]# lvcreate --raidintegrity y --type raid6_zr -R 256.00k -i 5 -n raid6_zr -l 720 centipede2
  Using default stripesize 64.00 KiB.
  Creating integrity metadata LV raid6_zr_rimage_0_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_0_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_1_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_1_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_2_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_2_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_3_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_3_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_4_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_4_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_5_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_5_imeta" created.
  Creating integrity metadata LV raid6_zr_rimage_6_imeta with size 16.00 MiB.
  Logical volume "raid6_zr_rimage_6_imeta" created.
  Logical volume "raid6_zr" created.
[root@hayes-01 ~]# lvextend -L +1G /dev/centipede2/raid6_zr
  Using stripesize of last segment 64.00 KiB
  Rounding size (976 extents) up to stripe boundary size for segment (980 extents).
  Size of logical volume centipede2/raid6_zr changed from 2.81 GiB (720 extents) to <3.83 GiB (980 extents).
  Logical volume centipede2/raid6_zr successfully resized.

Comment 10 errata-xmlrpc 2020-11-04 02:00:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546

