Bug 834703 - unable to extend a striped raid (4|5|6)
Summary: unable to extend a striped raid (4|5|6)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 840699 852441
 
Reported: 2012-06-22 20:18 UTC by Corey Marthaler
Modified: 2013-02-21 08:10 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.97-2.el6
Doc Type: Bug Fix
Doc Text:
Extending a RAID 4/5/6 logical volume failed because the parity devices were not correctly accounted for. This has been corrected, and it is now possible to extend a RAID 4/5/6 logical volume.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:10:54 UTC
Target Upstream Version:
Embargoed:


Attachments
Patch that fixes problem (1.24 KB, patch) - 2012-07-02 22:54 UTC, Jonathan Earl Brassow
Fix version 2 - still not perfect. (2.09 KB, patch) - 2012-07-03 00:06 UTC, Jonathan Earl Brassow


Links
Red Hat Product Errata RHBA-2013:0501 (priority normal, status SHIPPED_LIVE): lvm2 bug fix and enhancement update. Last updated 2013-02-20 21:30:45 UTC

Description Corey Marthaler 2012-06-22 20:18:42 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Corey Marthaler 2012-06-22 20:22:09 UTC
This works with mirrored raid (raid1) but not with any of the striped raids.

RAID6:

[root@hayes-01 bin]# lvs -a -o +devices
  LV                     VG          Attr     LSize   Devices
  full_extend            raid_sanity Rwi-a-r- 12.00m  full_extend_rimage_0(0),full_extend_rimage_1(0),full_extend_rimage_2(0),full_extend_rimage_3(0),full_extend_rimage_4(0)
  [full_extend_rimage_0] raid_sanity iwi-aor-  4.00m  /dev/etherd/e1.1p9(1)
  [full_extend_rimage_1] raid_sanity iwi-aor-  4.00m  /dev/etherd/e1.1p8(1)
  [full_extend_rimage_2] raid_sanity iwi-aor-  4.00m  /dev/etherd/e1.1p7(1)
  [full_extend_rimage_3] raid_sanity iwi-aor-  4.00m  /dev/etherd/e1.1p6(1)
  [full_extend_rimage_4] raid_sanity iwi-aor-  4.00m  /dev/etherd/e1.1p5(1)
  [full_extend_rmeta_0]  raid_sanity ewi-aor-  4.00m  /dev/etherd/e1.1p9(0)
  [full_extend_rmeta_1]  raid_sanity ewi-aor-  4.00m  /dev/etherd/e1.1p8(0)
  [full_extend_rmeta_2]  raid_sanity ewi-aor-  4.00m  /dev/etherd/e1.1p7(0)
  [full_extend_rmeta_3]  raid_sanity ewi-aor-  4.00m  /dev/etherd/e1.1p6(0)
  [full_extend_rmeta_4]  raid_sanity ewi-aor-  4.00m  /dev/etherd/e1.1p5(0)

[root@hayes-01 bin]# dd if=/dev/zero of=/dev/raid_sanity/full_extend
dd: writing to `/dev/raid_sanity/full_extend': No space left on device
24577+0 records in
24576+0 records out
12582912 bytes (13 MB) copied, 1.05469 s, 11.9 MB/s

[root@hayes-01 bin]# lvextend -L 50M /dev/raid_sanity/full_extend
  Rounding size to boundary between physical extents: 52.00 MiB
  Extending logical volume full_extend to 52.00 MiB
  Internal error: _alloc_init called for non-virtual segment with no disk space.


Version:
2.6.32-278.el6.x86_64
lvm2-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-libs-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-cluster-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
cmirror-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012

Comment 2 Corey Marthaler 2012-06-22 20:37:48 UTC
Looks like this doesn't require a full striped raid device; any extend attempt fails.
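
A minimal reproduction of that (a sketch, not from the original report; it assumes the same raid_sanity VG with at least five free PVs, and the LV name is hypothetical):

lvcreate --type raid5 -i 4 -L 16M -n small_extend raid_sanity
lvextend -L +16M raid_sanity/small_extend
# fails on lvm2-2.02.95 with:
#   Internal error: _alloc_init called for non-virtual segment with no disk space.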

Comment 3 Corey Marthaler 2012-06-28 19:43:16 UTC
This should be considered for 6.3.z.

Comment 4 Alasdair Kergon 2012-06-29 01:39:14 UTC
alloc_init is supplied with a number of new extents to find, but not told how to lay them out.  area_count, parity_count, metadata_area_count are all zero => internal error.

These settings should be taken instead from the last segment of the existing device that is being extended.
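
The per-segment properties that should be inherited can be inspected directly; a sketch (VG name as in comment 1):

lvs -a -o lv_name,segtype,stripes,devices raid_sanity
# the last segment of full_extend carries the segment type and stripe count
# that the new allocation needs to reuse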

Comment 5 Alasdair Kergon 2012-06-29 02:22:23 UTC
lvcreate requires -i and validates it against the number of parity devs.
This code is missing from lvresize.

The code to get the number of stripes from the last segment of the existing device is missing from at least two places I can see.


You should add these further test cases that I think will also fail due to this bug:

Specifying invalid values for -i/-m with lvextend: combinations that lvcreate would reject, I think lvextend would accept before going wrong.  And it can't yet handle devices with segments that have different properties anyway, but doesn't realise this...

lvreduce I think can also miscalculate the amount to reduce by.
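
A sketch of those negative tests (assuming an existing 2-stripe raid5 LV vg/lv; the VG, LV, and stripe counts are placeholder values):

lvextend -L +100M -i 9 vg/lv    # more stripes than the VG has PVs for
lvextend -L +100M -i 1 vg/lv    # stripe count that does not match the existing segments
lvreduce -L -100M vg/lv         # check the amount removed matches what was requested
# (the -m case mentioned above would be exercised the same way)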

Comment 6 Corey Marthaler 2012-06-29 20:48:54 UTC
Related reduce bugs are: 836653 (raid1) and 836660 (raid456).

Comment 7 Jonathan Earl Brassow 2012-07-02 22:54:39 UTC
Created attachment 595823 [details]
Patch that fixes problem

This patch fixes the problem.  I'm testing that I haven't made any oversights.  I need to validate anything that is RAID4/5/6 and calls _calc_area_multiple().

Another thing that I found that doesn't work (but does work on regular striping) is to extend the size with a different number of PVs.
Example:
~> lvcreate --type raid5 -L 100M -i 4 -n lv vg
~> lvextend -L +100M -i 2 vg/lv

Comment 8 Jonathan Earl Brassow 2012-07-03 00:06:29 UTC
Created attachment 595841 [details]
Fix version 2 - still not perfect.

Comment 9 benscott 2012-07-06 03:43:33 UTC
I am not certain if this bug covers the problem
I am having, but here goes:

lvs --version
  LVM version:     2.02.95(2) (2012-03-06)
  Library version: 1.02.74 (2012-03-06)
  Driver version:  4.22.0


if I start with three pvs with 2364 extents each:

pvs --units 4m
  PV         VG   Fmt  Attr PSize PFree   
  /dev/sdq   vg1  lvm2 a--  2364  2364
  /dev/sdr   vg1  lvm2 a--  2364  2364
  /dev/sds   vg1  lvm2 a--  2364  2364

Since RAID 5 needs a parity stripe and each stripe needs an extent 
for metadata, I should have 4726 (2 * 2363) extents for a volume. 

But, if I run:

lvcreate --stripes 2 --stripesize 64k --type raid5 --extents 4726 vg1 /dev/sdq /dev/sdr /dev/sds
  Insufficient free space: 7094 extents needed, but only 7092 available

So I run this instead, with 2 fewer extents:

lvcreate --stripes 2  --stripesize 64k --type raid5 --extents 4724 vg1 /dev/sdq /dev/sdr /dev/sds
  Logical volume "lvol0" created

But now I have one wasted extent on each pv:

pvs --units 4m
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdq   vg1  lvm2 a--  2364.00U    1.00U
  /dev/sdr   vg1  lvm2 a--  2364.00U    1.00U
  /dev/sds   vg1  lvm2 a--  2364.00U    1.00U


Yet when I finally execute this to add the 2 extents:
 
lvextend --stripes 2  --stripesize 64k --type raid5 --extents +2 vg1/lvol0 /dev/sdq /dev/sdr /dev/sds

Extending logical volume lvol0 to 18.46 GiB
  Failed to find segment for lvol0_rimage_0 extent 4723
  Failed to find segment for lvol0_rimage_0 extent 4723
  Failed to find segment for lvol0_rimage_0 extent 4723
  Failed to find segment for lvol0_rimage_0 extent 4723
  Failed to find segment for lvol0_rimage_0 extent 4723
  Failed to find segment for lvol0_rimage_0 extent 4723
  Logical volume lvol0 successfully resized

pvs --units 4m
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdq   vg1  lvm2 a--  2364.00U       0U
  /dev/sdr   vg1  lvm2 a--  2364.00U       0U
  /dev/sds   vg1  lvm2 a--  2364.00U       0U


Now all the pvs are used, just like it seems they should 
have been with the first command I tried. Why would the
extend work when the initial creation failed?

Since my program calls lvcreate, it needs to be able to 
determine the maximum lv size, which I am having trouble
with here. If someone could look into this I would
appreciate it.

Thank you.
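
The extent arithmetic behind the numbers above can be sketched as follows (assumptions: 3 PVs of 2364 extents each, raid5 with 2 stripes, and one metadata extent per sub-LV):

PV_EXTENTS=2364
STRIPES=2
IMAGES=$((STRIPES + 1))               # 2 data images plus 1 parity image
RIMAGE=$((PV_EXTENTS - 1))            # 2363 extents per image after the rmeta extent
LV_MAX=$((STRIPES * RIMAGE))          # 4726 usable LV extents
NEEDED=$((IMAGES * (RIMAGE + 1)))     # 7092 extents on disk, exactly what is free
echo "max LV extents: $LV_MAX, PV extents needed: $NEEDED"
# lvcreate asked for 7094, 2 more than the 7092 actually free, so the parity
# accounting appears to be off in the size calculation as well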

Comment 10 Jonathan Earl Brassow 2012-07-06 16:31:22 UTC
Fix committed upstream:

commit 8767435ef847831455fadc1f7e8f4d2d94aef0d5
Author: Jonathan Brassow <jbrassow>
Date:   Tue Jun 26 09:44:54 2012 -0500

    RAID:  Fix extending size of RAID 4/5/6 logical volumes.
    
    Reducing a RAID 4/5/6 LV or extending it with a different number of
    stripes is still not implemented.  This patch covers the "simple" case
    where the LV is extended with the same number of stripes as the orginal.


Concerning comment 9, I think this is more due to the extent calculations that are being done.  Those concerns may be worth their own bug.
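
With that commit applied, extending with the same number of stripes should now work; a minimal check might look like this (a sketch, assuming a VG named vg with at least three free PVs):

lvcreate --type raid5 -i 2 -L 100M -n lv vg
lvextend -L +100M vg/lv      # same stripe count as the original segment
lvs -a -o +devices vg        # the rimage/rmeta sub-LVs should reflect the new size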

Comment 11 Alasdair Kergon 2012-07-06 16:47:16 UTC
The underlying problem here is that parity devices were not being taken into account correctly in calculations.  That needs reviewing and fixing across the whole code base - not just in the places where people happen to spot it and report it.

Comment 12 Alasdair Kergon 2012-07-06 16:54:28 UTC
- What does the case in comment 9 look like on current upstream code now?

Have the "Failed to find segment for lvol0_rimage_0 extent 4723" bits gone now?

Comment 13 Jonathan Earl Brassow 2012-07-16 14:23:13 UTC
In response to comment 9, I don't get any "Failed to find segment for lvol0_rimage_0 extent 4723" error messages.  There are still issues with how size adjustments are made, but that is a separate bug...

[root@bp-01 ~]# pvs --units 4m
  PV         VG      Fmt  Attr PSize     PFree    
  /dev/sda2  vg_bp01 lvm2 a--  38021.00U        0U
  /dev/sdb1  vg      lvm2 a--  59841.00U 59841.00U
  /dev/sdc1  vg      lvm2 a--  59841.00U 59841.00U
  /dev/sdd1  vg      lvm2 a--  59841.00U 59841.00U
[root@bp-01 ~]# lvcreate --type raid5 -l 119680 -i 2 -n lv vg
  Using default stripesize 64.00 KiB
  Insufficient free space: 179525 extents needed, but only 179523 available
[root@bp-01 ~]# lvcreate --type raid5 -l 119678 -i 2 -n lv vg
  Using default stripesize 64.00 KiB
  Logical volume "lv" created
[root@bp-01 ~]# pvs --units 4m
  PV         VG      Fmt  Attr PSize     PFree
  /dev/sda2  vg_bp01 lvm2 a--  38021.00U    0U
  /dev/sdb1  vg      lvm2 a--  59841.00U 1.00U
  /dev/sdc1  vg      lvm2 a--  59841.00U 1.00U
  /dev/sdd1  vg      lvm2 a--  59841.00U 1.00U

Comment 14 Jonathan Earl Brassow 2012-07-24 13:40:44 UTC
Switching back to assigned.  Found a bug that can cause a segfault when trying to add/replace a raid4/5/6 image.  This is a result of the _calc_area_multiple changes made for this bug.  Will have a fix shortly.

Comment 15 Jonathan Earl Brassow 2012-07-25 18:17:53 UTC
In addition to comment 10, this additional commit is also needed:

commit 5555d2a000ed4e3d5a694896f3dc6a7290543f43
Author: Jonathan Brassow <jbrassow>
Date:   Tue Jul 24 19:02:06 2012 -0500

    RAID: Fix segfault when attempting to replace RAID 4/5/6 device
    
    Commit 8767435ef847831455fadc1f7e8f4d2d94aef0d5 allowed RAID 4/5/6
    LV to be extended properly, but introduced a regression in device
    replacement - a critical component of fault tolerance.
    
    When only 1 or 2 drives are being replaced, the 'area_count' needed
    can be equal to the parity_count.  The 'area_multiple' for RAID 4/5/6
    was computed as 'area_count - parity_devs', which could result in
    'area_multiple' being 0.  This would ultimately lead to a division by
    zero error.  Therefore, in calc_area_multiple, it is important to take
    into account the number of areas that are being requested - just as
    we already do in _alloc_init.
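
A replacement of the kind described above can be exercised with something like this (a sketch; the PV names are placeholders and vg/lv is an existing raid4/5/6 LV):

lvconvert --replace /dev/sdb1 vg/lv /dev/sde1
# replacing only one or two images is the case where area_count can equal
# parity_count, which previously led to the division-by-zero segfault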

Comment 20 Nenad Peric 2012-10-02 08:03:50 UTC
Verified with the following software versions:


lvm2-libs-2.02.97-2.el6.x86_64
lvm2-2.02.97-2.el6.x86_64
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-1.02.76-2.el6.x86_64
device-mapper-event-libs-1.02.76-2.el6.x86_64
device-mapper-event-1.02.76-2.el6.x86_64
device-mapper-libs-1.02.76-2.el6.x86_64
kernel-2.6.32-306.el6.x86_64
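
For reference, re-running the raid6 reproducer from comment 1 against these builds would look something like this (a sketch; same VG and LV names as the original report, and the dd step is unnecessary per comment 2):

lvcreate --type raid6 -i 3 -L 12M -n full_extend raid_sanity
lvextend -L 50M raid_sanity/full_extend   # previously failed with the _alloc_init internal error
lvs -a -o +devices raid_sanity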

Comment 21 errata-xmlrpc 2013-02-21 08:10:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

