Bug 1244257 - alloc: Fix lvextend failure when varying stripes.
Summary: alloc: Fix lvextend failure when varying stripes.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1248051 1268411
 
Reported: 2015-07-17 15:16 UTC by George Angelopoulos
Modified: 2019-10-10 09:58 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.140-1.el6
Doc Type: Bug Fix
Doc Text:
When extending a logical volume (LV) where the amount of data stripes was previously lowered, the LV terminated unexpectedly with a segmentation fault due to an error in stripe position detection. With this update, the behavior of a number of functions related to stripe detection has been adjusted to prevent the bug, and the described crash no longer occurs.
Clone Of:
Clones: 1248051
Environment:
Last Closed: 2016-05-11 01:17:57 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2016:0964 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update, last updated 2016-05-10 22:57:40 UTC

Description George Angelopoulos 2015-07-17 15:16:57 UTC
Description of problem:

A segfault was reported when extending an LV with a smaller number of
stripes than originally used.  Under unusual circumstances, the cling
detection code could successfully find a match against the excess
stripe positions and think it had finished prematurely, leading to an
allocation being pursued with a length of zero.

Rename ix_offset to num_positional_areas and move it to struct
alloc_state so that _is_condition() can obtain access to it.


Version-Release number of selected component (if applicable):
lvm2-2.02.87-6.el6.x86_64
lvm2-libs-2.02.87-6.el6.x86_64


How reproducible:
Hasn't been reproduced.


Additional info:

Upstream submitted patch
https://lists.fedorahosted.org/pipermail/lvm2-commits/2015-July/004304.html

Comment 2 Alasdair Kergon 2015-07-20 16:16:59 UTC
Problematic layout reported:

seg1 - start 0 - 4 stripes - len 4 x 65536 = 262144
/pv_manip.c:415         pv0 2: start  65536 len 65536: lvol0(0:0)
/pv_manip.c:415         pv0 4: start 196608 len 65536: lvol0(0:1)
/pv_manip.c:415         pv1 3: start 131712 len 65536: lvol0(0:2)
/pv_manip.c:415         pv1 0: start      0 len 65536: lvol0(0:3)

seg2 - start 262144 - 4 stripes - len 4 x 65536 = 262144
/pv_manip.c:415         pv0 3: start 131072 len 65536: lvol0(262144:0)
/pv_manip.c:415         pv0 5: start 262144 len 65536: lvol0(262144:1)
/pv_manip.c:415         pv1 4: start 197248 len 65536: lvol0(262144:2)
/pv_manip.c:415         pv1 2: start  66176 len 65536: lvol0(262144:3)

seg3 - start 524288 - 2 stripes - len 2 x 640 = 1280
/pv_manip.c:415         pv0 0: start     0 len   640: lvol0(524288:0)
/pv_manip.c:415         pv1 1: start 65536 len   640: lvol0(524288:1)

Unallocated space:
/pv_manip.c:415         pv0 1: start    640 len  64896: NULL(0:0)
/pv_manip.c:415         pv0 6: start 327680 len 101054: NULL(0:0)
/pv_manip.c:415         pv1 5: start 262784 len 165950: NULL(0:0)

followed by lvextend -i2

Comment 3 Nenad Peric 2015-07-29 13:34:26 UTC
To verify this issue, create a striped LV with 4 stripes (-i 4; in Comment #2 it is 32MiB) and then extend it with a stripe count of 2 (-i 2).

Comment 5 Peter Rajnoha 2015-08-03 09:41:58 UTC
To QA:

The easiest way to reproduce this issue and check the fix is to use exactly the layout the customer had. You can use thin volumes to simulate the customer's exact device sizes (the underlying device for testing can be very small; mine is 128MiB, but even smaller will do, since we write very little into the thin volumes representing the PVs and so are unlikely to run out of real space):

# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created

# vgcreate vg /dev/sda
  Volume group "vg" successfully created

# lvcreate -l100%FREE -T vg/pool
  Logical volume "pool" created.

# lvcreate -V 1714940m -T vg/pool -n pv0
  Logical volume "pv0" created.

# lvcreate -V 1714940m -T vg/pool -n pv1
  Logical volume "pv1" created.

# pvcreate --restorefile vgbackup -u 68K1Yf-geMe-fAZK-TUF1-dRvq-w4Zo-vf08jc /dev/vg/pv0
  Couldn't find device with uuid 68K1Yf-geMe-fAZK-TUF1-dRvq-w4Zo-vf08jc.
  Couldn't find device with uuid lPcNxu-9fAE-M3Pe-hzCM-SJLk-wQ37-lYsq84.
  Physical volume "/dev/vg/pv0" successfully created

# pvcreate --restorefile vgbackup -u lPcNxu-9fAE-M3Pe-hzCM-SJLk-wQ37-lYsq84 /dev/vg/pv1
  Couldn't find device with uuid lPcNxu-9fAE-M3Pe-hzCM-SJLk-wQ37-lYsq84.
  Physical volume "/dev/vg/pv1" successfully created

# vgcfgrestore -f vgbackup vgbackup
  Restored volume group vgbackup

# lvextend -L +1G -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
  Size of logical volume vgbackup/backor_pool changed from 2.00 TiB (524288 extents) to 2.00 TiB (524544 extents).
  Logical volume backor_pool successfully resized

# lvextend -l +100%FREE -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
Segmentation fault (core dumped)

(In a version where this issue is fixed, the last step should complete correctly without a segfault.)

Comment 7 Nenad Peric 2015-08-03 11:06:54 UTC
Before patch:

[root@virt-007 backup]# lvextend -l +100%FREE -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
Segmentation fault (core dumped)


After patch (126-1):

[root@virt-007 ~]# lvextend -l +100%FREE -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
  Size of logical volume vgbackup/backor_pool changed from 2.00 TiB (524544 extents) to 3.27 TiB (857468 extents).
  Logical volume backor_pool successfully resized


This fix was VERIFIED with:

lvm2-2.02.126-1.el7.x86_64

Comment 10 Roman Bednář 2016-02-15 12:31:31 UTC
VERIFIED

Before patch:

[root@virt-012 backup]# lvextend -l +100%FREE -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
Segmentation fault (core dumped)


After patch:

[root@virt-010 backup]# lvextend -l +100%FREE -i 2 /dev/vgbackup/backor_pool
  Using stripesize of last segment 128.00 KiB
  Size of logical volume vgbackup/backor_pool changed from 2.00 TiB (524544 extents) to 3.27 TiB (857468 extents).
  Logical volume backor_pool successfully resized.


Tested on:

2.6.32-614.el6.x86_64

lvm2-2.02.141-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
lvm2-libs-2.02.141-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
lvm2-cluster-2.02.141-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
udev-147-2.71.el6    BUILT: Wed Feb 10 14:07:17 CET 2016
device-mapper-1.02.115-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
device-mapper-libs-1.02.115-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
device-mapper-event-1.02.115-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
device-mapper-event-libs-1.02.115-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc1.el6    BUILT: Wed Feb 10 16:52:15 CET 2016
cmirror-2.02.141-2.el6    BUILT: Wed Feb 10 14:49:03 CET 2016

Comment 12 errata-xmlrpc 2016-05-11 01:17:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html

