Bug 967247 - lvconvert fails mirror conversion when space is limited
Summary: lvconvert fails mirror conversion when space is limited
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Linux
Target Milestone: rc
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
Depends On:
Reported: 2013-05-26 01:05 UTC by benscott
Modified: 2013-11-21 23:23 UTC (History)

Fixed In Version: lvm2-2.02.100-4.el6
Doc Type: Bug Fix
Doc Text:
A miscalculation occurred when determining whether enough space was available to add images to a RAID logical volume. This caused errors when adding new images while the available space was close to the required space. The miscalculation has been fixed.
Clone Of:
Last Closed: 2013-11-21 23:23:59 UTC

Attachments (Terms of Use)

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1704 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2013-11-20 21:52:01 UTC

Description benscott 2013-05-26 01:05:15 UTC
root:~# lvs --version
  LVM version:     2.02.98(2) (2012-10-15)
  Library version: 1.02.77 (2012-10-15)
  Driver version:  4.23.1

If a linear volume is converted into a RAID 1 mirror,
the calculation of the needed extents seems to go
awry when space is tight. The lvconvert program also
seems to ignore some physical volumes given on the
command line. Below I start with a volume named
'lvol0' that takes all but one extent of one physical
volume. The other physical volumes are the same size.

root:~# lvs --all -o +seg_pe_ranges
  LV    VG    Attr      LSize   PE Ranges
  lvol0 vgtwo -wi-a---- 5.13g   /dev/sdc:0-5249

root:~# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/sdc   vgtwo lvm2 a--    5.13g   1.00m
  /dev/sdd   vgtwo lvm2 a--    5.13g   5.13g
  /dev/sde   vgtwo lvm2 a--    5.13g   5.13g

Next I run 'lvconvert' specifying /dev/sdc for the
metadata and /dev/sdd for the new mirror image:

root:~# lvconvert --type raid1 -m1 vgtwo/lvol0 /dev/sdc /dev/sdd
  Insufficient free space: 5252 extents needed, but only 5251 available
  Failed to allocate new image components

However, if I run 'lvconvert' without /dev/sdc
(which should be needed for the metadata) and add
another physical volume instead, it works:

root:~# lvconvert --type raid1 -m1 vgtwo/lvol0  /dev/sdd /dev/sde

root:~# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/sdc   vgtwo lvm2 a--    5.13g      0
  /dev/sdd   vgtwo lvm2 a--    5.13g      0
  /dev/sde   vgtwo lvm2 a--    5.13g   5.13g

root:~# lvs --all -o +seg_pe_ranges
  LV               VG    Attr      LSize  Copy%  PE Ranges
  lvol0            vgtwo rwi-a-r-- 5.13g  54.99  lvol0_rimage_0:0-5249 lvol0_rimage_1:0-5249
  [lvol0_rimage_0] vgtwo Iwi-aor-- 5.13g         /dev/sdc:0-5249
  [lvol0_rimage_1] vgtwo Iwi-aor-- 5.13g         /dev/sdd:1-5250
  [lvol0_rmeta_0]  vgtwo ewi-aor-- 1.00m         /dev/sdc:5250-5250
  [lvol0_rmeta_1]  vgtwo ewi-aor-- 1.00m         /dev/sdd:0-0

Note that I needed to specify /dev/sde for its added
space, but lvconvert didn't actually use it. Instead
it put the metadata on /dev/sdc, which was not
specified.

Comment 2 Jonathan Earl Brassow 2013-09-26 15:47:42 UTC
problem seems to be in lib/metadata/lv_manip.c:_sufficient_pes_free()

area_extents_needed already seems to contain the needed data and metadata extent count.  Thus, adding metadata_extents_needed on top of that double-counts the metadata in total_extents_needed.

This is also why the failing command claims to need one more extent than it actually does.

Comment 3 Jonathan Earl Brassow 2013-09-26 16:31:25 UTC
Fix committed upstream:

commit acdc731e83f7ba646a5e3c55398032464605ee58
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Thu Sep 26 11:30:07 2013 -0500

    RAID: Fix _sufficient_pes_free calculation for RAID

    lib/metadata/lv_manip.c:_sufficient_pes_free() was calculating the
    required space for RAID allocations incorrectly due to double
    accounting.  This resulted in failure to allocate when available
    space was tight.

    When RAID data and metadata areas are allocated together, the total
    amount is stored in ah->new_extents and ah->alloc_and_split_meta is
    set.  '_sufficient_pes_free' was adding the necessary metadata extents
    to ah->new_extents without ever checking ah->alloc_and_split_meta.
    This often led to double accounting of the metadata extents.  This
    patch checks 'ah->alloc_and_split_meta' to perform proper calculations
    for RAID.

    This error is only present in the function that checks for the needed
    space, not in the functions that do the actual allocation.

Comment 7 Nenad Peric 2013-10-22 11:31:12 UTC
/dev/sdl was filled to within one PE of full:
lvcreate -l2558 normal -n raid1 /dev/sdl

  PV         Start SSize
  /dev/sdl       0  2558
  /dev/sdl    2558     1

[root@virt-008 ~]# lvconvert --type raid1 -m1 normal/raid1 /dev/sdl /dev/sdm
[root@virt-008 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree 
  /dev/sde   normal     lvm2 a--  10.00g 10.00g
  /dev/sdf   normal     lvm2 a--  10.00g  6.99g
  /dev/sdg   normal     lvm2 a--  10.00g  6.98g
  /dev/sdh   normal     lvm2 a--  10.00g 10.00g
  /dev/sdj              lvm2 a--  10.00g 10.00g
  /dev/sdl   normal     lvm2 a--  10.00g     0 
  /dev/sdm   normal     lvm2 a--  10.00g     0 
  /dev/sdn              lvm2 a--  10.00g 10.00g
  /dev/vda2  vg_virt008 lvm2 a--   7.51g     0 

[root@virt-008 ~]# lvs normal/raid1 -o +seg_pe_ranges
  LV    VG     Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert PE Ranges                                  
  raid1 normal rwi-a-r--- 9.99g                               100.00         raid1_rimage_0:0-2557 raid1_rimage_1:0-2557

No errors popped up.
Marking VERIFIED with lvm2-2.02.100-6.el6.x86_64.

Comment 8 errata-xmlrpc 2013-11-21 23:23:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

