Bug 720883 - Disk with an existing LVM physical volume on it only gets an LVM metadata size of 1M when creating a storage domain
Summary: Disk with an existing LVM physical volume on it only gets an LVM metadata size of 1M when creating a storage domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: vdsm22
Version: 5.8
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Dan Kenigsberg
QA Contact: yeylon@redhat.com
URL:
Whiteboard: storage
Depends On:
Blocks: 729322
 
Reported: 2011-07-13 05:23 UTC by Mark Huth
Modified: 2018-11-27 20:21 UTC
CC List: 14 users

Fixed In Version: vdsm-4.5-65.el5
Doc Type: Bug Fix
Doc Text:
If a disk already had an LVM physical volume defined on it and you defined a storage domain on the disk, Red Hat Enterprise Linux would create an LVM metadata area on the disk of only 1M instead of 100M because of an error in the code. The error has been fixed and now Red Hat Enterprise Linux creates LVM metadata areas that are always 100M in size by default.
Clone Of:
Environment:
Last Closed: 2012-02-21 04:52:53 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Legacy) 59530 0 None None None Never
Red Hat Product Errata RHBA-2012:0169 0 normal SHIPPED_LIVE vdsm bug fix update 2012-02-21 09:51:13 UTC

Description Mark Huth 2011-07-13 05:23:39 UTC
Description of problem:
If a disk to be used for a storage domain has an existing LVM physical volume on it, then vdsm (re)creates the PV with only a 1M LVM metadata area. However, if the disk has no physical volume on it, a 100M LVM metadata area gets created.

Version-Release number of selected component (if applicable):
vdsm22-4.5-63.25.el5_6

How reproducible:
Always

Steps to Reproduce:
1. Create an LVM physical volume on a disk to be used for a RHEV storage domain
# pvs -o+pv_mda_size,pv_mda_free
  PV                          VG                                   Fmt  Attr PSize   PFree  PMdaSize  PMdaFree 
  /dev/rhevhead/rhev-22-data3                                      lvm2 --    50.00G 50.00G   188.00K    93.50K

... the default metadata size is 188K when using a vanilla pvcreate

2. Present this disk to RHEV to use as a new storage domain; vdsm recreates the PV with a 1M metadata area:

# pvs -o+pv_mda_size,pv_mda_free
  PV                          VG                                   Fmt  Attr PSize   PFree  PMdaSize  PMdaFree 
  /dev/rhevhead/rhev-22-data3 15d45b37-bac8-46dc-8393-e39d29a1d7ac lvm2 a-    49.88G 46.00G     1.06M   539.00K

... metadata size is now 1M.

3. Remove the storage domain and existing physical volume so now there is no LVM data on the disk

# pvremove /dev/rhevhead/rhev-22-data3 
  Labels on physical volume "/dev/rhevhead/rhev-22-data3" successfully wiped

4. Present this disk to RHEV again to use as a new storage domain; this time the LVM metadata area is created as 100M:

# pvs -o+pv_mda_size,pv_mda_free
  PV                          VG                                   Fmt  Attr PSize   PFree  PMdaSize  PMdaFree 
  /dev/rhevhead/rhev-22-data3 885fe945-e570-4c54-b806-6fed994893dc lvm2 a-    49.88G 46.00G   100.06M    50.03M


Actual results:
An existing PV gets recreated with a 1M LVM metadata area.

Expected results:
Shouldn't all LVM PVs be created with a 100M metadata area, or at least something larger than 1M?

Additional info:
1M of LVM metadata space is relatively easy to fill up in RHEV when using iSCSI or FC domains. Create a bunch of VMs, add some disks to them, take some snapshots, make some templates, and use COW disks that have LVM LV segments added to them regularly, and you are at risk of filling up the LVM metadata buffer and encountering 'metadata too large for circular buffer' errors, as happened in a recent customer issue.

In vdsm/storage/pv.py:
        if self.initialized:
            # Disk already carries a PV label: wipe and recreate it,
            # but with only a 1M metadata area
            cmd = [constants.EXT_PVCREATE, "--force", "--metadatasize", "1M", self.devname]
        else:
            # Bare LUN: create the PV with the intended 100M metadata area
            cmd = [constants.EXT_PVCREATE, "--metadatasize", "100M", self.devname]

... I am curious as to why only 1M?

Comment 1 Dan Kenigsberg 2011-07-14 18:42:18 UTC
I guess it was plain carelessness of commit c3fa5e7ce1e2f4d249a20c803894d2d7b6413db7 (for bug 514283 - increase the vg metadata size).

and should be fixed by:

http://gerrit.usersys.redhat.com/710

Author: Dan Kenigsberg <danken>
Date:   Thu Jul 14 21:32:57 2011 +0300

    BZ#720883 re-create pv with reasonable metadatasize
    
    No need to have a different metadata size for bare luns and recreated
    pvs.
    
    Change-Id: I820765e86e17d3222e42702fef452830949a6e0d


Would the recent customer issues have been averted if we had this patch?
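
Based on that commit message ("No need to have a different metadata size for bare luns and recreated pvs"), the fixed branch in vdsm/storage/pv.py presumably unifies the two paths on 100M. A minimal sketch, not the verbatim Gerrit change:

        # Sketch: use the same 100M metadata area whether or not a PV
        # label already exists on the device (actual patch may differ)
        if self.initialized:
            # --force is still needed to overwrite the existing PV label
            cmd = [constants.EXT_PVCREATE, "--force", "--metadatasize", "100M", self.devname]
        else:
            cmd = [constants.EXT_PVCREATE, "--metadatasize", "100M", self.devname]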

Comment 4 Mark Huth 2011-07-21 05:18:53 UTC
Running 'vgs -o vg_name,vg_tags,vg_mda_size,vg_mda_free' on a hypervisor will show if any of the storage domains are susceptible (like e0bb0e61-b274-473f-9392-e741629c2489 here):

[root@h2 ~]# vgs -o vg_name,vg_tags,vg_mda_size,vg_mda_free 
  VG                                   VG Tags             VMdaSize  VMdaFree 
  563c53f3-9c11-4852-922e-9350cf5bbdd4 RHAT_storage_domain   100.06M    50.02M
  HostVG                                                     188.00K    91.00K
  e0bb0e61-b274-473f-9392-e741629c2489 RHAT_storage_domain     1.06M   538.50K

Comment 5 Dan Kenigsberg 2011-07-21 07:41:52 UTC
To test the patch from comment 1, create storage domains out of raw LUNs and out of pre-existing PVs. In all cases, the metadata size as reported by

vgs -o vg_name,vg_tags,vg_mda_size,vg_mda_free 

should be 100M (and never 1M).
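
As an illustration of the check described in comments 4 and 5, a small script along these lines could flag susceptible VGs automatically. This is a sketch, not part of vdsm; the RHAT_storage_domain tag and the 100M threshold come from the vgs output above:

    #!/usr/bin/env python
    # Sketch: list storage-domain VGs whose LVM metadata area is under 100M.
    # Parses the same vgs fields used in comments 4 and 5.
    import subprocess

    p = subprocess.Popen(
        ["vgs", "--noheadings", "--nosuffix", "--units", "m",
         "--separator", "|", "-o", "vg_name,vg_tags,vg_mda_size"],
        stdout=subprocess.PIPE)
    out, _ = p.communicate()

    for line in out.splitlines():
        name, tags, mda_size = [f.strip() for f in line.split("|")]
        # Storage domains are tagged RHAT_storage_domain (see comment 4)
        if "RHAT_storage_domain" in tags and float(mda_size) < 100:
            print "%s: metadata area only %sM - susceptible" % (name, mda_size)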

Comment 7 Daniel Paikov 2011-07-25 09:14:02 UTC
Checked on 4.5-65.

Comment 9 Kate Grainger 2011-08-24 05:12:27 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
If a disk already had an LVM physical volume defined on it and you defined a storage doman on the disk, Red Hat Enterprise Linux would create an LVM metadata area on the disk of only 1M instead of 100M because of an error in the code. The error has been fixed and now Red Hat Enterprise Linux creates LVM metadata areas that are always 100M in size by default.

Comment 10 Kate Grainger 2011-08-24 05:13:36 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1 +1 @@
-If a disk already had an LVM physical volume defined on it and you defined a storage doman on the disk, Red Hat Enterprise Linux would create an LVM metadata area on the disk of only 1M instead of 100M because of an error in the code. The error has been fixed and now Red Hat Enterprise Linux creates LVM metadata areas that are always 100M in size by default.+If a disk already had an LVM physical volume defined on it and you defined a storage domain on the disk, Red Hat Enterprise Linux would create an LVM metadata area on the disk of only 1M instead of 100M because of an error in the code. The error has been fixed and now Red Hat Enterprise Linux creates LVM metadata areas that are always 100M in size by default.

Comment 11 errata-xmlrpc 2012-02-21 04:52:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0169.html

