Bug 822248 - VG containing raid volume should not be allowed to be converted to a clustered VG
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-16 19:49 UTC by Corey Marthaler
Modified: 2013-02-21 08:10 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
RAID logical volumes can become corrupted if they are activated in a clustered volume group. Therefore, a volume group that contains RAID logical volumes is no longer allowed to be changed into a clustered volume group.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:10:00 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2013:0501 SHIPPED_LIVE: lvm2 bug fix and enhancement update (2013-02-20 21:30:45 UTC)

Description Corey Marthaler 2012-05-16 19:49:53 UTC
Description of problem:
[root@hayes-01 ~]# pvscan
  PV /dev/etherd/e1.1p1   lvm2 [4.43 TiB]
  PV /dev/etherd/e1.1p2   lvm2 [4.43 TiB]

[root@hayes-01 ~]# vgcreate test /dev/etherd/e1.1p*
  Clustered volume group "test" successfully created

[root@hayes-01 ~]# vgchange -cn test
  Volume group "test" successfully changed

[root@hayes-01 ~]# lvcreate --type raid1 -m 1 -n raid -L 100M test
  Logical volume "raid" created

[root@hayes-01 ~]# vgchange -cy test
  RAID logical volumes must be inactive when changing the cluster attribute.

[root@hayes-01 ~]# vgchange -an test
  0 logical volume(s) in volume group "test" now active

[root@hayes-01 ~]# vgchange -cy test
  Volume group "test" successfully changed

[root@hayes-01 ~]# vgchange -ay test
  1 logical volume(s) in volume group "test" now active

Version-Release number of selected component (if applicable):
2.6.32-269.el6.x86_64
lvm2-2.02.95-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
lvm2-libs-2.02.95-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
lvm2-cluster-2.02.95-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
device-mapper-libs-1.02.74-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
device-mapper-event-1.02.74-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
device-mapper-event-libs-1.02.74-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012
cmirror-2.02.95-9.el6    BUILT: Wed May 16 10:34:14 CDT 2012

Comment 3 Jonathan Earl Brassow 2012-10-03 20:49:34 UTC
Unit test (a VG cannot be changed to clustered while a RAID LV is present, whether active or inactive):

[root@hayes-02 lvm2]# lvcreate --type raid1 -L 100M -n lv vg 
  Logical volume "lv" created

[root@hayes-02 lvm2]# vgchange -cy vg
  RAID logical volumes are not allowed in a cluster volume group.

[root@hayes-02 lvm2]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg           8   1   0 wz--n-  5.07t 5.07t
  vg_hayes02   1   3   0 wz--n- 74.01g    0 

[root@hayes-02 lvm2]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active

[root@hayes-02 lvm2]# vgchange -cy vg
  RAID logical volumes are not allowed in a cluster volume group.

[root@hayes-02 lvm2]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg           8   1   0 wz--n-  5.07t 5.07t
  vg_hayes02   1   3   0 wz--n- 74.01g    0

Comment 4 Jonathan Earl Brassow 2012-10-03 20:57:48 UTC
commit 9efd3fb604ae34803bb2d4c94080c57d3efdba81
Author: Jonathan Brassow <jbrassow>
Date:   Wed Oct 3 15:52:54 2012 -0500

    RAID:  Do not allow RAID LVs in a cluster volume group.
    
    It would be possible to activate a RAID LV exclusively in a cluster
    volume group, but for now we do not allow RAID LVs to exist in a
    clustered volume group at all.  This has two components:
        1) Do not allow RAID LVs to be created in a clustered VG
        2) Do not allow changing a VG from single-machine to clustered
           if there are RAID LVs present.

Comment 6 Nenad Peric 2012-10-30 09:01:10 UTC
Verified normal behavior with:

lvm2-2.02.98-2.el6.x86_64

Comment 7 errata-xmlrpc 2013-02-21 08:10:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

