Bug 865035 - lvchange/vgchange: --[add|del]tag arguments should be allowed when the LV/VG is partial
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 824153
 
Reported: 2012-10-10 16:40 UTC by Jonathan Earl Brassow
Modified: 2013-02-21 08:14 UTC
CC List: 12 users

Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
Previously, when a device was missing from a volume group or logical volume, it was impossible to add or remove tags from the logical volume. If the activation of the logical volume was based on tagging and the 'volume_list' parameter in the configuration file (lvm.conf), it would be impossible to activate a logical volume. This is an important case because it affects HA-LVM. Without the ability to add or remove tags while a device was missing, it was impossible to use RAID logical volumes with HA-LVM.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:14:27 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  System ID:     Red Hat Product Errata RHBA-2013:0501
  Priority:      normal
  Status:        SHIPPED_LIVE
  Summary:       lvm2 bug fix and enhancement update
  Last Updated:  2013-02-20 21:30:45 UTC

Description Jonathan Earl Brassow 2012-10-10 16:40:32 UTC
The --[add|del]tag arguments for vgchange and lvchange should be allowed to update metadata even when the VG/LV is "partial" (i.e. has missing PVs).
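
For illustration, a minimal sketch of the intended behavior; the VG/LV names and the tag are hypothetical:

    # With a PV missing, the VG is "partial".  Limited metadata updates
    # such as tag changes should still be permitted:
    vgchange --addtag ha_site1 example_vg
    lvchange --addtag ha_site1 example_vg/example_lv
    lvchange --deltag ha_site1 example_vg/example_lv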

The rationale for this is given in the upstream commit message:
commit 3501f17fd0fcec2a1fbb8aeecf228e86ee022d99
Author: Jonathan Brassow <jbrassow>
Date:   Wed Oct 10 11:33:10 2012 -0500

    [lv|vg]change:  Allow limited metadata changes when PVs are missing
    
    A while back, the behavior of LVM changed from allowing metadata changes
    when PVs were missing to not allowing changes.  Until recently, this
    change was tolerated by HA-LVM by forcing a 'vgreduce --removemissing'
    before trying (again) to add tags to an LV and then activate it.  LVM
    mirroring requires that failed devices are removed anyway, so this was
    largely harmless.  However, RAID LVs do not require devices to be removed
    from the array in order to be activated.  In fact, in an HA-LVM
    environment this would be very undesirable.  Device failures in such an
    environment can often be transient and it would be much better to restore
    the device to the array than synchronize an entirely new device.
    
    There are two methods that can be used to set up an HA-LVM environment:
    "clvm" or "tagging".  For RAID LVs, "clvm" is out of the question because
    RAID LVs are not supported in clustered VGs - not even in an exclusively
    activated manner.  That leaves "tagging".  HA-LVM uses tagging - coupled
    with 'volume_list' - to ensure that only one machine can have an LV active
    at a time.  If updates are not allowed when a PV is missing, it is
    impossible to add or remove tags to allow for activation.  This removes
    one of the most basic functionalities of HA-LVM - site redundancy.  If
    mirroring or RAID is used to replicate the storage in two data centers
    and one of them goes down, a server and a storage device are lost.  When
    the service fails over to the alternate site, the VG will be "partial".
    Unable to add a tag to the VG/LV, the RAID device will be unable to
    activate.
    
    The solution is to allow vgchange and lvchange to alter the LVM metadata
    for a limited set of options - --[add|del]tag included.  The set of
    allowable options are ones that do not cause changes to the DM kernel
    target (like --resync would) or could alter the structure of the LV
    (like allocation or conversion).
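
For context, a minimal sketch of the tagging-plus-'volume_list' mechanism described above; the tag "site_a" and the VG/LV names are hypothetical:

    # /etc/lvm/lvm.conf (activation section):
    activation {
        # Only VGs/LVs matching an entry here may be activated on this
        # host; "@site_a" matches any VG or LV carrying the tag "site_a".
        volume_list = [ "VolGroup", "@site_a" ]
    }

    # HA-LVM gates activation by adding and removing the tag:
    lvchange --addtag site_a example_vg/example_lv   # claim the LV
    lvchange -ay example_vg/example_lv               # activation now allowed
    lvchange -an example_vg/example_lv               # release: deactivate...
    lvchange --deltag site_a example_vg/example_lv   # ...and drop the tag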

Comment 1 Jonathan Earl Brassow 2012-10-10 16:45:08 UTC
The ability to use RAID LVs in an HA-LVM environment (bug 824153) has two components:
   1) The LVM specific changes outlined in this bug
   2) The resource-agent changes necessary in bug 824153

This bug was created to ensure that the necessary LVM changes are picked up.

Comment 2 Jonathan Earl Brassow 2012-10-11 15:24:51 UTC
The unit tests covering this change can be found in lvm2/test/shell/*change-partial.sh.

To test this change, do the following (a consolidated shell sketch follows the list):
1) Create a volume group.
2) Create a RAID LV (lvcreate --type raid1 -m 1 -l 2 -n $lv1 $vg).
3) Deactivate the LV (lvchange -an $vg/$lv1).
4) Disable one of the RAID devices.
5) Perform "tag" operations on the LV (lvchange --addtag foo $vg/$lv1).
6) Perform "tag" operations on the VG (vgchange --addtag foo $vg).

The tag operations will succeed if the change has been implemented; otherwise, they will fail.
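
A minimal consolidated sketch of the steps above; the PV names ($dev1, $dev2) and the method used to disable a device are assumptions that depend on the test setup:

    vg=test_vg
    lv1=test_lv

    # 1) Create the volume group from two PVs
    vgcreate $vg $dev1 $dev2

    # 2) Create a 2-way RAID1 LV
    lvcreate --type raid1 -m 1 -l 2 -n $lv1 $vg

    # 3) Deactivate the LV
    lvchange -an $vg/$lv1

    # 4) Disable one of the RAID devices (one possible method, valid for
    #    SCSI devices; the sysfs path varies with device type)
    echo offline > /sys/block/$(basename $dev2)/device/state

    # 5) Tag operation on the LV: succeeds despite the partial VG
    lvchange --addtag foo $vg/$lv1

    # 6) Tag operation on the VG: also succeeds
    vgchange --addtag foo $vg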

Comment 4 Nenad Peric 2012-12-20 15:29:17 UTC
Tested with raid1, raid4, raid5 and raid6


output from raid4:

(09:13:17) [root@r6-node01:~]$ lvs -a -o +lv_tags,devices
  Couldn't find device with uuid jOoMPR-MO4F-9qlc-D9fP-aOmU-2zB6-KnYhDc.
  LV                 VG       Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert LV Tags Devices                                                                        
  lv_root            VolGroup -wi-ao---  7.54g                                                      /dev/vda2(0)                                                                   
  lv_swap            VolGroup -wi-ao---  1.97g                                                      /dev/vda2(1930)                                                                
  test_lv            test_vg  rwi-a-r-p 12.00m                               100.00         foo     test_lv_rimage_0(0),test_lv_rimage_1(0),test_lv_rimage_2(0),test_lv_rimage_3(0)
  [test_lv_rimage_0] test_vg  iwi-aor--  4.00m                                                      /dev/sda1(1)                                                                   
  [test_lv_rimage_1] test_vg  iwi-a-r-p  4.00m                                                      unknown device(1)                                                              
  [test_lv_rimage_2] test_vg  iwi-aor--  4.00m                                                      /dev/sdc1(1)                                                                   
  [test_lv_rimage_3] test_vg  iwi-aor--  4.00m                                                      /dev/sdd1(1)                                                                   
  [test_lv_rmeta_0]  test_vg  ewi-aor--  4.00m                                                      /dev/sda1(0)                                                                   
  [test_lv_rmeta_1]  test_vg  ewi-a-r-p  4.00m                                                      unknown device(0)                                                              
  [test_lv_rmeta_2]  test_vg  ewi-aor--  4.00m                                                      /dev/sdc1(0)                                                                   
  [test_lv_rmeta_3]  test_vg  ewi-aor--  4.00m                                                      /dev/sdd1(0)            

The tag was added successfully after the device was disabled.

Tagging operations work on partial LVs, both active and inactive.

Installed packages:

lvm2-2.02.98-6.el6.x86_64
device-mapper-1.02.77-6.el6.x86_64
kernel-2.6.32-347.el6.x86_64

Comment 5 errata-xmlrpc 2013-02-21 08:14:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

