Bug 865035 - lvchange/vgchange: --[add|del]tag arguments should be allowed when the LV/VG is partial
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Depends On:
Blocks: 824153
Reported: 2012-10-10 12:40 EDT by Jonathan Earl Brassow
Modified: 2013-02-21 03:14 EST (History)
CC: 12 users

See Also:
Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
Previously, when a device was missing from a volume group or logical volume, it was impossible to add or remove tags on the logical volume or volume group. If activation of a logical volume depended on tagging and the 'volume_list' parameter in the configuration file (lvm.conf), the logical volume could not be activated. This case is important because it affects HA-LVM: without the ability to add or remove tags while a device was missing, RAID logical volumes could not be used with HA-LVM. With this update, vgchange and lvchange allow a limited set of metadata changes, including adding and removing tags, while physical volumes are missing.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 03:14:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jonathan Earl Brassow 2012-10-10 12:40:32 EDT
The --[add|del]tag arguments for vgchange and lvchange should be allowed to update metadata even when "partial".

The necessity for this is given in the upstream commit message here:
commit 3501f17fd0fcec2a1fbb8aeecf228e86ee022d99
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Wed Oct 10 11:33:10 2012 -0500

    [lv|vg]change:  Allow limited metadata changes when PVs are missing
    
    A while back, the behavior of LVM changed from allowing metadata changes
    when PVs were missing to not allowing changes.  Until recently, this
    change was tolerated by HA-LVM by forcing a 'vgreduce --removemissing'
    before trying (again) to add tags to an LV and then activate it.  LVM
    mirroring requires that failed devices are removed anyway, so this was
    largely harmless.  However, RAID LVs do not require devices to be removed
    from the array in order to be activated.  In fact, in an HA-LVM
    environment this would be very undesirable.  Device failures in such an
    environment can often be transient and it would be much better to restore
    the device to the array than synchronize an entirely new device.
    
    There are two methods that can be used to setup an HA-LVM environment:
    "clvm" or "tagging".  For RAID LVs, "clvm" is out of the question because
    RAID LVs are not supported in clustered VGs - not even in an exclusively
    activated manner.  That leaves "tagging".  HA-LVM uses tagging - coupled
    with 'volume_list' - to ensure that only one machine can have an LV active
    at a time.  If updates are not allowed when a PV is missing, it is
    impossible to add or remove tags to allow for activation.  This removes
    one of the most basic functionalities of HA-LVM - site redundancy.  If
    mirroring or RAID is used to replicate the storage in two data centers
    and one of them goes down, a server and a storage device are lost.  When
    the service fails-over to the alternate site, the VG will be "partial".
    Unable to add a tag to the VG/LV, the RAID device will be unable to
    activate.
    
    The solution is to allow vgchange and lvchange to alter the LVM metadata
    for a limited set of options - --[add|del]tag included.  The set of
    allowable options are ones that do not cause changes to the DM kernel
    target (like --resync would) or could alter the structure of the LV
    (like allocation or conversion).
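
As a concrete illustration of the tagging scheme described in the commit message (not part of this bug report), an HA-LVM node typically carries a host tag and restricts activation with 'volume_list' in lvm.conf; the volume group and hostname below are hypothetical examples:

    # Excerpt from /etc/lvm/lvm.conf on one cluster node (hypothetical values).
    # "@node1.example.com" permits activation only of VGs/LVs tagged with this
    # node's hostname; the resource agent adds/removes that tag on fail-over.
    activation {
        volume_list = [ "VolGroup", "@node1.example.com" ]
    }
    tags {
        hosttags = 1
    }

With this layout, a fail-over amounts to removing the tag on the old node and adding it on the new one, which is exactly the metadata change that must remain possible while the VG is partial.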
Comment 1 Jonathan Earl Brassow 2012-10-10 12:45:08 EDT
The ability to use RAID LVs in an HA-LVM environment (bug 824153) has two components:
   1) The LVM specific changes outlined in this bug
   2) The resource-agent changes necessary in bug 824153

This bug is created to ensure that the necessary LVM changes are picked up.
Comment 2 Jonathan Earl Brassow 2012-10-11 11:24:51 EDT
Unit testing performed can be found in lvm2/test/shell/*change-partial.sh.

To test this change, do the following:
1) Create a volume group
2) Create a RAID LV (lvcreate --type raid1 -m 1 -l 2 -n $lv1 $vg)
3) Deactivate the LV (lvchange -an $vg/$lv1)
4) Disable one of the devices backing the RAID LV
5) Perform "tag" operations on the LV (lvchange --addtag foo $vg/$lv1)
6) Perform "tag" operations on the VG (vgchange --addtag foo $vg)

The tag operations will succeed if the change has been implemented.  Otherwise, the tag operations will fail.
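
The steps above can be sketched as a shell session. The device names and the mechanism used to disable a leg (offlining the SCSI device, as a stand-in for a transient failure) are assumptions; the commands require root and scratch block devices:

```shell
# 1) Create a volume group from two scratch PVs (hypothetical devices).
vgcreate test_vg /dev/sdb1 /dev/sdc1

# 2) Create a 2-way RAID1 LV.
lvcreate --type raid1 -m 1 -l 2 -n test_lv test_vg

# 3) Deactivate the LV.
lvchange -an test_vg/test_lv

# 4) Simulate a failed leg by offlining the second PV's disk.
echo offline > /sys/block/sdc/device/state

# 5) Tag operations on the now-partial LV should succeed with the fix.
lvchange --addtag foo test_vg/test_lv

# 6) Tag operations on the partial VG should also succeed.
vgchange --addtag foo test_vg
```

Before the fix, steps 5 and 6 fail with an error about the VG being partial; with the fix they update the metadata on the remaining PVs.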
Comment 4 Nenad Peric 2012-12-20 10:29:17 EST
Tested with raid1, raid4, raid5 and raid6


output from raid4:

(09:13:17) [root@r6-node01:~]$ lvs -a -o +lv_tags,devices
  Couldn't find device with uuid jOoMPR-MO4F-9qlc-D9fP-aOmU-2zB6-KnYhDc.
  LV                 VG       Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert LV Tags Devices                                                                        
  lv_root            VolGroup -wi-ao---  7.54g                                                      /dev/vda2(0)                                                                   
  lv_swap            VolGroup -wi-ao---  1.97g                                                      /dev/vda2(1930)                                                                
  test_lv            test_vg  rwi-a-r-p 12.00m                               100.00         foo     test_lv_rimage_0(0),test_lv_rimage_1(0),test_lv_rimage_2(0),test_lv_rimage_3(0)
  [test_lv_rimage_0] test_vg  iwi-aor--  4.00m                                                      /dev/sda1(1)                                                                   
  [test_lv_rimage_1] test_vg  iwi-a-r-p  4.00m                                                      unknown device(1)                                                              
  [test_lv_rimage_2] test_vg  iwi-aor--  4.00m                                                      /dev/sdc1(1)                                                                   
  [test_lv_rimage_3] test_vg  iwi-aor--  4.00m                                                      /dev/sdd1(1)                                                                   
  [test_lv_rmeta_0]  test_vg  ewi-aor--  4.00m                                                      /dev/sda1(0)                                                                   
  [test_lv_rmeta_1]  test_vg  ewi-a-r-p  4.00m                                                      unknown device(0)                                                              
  [test_lv_rmeta_2]  test_vg  ewi-aor--  4.00m                                                      /dev/sdc1(0)                                                                   
  [test_lv_rmeta_3]  test_vg  ewi-aor--  4.00m                                                      /dev/sdd1(0)            

The tag was added successfully after the device was disabled.

Tagging operations work on partial LVs, both active and inactive.

Installed packages:

lvm2-2.02.98-6.el6.x86_64
device-mapper-1.02.77-6.el6.x86_64
kernel-2.6.32-347.el6.x86_64
Comment 5 errata-xmlrpc 2013-02-21 03:14:27 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html
