Bug 1020877 - lvm2 mishandles implicit availability changes
Summary: lvm2 mishandles implicit availability changes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-18 12:23 UTC by Zdenek Kabelac
Modified: 2014-10-14 08:24 UTC (History)
11 users

Fixed In Version: lvm2-2.02.107-1.el6
Doc Type: Bug Fix
Doc Text:
Cause: In a cluster, many volume types (snapshots, thin volumes, and so on) support only exclusive activation, and the lvm2 tools automatically converted any non-exclusive activation into an exclusive activation. Consequence: Such a conversion is not always correct: when a user requested local activation, the volume could have been activated exclusively on a different node in the cluster. Fix: Local activation is now properly converted into local-exclusive activation. Result: When the local activation succeeds, the volume is locally and exclusively active.
Clone Of:
Environment:
Last Closed: 2014-10-14 08:24:50 UTC
Target Upstream Version:
Embargoed:
nperic: needinfo-


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2014:1387
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2014-10-14 01:39:47 UTC

Description Zdenek Kabelac 2013-10-18 12:23:14 UTC
Description of problem:

Activation and deactivation logic for the 'local' flag is not working properly.

Current state:
For some volume types (snapshots/origins, thins, raids), local activation is implicitly converted into exclusive activation. This is a bug: the user requested local activation, and lvm.conf tags may cause the device to be activated exclusively on a different node.

Desired state:
Local activation should be implicitly converted (for the selected types) to local-exclusive activation, which may fail if, for example, the tags setting prevents exclusive activation on the local node.

--

We hit a similar problem on deactivation as well, where even a non-clustered VG is affected. Local deactivation is refused in a non-clustered VG, and in a clustered VG it is converted to plain deactivation.

Desired state:

In a non-clustered VG, deactivation always needs to work (-aln == -an).
In a clustered VG, we may deactivate an LV only if it is activated locally, so an exclusively activated snapshot on a different node must stay running for 'lvchange -aln', and the command needs to return an error.
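
A minimal sketch of the desired semantics described above (illustrative only: the VG and LV names "cvg", "snap", "vg00" and "lvol0" are hypothetical, and the expected exit codes reflect the requested behaviour, not the current code):

# Hypothetical clustered VG "cvg" with a snapshot LV "snap" that is already
# exclusively active on another node.

# Local activation should become local-exclusive activation and fail here,
# because the snapshot is held exclusively elsewhere.
lvchange -aly cvg/snap; echo $?     # expected: non-zero

# Local deactivation must not touch the exclusively activated snapshot on
# the other node, and the command should return an error.
lvchange -aln cvg/snap; echo $?     # expected: non-zero

# Hypothetical non-clustered VG "vg00": -aln must behave exactly like -an.
lvchange -aln vg00/lvol0            # equivalent to: lvchange -an vg00/lvol0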


Version-Release number of selected component (if applicable):
<=2.02.102


Comment 1 Zdenek Kabelac 2013-12-05 14:04:47 UTC
This has been improved with the following upstream patch:
https://www.redhat.com/archives/lvm-devel/2013-November/msg00002.html

Comment 3 Nenad Peric 2014-07-30 12:35:18 UTC
Tested activation and deactivation of raid1, thin pools, ordinary LVs and snapshots of ordinary LVs and thin LVs. 

There was a small issue with LVM behavior when an already inactive LV was told to deactivate; I opened a separate bug for it (Bug #1124766).

Additional issues with the commands handling activation/deactivation are:

-aen == -an, which may not be what a user wants (it deactivates the LV on ALL the nodes, even though there may be no exclusively activated LVs).

-aly fails silently and returns 0 (even though it does not do anything) when the volume_list check does not pass:

[root@virt-064 ~]# grep "  volume_list" /etc/lvm/lvm.conf
    volume_list = [ "vg1", "@tag1", "cluster/linear" ]
[root@virt-064 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  linear  cluster    -wi-------   1.00g                                                     
  lvol1   cluster    Vwi---tz-k  10.00g pool thin_lv                                        
  pool    cluster    twi---tz--   2.00g              0.00   1.17                            
  raid1   cluster    rwi---r---   2.00g                                                     
  thin_lv cluster    Vwi-a-tz--  10.00g pool         0.00                                   
  lv_root vg_virt064 -wi-ao----   6.71g                                                     
  lv_swap vg_virt064 -wi-ao---- 816.00m                                                     
[root@virt-064 ~]# lvchange -aly cluster/raid1
[root@virt-064 ~]# echo $?
0
[root@virt-064 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  linear  cluster    -wi-------   1.00g                                                     
  lvol1   cluster    Vwi---tz-k  10.00g pool thin_lv                                        
  pool    cluster    twi---tz--   2.00g              0.00   1.17                            
  raid1   cluster    rwi---r---   2.00g                                                     
  thin_lv cluster    Vwi-a-tz--  10.00g pool         0.00                                   
  lv_root vg_virt064 -wi-ao----   6.71g                                                     
  lv_swap vg_virt064 -wi-ao---- 816.00m                                                     
[root@virt-064 ~]# 


It should at least warn the user that it did not actually DO anything.

Should this all be split into more bug reports, or can it be handled inside this one, which is related to handling the availability changes?
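
As a side note, a minimal sketch of a workaround (my own suggestion, not part of the fix): because the exit status of 'lvchange -aly' is not informative here, the activation state can be double-checked from the attr bits reported by lvs (the fifth character is 'a' for an active LV). The LV cluster/raid1 is the one from the session above.

# Re-run the activation, then check the fifth lv_attr character ('a' = active).
lvchange -aly cluster/raid1
attr=$(lvs --noheadings -o lv_attr cluster/raid1 | tr -d ' ')
if [ "${attr:4:1}" = "a" ]; then
    echo "cluster/raid1 is active"
else
    echo "WARNING: lvchange returned 0 but cluster/raid1 is not active" >&2
fi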

Comment 4 Nenad Peric 2014-07-30 12:40:32 UTC
Changing the needinfo to another address.

Comment 5 Nenad Peric 2014-08-07 10:47:16 UTC
Marking this bug as VERIFIED since the intended behaviour can be observed.
However, I will open a new bug for the odd CLI argument behaviour noted above.

Marking it verified with:

lvm2-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
lvm2-libs-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
lvm2-cluster-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-libs-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-event-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-event-libs-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014

Comment 6 errata-xmlrpc 2014-10-14 08:24:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

