Bug 1685301 - native vdo creation|activation should not be allowed in shared activation mode
Summary: native vdo creation|activation should not be allowed in shared activation mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-04 22:42 UTC by Corey Marthaler
Modified: 2021-09-07 11:48 UTC
CC List: 8 users

Fixed In Version: lvm2-2.03.11-0.2.20201103git8801a86.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:01:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2019-03-04 22:42:49 UTC
Description of problem:
Online: [ host-083 host-084 host-085 ]

Full list of resources:

 fence-host-083 (stonith:fence_xvm):    Started host-083
 fence-host-084 (stonith:fence_xvm):    Started host-084
 fence-host-085 (stonith:fence_xvm):    Started host-085
 Clone Set: locking-clone [locking]
     Started: [ host-083 host-084 host-085 ]
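
For context, activator1 below is a shared (lvmlockd-managed) VG; the locking-clone resource above runs the lock manager on all three nodes. A minimal setup sketch, assuming a cluster like the one shown (the device path is a placeholder, not taken from this bug):

# Hypothetical setup sketch: create a shared VG managed by lvmlockd,
# then start its lockspace on each node that will use it.
vgcreate --shared activator1 /dev/sdb1
vgchange --lockstart activator1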


# This should *not* be supported w/ shared activation
[root@host-083 ~]# lvcreate --activate sy --type vdo -n my_vdo -L 4G activator1
  Logical volume "my_vdo" created.

# This *should* be supported w/ shared activation
[root@host-083 ~]# lvcreate --activate ey --type vdo -n my_vdo2 -L 4G activator1
  Logical volume "my_vdo2" created.


# Other volume types attempted w/ and w/o shared activation

# Works w/ shared activation
[root@host-083 ~]# lvcreate --activate sy --type linear -n my_linear -L 100M activator1
  Logical volume "my_linear" created.
[root@host-083 ~]# lvcreate --activate sy --type striped -n my_stripe -L 100M activator1
  Logical volume "my_stripe" created.
[root@host-083 ~]# lvcreate --activate sy --type mirror -m 1 -n my_miror -L 100M activator1
  Logical volume "my_miror" created.

# Should *not* work w/ shared activation
[root@host-083 ~]# lvcreate --activate sy --type raid1 -n my_raid -L 100M activator1
  Shared activation not compatible with LV type raid1 of activator1/my_raid
  Failed to lock logical volume activator1/my_raid.
  Failed to activate new LV activator1/my_raid.
[root@host-083 ~]# lvcreate --activate sy --thinpool my_pool -L 100M activator1
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Shared activation not compatible with LV type thin-pool of activator1/my_pool
  Failed to lock logical volume activator1/my_pool.
  Failed to activate new LV activator1/my_pool.

# Should work w/ exclusive activation
[root@host-083 ~]# lvcreate --activate ey --thinpool my_pool -L 100M activator1
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "my_pool" created.
[root@host-083 ~]# lvcreate --activate ey --type raid1 -n my_raid -L 100M activator1
  Logical volume "my_raid" created.


Version-Release number of selected component (if applicable):
kernel-4.18.0-74.el8    BUILT: Wed Feb 27 12:52:17 CST 2019
lvm2-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-libs-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-dbusd-2.03.02-6.el8    BUILT: Fri Feb 22 04:50:28 CST 2019
lvm2-lockd-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
boom-boot-0.9-7.el8    BUILT: Mon Jan 14 14:00:54 CST 2019
cmirror-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-persistent-data-0.7.6-1.el8    BUILT: Sun Aug 12 04:21:55 CDT 2018
sanlock-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018
sanlock-lib-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018

Comment 1 David Teigland 2020-09-29 19:53:12 UTC
pushed to master
https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=82e270c18a76d68e2efc28a194bca2e428c18fae

$ lvcreate --type vdo -n vv -L 5G test
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vv" created.

$ lvchange -an test/vv

$ lvchange -asy test/vv
  Shared activation not compatible with LV type vdo-pool of test/vpool0
  Failed to lock logical volume test/vv.
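
With the fix, a VDO LV in a shared VG can still be used; it just has to be activated exclusively on one host. A short usage sketch against the same test/vv LV from the transcript above:

# Exclusive activation is the supported path for VDO LVs in a shared VG
lvchange -aey test/vv
# ... use the volume on this host only, then deactivate ...
lvchange -an test/vv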

Comment 7 Corey Marthaler 2020-12-16 01:42:17 UTC
Fix verified in the latest rpms.

kernel-4.18.0-259.el8.dt4    BUILT: Sat Dec 12 14:40:07 CST 2020
lvm2-2.03.11-0.3.20201210git9fe7aba.el8    BUILT: Thu Dec 10 09:44:53 CST 2020
lvm2-libs-2.03.11-0.3.20201210git9fe7aba.el8    BUILT: Thu Dec 10 09:44:53 CST 2020

[root@host-087 ~]# lvcreate --activate sy --type vdo -n my_vdo -L 6G testvg
WARNING: vdo signature detected on /dev/testvg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/testvg/vpool0.
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Shared activation not compatible with LV type vdo-pool of testvg/vpool0
  Failed to lock logical volume testvg/my_vdo.
  Failed to activate new LV testvg/my_vdo.

[root@host-087 ~]# lvcreate --activate ey --type vdo -n my_vdo -L 6G testvg
WARNING: vdo signature detected on /dev/testvg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/testvg/vpool0.
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "my_vdo" created.

[root@host-087 ~]# lvs -a -o +devices
  LV              VG       Attr       LSize   Pool   Origin Data%   Devices        
  my_vdo          testvg   vwi-a-v---   1.99g vpool0        0.00    vpool0(0)      
  vpool0          testvg   dwi-------   6.00g               66.69   vpool0_vdata(0)
  [vpool0_vdata]  testvg   Dwi-ao----   6.00g                       /dev/sda1(0)   

[root@host-087 ~]# lvchange -asy testvg/my_vdo
  Shared activation not compatible with LV type vdo-pool of testvg/vpool0
  Failed to lock logical volume testvg/my_vdo.
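
Note that the rejection names the hidden vdo-pool LV (testvg/vpool0) rather than the visible my_vdo LV, matching the lvs -a output above. A hedged way to confirm the segment types involved, using the standard segtype report field:

# my_vdo reports segtype vdo; the hidden vpool0 reports vdo-pool,
# which is the type the shared-activation check rejects
lvs -a -o lv_name,segtype testvg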

Comment 9 errata-xmlrpc 2021-05-18 15:01:41 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659
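
For completeness, picking up the fix on an affected RHEL 8 host is an ordinary package update; a minimal sketch assuming standard dnf tooling and access to the advisory's repositories:

# Update lvm2 (and the lvm2-lockd subpackage used for shared VGs),
# then confirm the installed version is at least the Fixed In Version above
dnf update lvm2 lvm2-lockd
rpm -q lvm2 lvm2-lockd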

