Bug 1391090 - able to mount vg-lv after successful vgexport of vg
Summary: able to mount vg-lv after successful vgexport of vg
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1385242
 
Reported: 2016-11-02 14:27 UTC by Nathaniel Weddle
Modified: 2021-09-03 12:53 UTC
CC List: 14 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 21:49:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
  System ID: Red Hat Product Errata RHBA-2017:2222
  Priority/Status: normal, SHIPPED_LIVE
  Summary: lvm2 bug fix and enhancement update
  Last Updated: 2017-08-01 18:42:41 UTC

Description Nathaniel Weddle 2016-11-02 14:27:31 UTC
Description of problem:
following a successful vgexport of a VG, the LV in that VG may still be mounted manually, or will be mounted from fstab after a reboot

Version-Release number of selected component (if applicable):
RHEL7.0 to current release

How reproducible:
100%

Steps to Reproduce:
1. vgchange -an myvg
2. vgexport myvg
3. mount /dev/mapper/myvg-lv /mnt/test

Actual results:
myvg-lv mounts successfully after successful vgexport

Expected results:
myvg-lv should not be accessible after successful vgexport

Additional info:
bug persists after reboot;
setting use_lvmetad=0 (in the global section of /etc/lvm/lvm.conf) seems to work around the bug; a consolidated reproduction sketch follows below
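
For reference, a minimal reproduction sketch assembled from the steps above (the device /dev/sdb, VG name "myvg", and mount point /mnt/test are placeholders; run only on a scratch machine):

    # the bug only reproduces with lvmetad enabled (use_lvmetad=1)
    lvmconfig global/use_lvmetad

    # build a throwaway VG with one LV and a filesystem
    pvcreate /dev/sdb
    vgcreate myvg /dev/sdb
    lvcreate -n lv -L 512m myvg
    mkfs.ext4 /dev/myvg/lv

    # deactivate, then export the VG
    vgchange -an myvg
    vgexport myvg

    # after a reboot (or an explicit 'pvscan --cache -aay'), the LV of the
    # exported VG is autoactivated and this mount succeeds, even though an
    # exported VG should be inaccessible
    pvscan --cache -aay
    mkdir -p /mnt/test
    mount /dev/mapper/myvg-lv /mnt/test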

Comment 1 Zdenek Kabelac 2016-11-02 14:50:16 UTC
(In reply to Nathaniel Weddle from comment #0)
> Description of problem:
> following successful vgexport of vg, vg-lv may be mounted manually, or will
> be mounted by fstab after reboot
> 
> Version-Release number of selected component (if applicable):
> RHEL7.0 to current release
> 
> How reproducible:
> 100%
> 
> Steps to Reproduce:
> 1.vgchange -an myvg
> 2.vgexport myvg
> 3.mount myvg-lv /mnt/test

Did the deactivation finish successfully (return code 0)? No error was reported.

Can you capture 'dmsetup table' output before vgexport?

For now it's unclear how you would be able to mount an 'inactive' LV.

Comment 2 David Teigland 2016-11-02 14:58:03 UTC
It works correctly for me on 7.2 and with the latest code.  In addition to the commands Zdenek suggested, I'd also run the following to ensure everything is reporting what we expect (a consolidated script follows the list):

before vgexport:

vgchange -an myvg
vgs myvg
lvs myvg
ls /dev/myvg
dmsetup table

after vgexport:

ls /dev/myvg
dmsetup table
vgs myvg
lvs myvg
lvchange -ay myvg/lv
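
A sketch wrapping those checks into a single script so the before/after state lands in one log (the VG name "myvg" and LV name "lv" are placeholders):

    #!/bin/sh
    # dump VG/LV state on both sides of vgexport
    vg=myvg

    vgchange -an $vg

    echo "== before vgexport =="
    vgs $vg
    lvs $vg
    ls /dev/$vg
    dmsetup table

    vgexport $vg

    echo "== after vgexport =="
    ls /dev/$vg
    dmsetup table
    vgs $vg
    lvs $vg
    # expected to be refused while the VG is exported
    lvchange -ay $vg/lv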

Comment 4 John Pittman 2016-11-02 20:45:47 UTC
Zdenek, David,

I've attached my reproduction on this.  And here is the cli output:

Reproduction; use_lvmetad must be set to 1:
===========================================

[root@localhost ~]# lvs
  LV     VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home   rhel   -wi-ao----    1.00g                                                    
  root   rhel   -wi-ao----    5.02g                                                    
  swap   rhel   -wi-ao---- 1000.00m                                                    
  testlv testvg -wi-ao---- 1020.00m     
                                               
[root@localhost ~]# umount /test1

[root@localhost ~]# vgchange -an testvg
  0 logical volume(s) in volume group "testvg" now active

[root@localhost ~]# vgexport testvg
  Volume group "testvg" successfully exported

......reboot.....

[root@localhost ~]# lvs
  Volume group testvg is exported
  LV   VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel -wi-ao----    1.00g                                                    
  root rhel -wi-ao----    5.02g                                                    
  swap rhel -wi-ao---- 1000.00m         

[root@localhost ~]# ls /dev/mapper | grep test
testvg-testlv

[root@localhost ~]# mount | grep test
/dev/mapper/testvg-testlv on /test1 type ext4 (rw,relatime,seclabel,data=ordered)

I will look further in the morning and provide the info you've requested.

Comment 5 John Pittman 2016-11-02 20:59:18 UTC
Zdenek, David,
Had some extra time... Here is the info requested.

    [root@localhost ~]# umount /test1/
     
    [root@localhost ~]# vgchange -an testvg
      0 logical volume(s) in volume group "testvg" now active
     
    [root@localhost ~]# vgs testvg
      VG     #PV #LV #SN Attr   VSize    VFree
      testvg   1   1   0 wz--n- 1020.00m    0
     
    [root@localhost ~]# lvs testvg
      LV     VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      testlv testvg -wi------- 1020.00m        
     
    [root@localhost ~]# ls /dev/testvg
    ls: cannot access /dev/testvg: No such file or directory
     
    [root@localhost ~]# dmsetup table
    rhel-home: 0 2097152 linear 252:2 10520576
    testvg-testlv: 0 2088960 linear 8:1 2048
    rhel-swap: 0 2048000 linear 252:2 12617728
    rhel-root: 0 10518528 linear 252:2 2048
     
    [root@localhost ~]# vgexport testvg
      Volume group "testvg" successfully exported
     
    [root@localhost ~]# echo $?
    0
     
    [root@localhost ~]# ls /dev/testvg
    ls: cannot access /dev/testvg: No such file or directory
     
    [root@localhost ~]# dmsetup table
    rhel-home: 0 2097152 linear 252:2 10520576
    rhel-swap: 0 2048000 linear 252:2 12617728
    rhel-root: 0 10518528 linear 252:2 2048
     
    [root@localhost ~]# vgs testvg
      VG     #PV #LV #SN Attr   VSize    VFree
      testvg   1   1   0 wzx-n- 1020.00m    0
     
    [root@localhost ~]# lvs testvg
      Volume group testvg is exported
     
    [root@localhost ~]# lvchange -ay testvg/testlv
      Volume group testvg is exported
     
    [root@localhost ~]# rpm -qa | egrep 'lvm*|udev*|device-mapper*|kernel*'
    device-mapper-multipath-0.4.9-85.el7_2.6.x86_64
    device-mapper-libs-1.02.107-5.el7_2.5.x86_64
    lvm2-libs-2.02.130-5.el7_2.5.x86_64
    device-mapper-persistent-data-0.6.2-1.el7_2.x86_64
    kernel-3.10.0-327.el7.x86_64
    device-mapper-multipath-libs-0.4.9-85.el7_2.6.x86_64
    libgudev1-219-19.el7_2.13.x86_64
    python-gudev-147.2-7.el7.x86_64
    device-mapper-1.02.107-5.el7_2.5.x86_64
    device-mapper-event-1.02.107-5.el7_2.5.x86_64
    kernel-tools-libs-3.10.0-327.36.3.el7.x86_64
    lvm2-2.02.130-5.el7_2.5.x86_64
    kernel-3.10.0-327.36.3.el7.x86_64
    python-pyudev-0.15-7.el7_2.1.noarch
    device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64
    kernel-tools-3.10.0-327.36.3.el7.x86_64

Comment 6 David Teigland 2016-11-02 21:12:53 UTC
Thanks, you're correct, I see the same here.  The 'pvscan --cache -aay' activation path is failing to respect the exported state of the VG.
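
For illustration only: the exported state is visible as an 'x' in the third vg_attr character (see "wzx-n-" in comment 5), so the condition the autoactivation path should be honoring can be sketched from the shell (the VG name "testvg" is a placeholder):

    # the third vg_attr bit is 'x' while a VG is exported; autoactivation
    # (the 'pvscan --cache -aay' path) should skip such VGs
    attr=$(vgs --noheadings -o vg_attr testvg | tr -d ' ')
    case "$attr" in
        ??x*) echo "testvg is exported: skip activation" ;;
        *)    echo "testvg is not exported: activation is allowed" ;;
    esac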

Comment 13 errata-xmlrpc 2017-08-01 21:49:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

