Bug 1149266 - pvscan --aay does not honor lvm filter
Summary: pvscan --aay does not honor lvm filter
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Petr Rockai
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-03 15:50 UTC by Jack Waterworth
Modified: 2021-09-03 12:40 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-04 13:50:28 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
pvscan --ay --vvvv (148.25 KB, text/plain)
2014-10-03 15:52 UTC, Jack Waterworth
no flags Details

Description Jack Waterworth 2014-10-03 15:50:52 UTC
Description of problem:
devices that are rejected via the filter in lvm.conf are still activated at boot time when lvmetad is enabled

Version-Release number of selected component (if applicable):
kernel-3.10.0-123.8.1.el7.x86_64
lvm2-2.02.105-14.el7.x86_64

How reproducible:
every time


Steps to Reproduce:
1. create a pv, vg, and lv on top of an existing lv
2. create a filter to ONLY accept sdX devices
3. reboot the machine, or run: lvm pvscan --background --cache --activate ay
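 
For illustration, a minimal sketch of the steps above (device names such as /dev/vdd and the sizes are taken from this report; this is not a verified script):

  # build the "inner" VG/LV on a real disk
  pvcreate /dev/vdd
  vgcreate testing /dev/vdd
  lvcreate -L 10G -n lvol0 testing

  # build another VG/LV on top of that LV
  pvcreate /dev/testing/lvol0
  vgcreate bad_vg /dev/testing/lvol0
  lvcreate -l 100%FREE -n lvol0 bad_vg

  # in /etc/lvm/lvm.conf, accept only /dev/vd* devices:
  #   filter = [ "a|/dev/vd.*|", "r|.*|" ]

  # then reboot, or trigger autoactivation by hand:
  lvm pvscan --background --cache --activate ay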

Actual results:
the LV on top of the device that should not be accessible is activated


Expected results:
the LV on top of the device that should not be accessible should be ignored.


Additional info:

In my reproducer, bad_vg-lvol0 is the device that SHOULD be filtered and NOT activated:

[root@jack-rhel7 rules.d]# dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID                                                                
rhel_unused-swap 253   1 L--w    2    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOwD4u3FqpM3fETmh0v2wDOQXr7cyCbZjz
mpatha           253   4 L--w    0    1      0 mpath-36001405057e565c4e8b45058aabda3c2                             
rhel_unused-root 253   0 L--w    1    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOAUAIqW0NAvTMCk5lRuOg4123RUMHeN0u
testing-lvol0    253   2 L--w    1    1      0 LVM-C1vUnOiBkBKLkxlRCwHQrgBo1EucmNwaU7ZCWKfTWAM9TKZju5lBPtuIdXZ88qRG
bad_vg-lvol0     253   3 L--w    0    1      0 LVM-y4CroTdZzfXob1pQZFjjkJ9ZrB7Pip9LmTwK7pinWzXdgoVYc8mTUkOZlnLlaXrk

[root@jack-rhel7 rules.d]# dmsetup remove bad_vg-lvol0

[root@jack-rhel7 rules.d]# dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID                                                                
rhel_unused-swap 253   1 L--w    2    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOwD4u3FqpM3fETmh0v2wDOQXr7cyCbZjz
mpatha           253   4 L--w    0    1      0 mpath-36001405057e565c4e8b45058aabda3c2                             
rhel_unused-root 253   0 L--w    1    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOAUAIqW0NAvTMCk5lRuOg4123RUMHeN0u
testing-lvol0    253   2 L--w    0    1      0 LVM-C1vUnOiBkBKLkxlRCwHQrgBo1EucmNwaU7ZCWKfTWAM9TKZju5lBPtuIdXZ88qRG

[root@jack-rhel7 rules.d]# lvm pvscan --background --cache --activate ay

[root@jack-rhel7 rules.d]# dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID                                                                
rhel_unused-swap 253   1 L--w    2    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOwD4u3FqpM3fETmh0v2wDOQXr7cyCbZjz
mpatha           253   4 L--w    0    1      0 mpath-36001405057e565c4e8b45058aabda3c2                             
rhel_unused-root 253   0 L--w    1    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOAUAIqW0NAvTMCk5lRuOg4123RUMHeN0u
testing-lvol0    253   2 L--w    1    1      0 LVM-C1vUnOiBkBKLkxlRCwHQrgBo1EucmNwaU7ZCWKfTWAM9TKZju5lBPtuIdXZ88qRG
bad_vg-lvol0     253   3 L--w    0    1      0 LVM-y4CroTdZzfXob1pQZFjjkJ9ZrB7Pip9LmTwK7pinWzXdgoVYc8mTUkOZlnLlaXrk

[root@jack-rhel7 rules.d]# dmsetup remove bad_vg-lvol0
[root@jack-rhel7 rules.d]# vgchange -aay
  No device found for PV Fc6UXb-BIDZ-l4bX-diYI-m70C-WMcq-3SGbWy.
  1 logical volume(s) in volume group "testing" now active
  2 logical volume(s) in volume group "rhel_unused" now active

[root@jack-rhel7 rules.d]# dmsetup info -c
Name             Maj Min Stat Open Targ Event  UUID                                                                
rhel_unused-swap 253   1 L--w    2    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOwD4u3FqpM3fETmh0v2wDOQXr7cyCbZjz
mpatha           253   4 L--w    0    1      0 mpath-36001405057e565c4e8b45058aabda3c2                             
rhel_unused-root 253   0 L--w    1    1      0 LVM-deev1gwAqEe9YYgUH1Zqv8uL2ETKp2kOAUAIqW0NAvTMCk5lRuOg4123RUMHeN0u
testing-lvol0    253   2 L--w    0    1      0 LVM-C1vUnOiBkBKLkxlRCwHQrgBo1EucmNwaU7ZCWKfTWAM9TKZju5lBPtuIdXZ88qRG

[root@jack-rhel7 rules.d]# grep "  filter" /etc/lvm/lvm.conf
    filter = [ "a|/dev/vd.*|", "r|.*|" ]

[root@jack-rhel7 rules.d]# lvs -o +devices --config 'devices{ filter = [ "a|.*|" ] }'
  LV    VG          Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert Devices              
  lvol0 bad_vg      -wi-------  9.99g                                              /dev/testing/lvol0(0)
  root  rhel_unused -wi-ao----  8.51g                                              /dev/vda2(256)       
  swap  rhel_unused -wi-ao----  1.00g                                              /dev/vda2(0)         
  lvol0 testing     -wi-a----- 10.00g                                              /dev/vdd(0)          

[root@jack-rhel7 rules.d]# lvm pvscan --activate ay --cache
  No device found for PV Fc6UXb-BIDZ-l4bX-diYI-m70C-WMcq-3SGbWy.

[root@jack-rhel7 rules.d]# lvs -o +devices --config 'devices{ filter = [ "a|.*|" ] }'
  LV    VG          Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert Devices              
  lvol0 bad_vg      -wi-a-----  9.99g                                              /dev/testing/lvol0(0)
  root  rhel_unused -wi-ao----  8.51g                                              /dev/vda2(256)       
  swap  rhel_unused -wi-ao----  1.00g                                              /dev/vda2(0)         
  lvol0 testing     -wi-ao---- 10.00g                                              /dev/vdd(0)

Comment 1 Jack Waterworth 2014-10-03 15:52:22 UTC
Created attachment 943756 [details]
pvscan --ay --vvvv

Comment 3 Peter Rajnoha 2014-10-06 11:05:13 UTC
Actually, this is not a bug - "pvscan --cache" respects only global_filter by design, and the same applies to the autoactivation part. So currently, what happens during pvscan --cache is:

  1) device scanned

  2) device checked against "devices/global_filter"; if it does not pass, exit immediately

  3) lvmetad updated/notified about metadata found on the device

  4) response received from lvmetad as confirmation about the update/notification - this also contains information about VG completeness (whether all PVs that make up the VG are present in the system)

  5) if the VG is complete and it passes both activation/volume_list and activation/auto_activation_volume_list, the VG/LV is activated (volume_list applies to activation in general; auto_activation_volume_list is checked in addition when --activate ay, or -aay for short, is used - this is the case of the "pvscan --cache -aay" call in the udev rule); a sketch of where these settings live in lvm.conf follows below
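 
For reference (the values below are placeholders, not recommendations), the settings named in steps 2) and 5) live in the devices{} and activation{} sections of lvm.conf:

  devices {
      # the only filter that pvscan --cache (and thus autoactivation) applies
      global_filter = [ "a|.*|" ]
      # applied by the other LVM commands to the data returned from lvmetad
      filter = [ "a|.*|" ]
  }
  activation {
      # which VGs/LVs may be activated at all
      volume_list = [ "rhel_unused", "testing" ]
      # additionally checked when --activate ay / -aay is used
      auto_activation_volume_list = [ "rhel_unused" ]
  }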


Now, this is somewhat internal, but it explains exactly why we're not checking against devices/filter in pvscan --cache -aay:

If we wanted to apply "devices/filter" - that would be exactly part of step 5). But in that case we'd need to iterate over *all* the PVs in the VG to check them against "devices/filter", and this would happen on each "pvscan --cache -aay" call. This is because some of the PVs in the VG could be filtered by devices/filter while others are not, and we never know the order in which the events come (the order in which PVs appear in the system). And if any of the PVs in the VG is filtered, we don't want to activate the VG/LV - we just don't do partial activations during autoactivation.

For example:
  - let's have PV A filtered via devices/filter (but not devices/global_filter)
  - PV A appears first in the system, it passes devices/global_filter, it's scanned, lvmetad is updated, it does not pass devices/filter - but that doesn't matter yet since the VG is not complete, so we're not activating anyway
  - PV B appears second in the system, it passes devices/global_filter, it's scanned, lvmetad is updated, the VG is complete now (lvmetad has returned this info), B passes devices/filter, and autoactivation is triggered even though A is rejected by devices/filter! And this would be a bug!

That last step is the reason we'd need to iterate over all PVs in the VG to check them against devices/filter if we applied it as well. Simply put, this goes against the design of "pvscan --cache <single_device>", which is responsible for processing only a single device at a time.


To sum it up:

  A) when using lvmetad, the filter that is evaluated is:

     - devices/global_filter for the lvmetad update - this is pvscan --cache (manual or automatic via the udev rule) OR the very first LVM command that updates a new, not yet initialized lvmetad instance - this is actually the same code path as pvscan --cache. Reason: lvmetad can't populate itself when it starts.

     - otherwise, devices/filter on the LVM tools' side for the information returned from lvmetad (if lvmetad is already initialized)

  B) when lvmetad is not used, the filter that is evaluated is:
     - devices/global_filter and then devices/filter just after that for all LVM commands (lvmetad is simply not in the game here)
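 
To check which filter values a given host actually has in effect, something like the following should work (stated as an assumption; the exact dumpconfig behaviour on lvm2-2.02.105 was not verified for this report):

  lvm dumpconfig devices/global_filter
  lvm dumpconfig devices/filter
  lvm dumpconfig activation/auto_activation_volume_list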


When looking at A, you can see that lvmetad caches the information from all PVs unless these are filtered by devices/global_filter. Each lvmetad client (including all LVM tools *except pvscan --cache*) can provide more granular filtering by using devices/filter. The reason for introducing global_filter was that lvmetad can't handle duplicates (duplicate PV UUIDs), and these must be filtered globally! The most common appearance of duplicate PV UUIDs is when copying virtual machine images to clone a new guest from the original image... Another reason for using global_filter is when users don't want LVM to scan and process some devices at all, for whatever reason.
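 
For example (the device path is hypothetical), a host that keeps guest image copies visible under /dev/mapper could reject them for all LVM tools, including the lvmetad update, with:

  devices {
      global_filter = [ "r|/dev/mapper/guest-.*|", "a|.*|" ]
  }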

This devices/global_filter and devices/filter separation is important for using LVM with lvmetad properly in cases as described above.


(There are actually more filters evaluated than devices/global_filter and devices/filter - built-in filters like the mpath filter or the md component filter etc. - but that is not important here... I'll try to document better how all these filters are evaluated once the changes we're doing in this area settle down.)

Comment 4 Peter Rajnoha 2014-10-06 11:10:16 UTC
(In reply to Jack Waterworth from comment #0)
> Description of problem:
> devices that are rejected via the filter in lvm.conf are still activated at
> boot time when lvmetad is enabled

If you want LVM to completely ignore the device so it's not scanned and cached by LVM at all, use global_filter. If you still want LVM to process the device, but you just don't want LVM to autoactivate it, you need to define auto_activation_volume_list to activate only the volumes needed (or alternatively, you can mark LVs to be skipped on activation directly - see also lvcreate/lvchange -k/--setactivationskip {y|n} in the lvcreate/lvchange man pages).
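 
For illustration (VG/LV names are taken from this report and are only examples), the alternatives look roughly like this:

  # 1) hide the device from LVM entirely (lvm.conf):
  #    global_filter = [ "a|/dev/vd.*|", "r|.*|" ]

  # 2) let LVM see it but never autoactivate it (lvm.conf):
  #    auto_activation_volume_list = [ "rhel_unused", "testing" ]

  # 3) mark the individual LV to be skipped on activation:
  lvchange -ky bad_vg/lvol0      # set the activation-skip flag
  lvchange -ay -K bad_vg/lvol0   # -K is needed to activate it despite the flag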

Comment 5 Petr Rockai 2014-11-04 13:50:28 UTC
As explained in detail by Peter Rajnoha, this is not a bug. Please use the global filter or the autoactivation list to restrict access/activation to devices globally.

