Bug 817866
Summary: | Fine-grained device activation control | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | Alasdair Kergon <agk> |
Component: | lvm2 | Assignee: | Peter Rajnoha <prajnoha> |
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
Severity: | high | Docs Contact: | |
Priority: | urgent | ||
Version: | 6.4 | CC: | agk, cmarthal, dbayly, dwysocha, heinzm, hfuchi, hui.xiao, jamorgan, jane.lv, jbrassow, jkurik, john.ronciak, jvillalo, jwilleford, kavindya.s.deegala, msnitzer, nperic, prajnoha, prockai, robert.w.love, ruwang, salmy, skito, thornber, xiaolong.wang, zkabelac |
Target Milestone: | rc | Keywords: | FutureFeature |
Target Release: | 6.4 | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | lvm2-2.02.98-1.el6 | Doc Type: | Enhancement |
Doc Text: |
Feature:
LVM2 now lets you specify precisely which Logical Volumes should be activated at boot time and which ones should not. With the assistance of udev and lvmetad, specified devices are activated automatically as soon as all the Physical Volumes making up the Volume Group appear on the system. LVM2 calls this 'autoactivation' and it is triggered by device hotplug. Currently, it supports non-clustered and complete VGs: the VG must have all its PVs in place to activate the VG/LV.
Reason:
Before this change, it was not possible to have a VG/LV activated automatically once all its PVs were attached to the system - users had to activate the VG/LV manually by running vgchange/lvchange -ay directly. The LVM2 autoactivation feature removes this need.
Usage:
To make use of autoactivation, lvmetad must be enabled (the global/use_lvmetad=1 LVM2 configuration option). A new LVM2 configuration option, activation/auto_activation_volume_list, determines which VGs/LVs will be autoactivated. By default, if the list is not specified, all volumes are autoactivated. Users can specify VG names, LV names and VG/LV tags in the auto_activation_volume_list.
Other:
The 'vgchange/lvchange -a/--activate' command has been enhanced to support autoactivation: it recognizes a new '-aay/--activate ay' activation option. When this option is used, volumes are activated only if they match the auto_activation_volume_list. In addition, the pvscan command also recognizes the '-aay/--activate ay' option, which causes the PV to be scanned and, if it is the last PV making up a VG, triggers the autoactivation.
The 'pvscan --cache -aay' command is called automatically (from udev rules) for each PV that appears in the system.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2013-02-21 08:09:41 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 621375, 746047 |
Description
Alasdair Kergon
2012-05-01 16:10:26 UTC
Notes from mornfall:

Automatic device assembly by udev
=================================

We want to asynchronously assemble and activate devices as their components become available. Eventually, the complete storage stack should be covered, including: multipath, cryptsetup, LVM, mdadm. Each of these can be addressed more or less separately.

The general plan of action is to simply provide udev rules for each device "type": for MD component devices, PVs, LUKS/crypto volumes and for multipathed SCSI devices. There's no compelling reason to have a daemon do these things: all systems that actually need to assemble multiple devices into a single entity already either support incremental assembly or will do so shortly.

Whenever in this document we talk about udev rules, these may include helper programs that implement a multi-step process. In many cases, it can be expected that the functionality can be implemented in a couple of lines of shell (or a couple hundred of C).

Multipath
---------

For multipath, we will need to rely on SCSI IDs for now, until we have a better scheme of things, since multipath devices can't be identified until the second path appears, and unfortunately we need to decide whether a device is multipath when the *first* path appears. Anyway, the multipath folks need to sort this out, but it shouldn't be too hard. Just bring up multipathing on anything that appears and is set up for multipathing.

LVM
---

For LVM, the crucial piece of the puzzle is lvmetad, which allows us to build up VGs from PVs as they appear, and at the same time collect information on what is already available. A command, pvscan --cache, is expected to be used to implement udev rules. It is relatively easy to make this command print out a list of VGs (and possibly LVs) that have been made available by adding any particular device to the set of visible devices.
In other words, udev says "hey, /dev/sdb just appeared", calls pvscan --cache, which talks to lvmetad, which says "cool, that makes vg0 complete". Pvscan takes this info and prints it out, and the udev rule can then somehow decide whether anything needs to be done about this "vg0". Presumably a table of devices that need to be activated automatically is made available somewhere in /etc (probably just a simple list of volume groups or logical volumes, given by name or UUID, globbing possible). The udev rule can then consult this file.

Cryptsetup
----------

This may be the trickiest of the lot: the obvious hurdle here is that crypto volumes need to somehow obtain a key (passphrase, physical token or such), meaning there is interactivity involved. On the upside, dm-crypt is a 1:1 system: one encrypted device results in one decrypted device, so no assembly or notification needs to be done. While interactivity is a challenge, there are at least partial solutions around. (TODO: Milan should probably elaborate here.)

(For LUKS devices, these can probably be detected automatically. I suppose that non-LUKS devices can be looked up in crypttab by the rule, to decide what is the appropriate action to take.)

MD
--

Fortunately, MD (namely mdadm) already comes with a mechanism for incremental assembly (mdadm -I or such). We can assume that this fits with the rest of the stack nicely.

Filesystem &c. discovery
========================

Considering other requirements that exist for storage systems (namely large-scale storage deployments), it is absolutely not feasible to have the system hunt automatically for filesystems based on their UUIDs. In a number of cases, this could mean activating tens of thousands of volumes. On small systems, asking for all volumes to be brought up automatically is probably the best route anyway, and once all storage devices are activated, scanning for filesystems is no different from today.
In effect, no action is required on this count: only filesystems that are available on already active devices can be mounted by their UUID. Activating volumes by naming a filesystem UUID is useless, since to read the UUID the volume needs to be active first.

Also, we need to take care of the existing boot scheme where "vgchange" is called in the rc.sysinit script (the same applies to systemd's fedora-storage-init service, the clvmd init script, the netfs init script...). These external scripts need to read the "activation config" as well, and they should not activate a VG which the user has not explicitly chosen in the config.

...also, everything needs to work normally even with use_lvmetad=0 (as lvmetad is not compulsory).

Since RHEL 6.3 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.

Patches are upstream now (lvm v2.02.97). The rc.sysinit script should now call "vgchange -aay" ("autoactivate") instead of "vgchange -ay" to make use of this (I'll open a BZ for initscripts to change that once we have an lvm2 build ready).

There's a new activation/auto_activation_volume_list setting in lvm.conf that works exactly the same as the existing activation/volume_list, but tells lvm which volumes should be autoactivated instead.

The "--activate/-a ay" option (--activate and --available are synonyms now) is now defined for:

- pvscan (called within the lvmetad udev rule to autoactivate volumes on PV appearance)
- lvchange (to activate only the LVs passing the auto_activation_volume_list)
- vgchange (to activate only those volumes in a VG passing the auto_activation_volume_list)
- lvcreate (to activate or not activate the new LV based on the auto_activation_volume_list)

If auto_activation_volume_list is not defined, all volumes are considered for autoactivation.
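Put together, the pieces described above correspond to the following lvm.conf settings (an illustrative fragment; the VG/LV names and the tag are examples taken from the test transcripts later in this report):

```
# /etc/lvm/lvm.conf (fragment)
global {
    # lvmetad must be enabled for autoactivation to work.
    use_lvmetad = 1
}
activation {
    # Only volumes matching this list are autoactivated ("-a ay").
    # If the list is not defined at all, every volume is considered.
    # Entries may be VG names, "vg/lv" names, or "@tag" entries.
    auto_activation_volume_list = [ "vg1", "vg2/lvol4", "@activate_this" ]
}
```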
To make use of the LV autoactivation based on PV appearance, the lvmetad daemon must be running. Autoactivation is not yet supported for partial and clustered volume groups (but this may change for 6.4 if we come up with additional patches in time).

Adding QA ack for 6.4. Devel will need to provide unit testing results, however, before this bug can be ultimately verified by QA.

This also requires initscripts to properly activate LVM volumes (by calling "vgchange -a ay" instead of "vgchange -a y", which would normally activate all volumes no matter what the auto_activation_volume_list setting is), filed as bug #856209, scheduled for 6.4.

=== 'activation/auto_activation_volume_list' commented out in lvm.conf ===

```
[root@rhel6-a ~]# pvcreate /dev/sda /dev/sdb
  Physical volume "/dev/sda" successfully created
  Physical volume "/dev/sdb" successfully created
[root@rhel6-a ~]# vgcreate vg1 /dev/sda
  Volume group "vg1" successfully created
[root@rhel6-a ~]# vgcreate vg2 /dev/sdb
  Volume group "vg2" successfully created
[root@rhel6-a ~]# lvcreate -l1 vg1
  Logical volume "lvol0" created
[root@rhel6-a ~]# lvcreate -l1 -aay vg1
  WARNING: "lvol1" not zeroed
  Logical volume "lvol1" created
[root@rhel6-a ~]# lvcreate -l1 vg2
  Logical volume "lvol0" created
[root@rhel6-a ~]# lvcreate -l1 -a ay vg2
  WARNING: "lvol1" not zeroed
  Logical volume "lvol1" created
[root@rhel6-a ~]# lvs
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi-a--- 4.00m
  lvol1 vg2 -wi-a--- 4.00m
```

=== REBOOT ===

```
[root@rhel6-a ~]# lvs
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi-a--- 4.00m
  lvol1 vg2 -wi-a--- 4.00m
```

=== Set 'activation/auto_activation_volume_list = [ "vg1" ]' ===
=== REBOOT ===

```
[root@rhel6-a ~]# lvs
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi----- 4.00m
  lvol1 vg2 -wi----- 4.00m
```

(only vg1 volumes activated!)

```
[root@rhel6-a ~]# lvcreate -l1 vg1
  Logical volume "lvol2" created
[root@rhel6-a ~]# lvcreate -l1 -a ay vg1
  WARNING: "lvol3" not zeroed
  Logical volume "lvol3" created
[root@rhel6-a ~]# lvcreate -l1 vg2
  Logical volume "lvol2" created
[root@rhel6-a ~]# lvcreate -l1 -a ay vg2
  WARNING: "lvol3" not zeroed
  Logical volume "lvol3" created
[root@rhel6-a ~]# lvs
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol2 vg1 -wi-a--- 4.00m
  lvol3 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi----- 4.00m
  lvol1 vg2 -wi----- 4.00m
  lvol2 vg2 -wi-a--- 4.00m
  lvol3 vg2 -wi----- 4.00m
```

(all volumes in vg1 activated no matter if -aay or -ay is used; vg2/lvol2 activated only because it was created with the implicit -ay; vg2/lvol3 not activated because it was created with -aay and did not pass the auto_activation_volume_list)

=== Set 'activation/auto_activation_volume_list = [ "vg1", "vg2/lvol4" ]' ===

```
[root@rhel6-a ~]# lvcreate -l1 -aay vg2
  WARNING: "lvol4" not zeroed
  Logical volume "lvol4" created
[root@rhel6-a ~]# lvcreate -l1 -aay vg2
  WARNING: "lvol5" not zeroed
  Logical volume "lvol5" created
[root@rhel6-a ~]# lvs
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol2 vg1 -wi-a--- 4.00m
  lvol3 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi----- 4.00m
  lvol1 vg2 -wi----- 4.00m
  lvol2 vg2 -wi-a--- 4.00m
  lvol3 vg2 -wi----- 4.00m
  lvol4 vg2 -wi-a--- 4.00m
  lvol5 vg2 -wi----- 4.00m
```

(vg2/lvol4 activated as it passes the auto_activation_volume_list; vg2/lvol5 not activated as it does not)

=== Set 'activation/auto_activation_volume_list = [ "vg1", "vg2/lvol4", "@activate_this" ]' ===

```
[root@rhel6-a ~]# lvcreate -l1 -aay --addtag activate_this vg2
  WARNING: "lvol6" not zeroed
  Logical volume "lvol6" created
[root@rhel6-a ~]# lvs -o +tags
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert LV Tags
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol2 vg1 -wi-a--- 4.00m
  lvol3 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi----- 4.00m
  lvol1 vg2 -wi----- 4.00m
  lvol2 vg2 -wi----- 4.00m
  lvol3 vg2 -wi----- 4.00m
  lvol4 vg2 -wi-a--- 4.00m
  lvol5 vg2 -wi----- 4.00m
  lvol6 vg2 -wi-a--- 4.00m                                          activate_this
```

(vg2/lvol6 activated as the 'activate_this' tag is on the auto_activation_volume_list)

=== REBOOT ===

```
[root@rhel6-a ~]# lvs -o +tags
  LV    VG  Attr     LSize Pool Origin Data% Move Log Copy% Convert LV Tags
  lvol0 vg1 -wi-a--- 4.00m
  lvol1 vg1 -wi-a--- 4.00m
  lvol2 vg1 -wi-a--- 4.00m
  lvol3 vg1 -wi-a--- 4.00m
  lvol0 vg2 -wi----- 4.00m
  lvol1 vg2 -wi----- 4.00m
  lvol2 vg2 -wi----- 4.00m
  lvol3 vg2 -wi----- 4.00m
  lvol4 vg2 -wi-a--- 4.00m
  lvol5 vg2 -wi----- 4.00m
  lvol6 vg2 -wi-a--- 4.00m                                          activate_this
```

(all should be properly activated based on the auto_activation_volume_list after reboot as well)

Just a note... (In reply to comment #13)

> [root@rhel6-a ~]# lvcreate -l1 -a ay vg2
> WARNING: "lvol1" not zeroed
> Logical volume "lvol1" created

The "not zeroed" warning is expected, as documented in "man lvcreate": "For autoactivated logical volumes, --zero n is always assumed and it can't be overridden." This is for consistency; otherwise we'd end up with part of the volumes zeroed and part of the volumes not zeroed depending on the auto_activation_volume_list setting. Not zeroing any autoactivated volume keeps the behaviour deterministic and not bound to the auto_activation_volume_list, which can be changed at will at any time.

vgchange -aay does not work for tagged LVs unless a _specific_ tag is added to the auto_activation_volume_list. The tag has to be specified precisely, so it does not seem to activate tagged LVs not named in the lvm.conf (i.e. "@*" does not count). I am mentioning this because the default lvm.conf contains the following:

# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]

"@tag1" works, but the last entry with "@*" did not work for me. The instructions in the config file say:

# "@tag" matches any tag set in the LV or VG.
# "@*" matches if any tag defined on the host is also set in the LV or VG

So, does "@tag" mean ANY tag, or a tag named "tag"? (I would presume the latter, since that makes sense.) And "@*" matches if any tag defined on the host... where exactly on the host can it be set? (except on the LV itself and in lvm.conf) Should this sentence be something along the lines of:

# "@*" matches if any tag defined in the auto_activation configuration is also set on the LV or VG

If so, this situation does not work for me as it is now.

Tested with: lvm2-2.02.98-3.el6

lvm.conf: auto_activation_volume_list = [ "test_vg1", "@*", "test1_vg/lvol01" ]

After running vgchange -aay:

```
(07:02:39) [root@r6-node01:~]$ lvs -a -o +lv_tags
  LV        VG       Attr      LSize Pool Origin Data% Move Log Cpy%Sync Convert LV Tags
  lv_root   VolGroup -wi-ao--- 7.54g
  lv_swap   VolGroup -wi-ao--- 1.97g
  lvol01    test1_vg -wi-a---- 1.00g
  lvol1     test1_vg -wi------ 1.00g
  tagged    test1_vg -wi------ 1.00g                                            tagged
  lvol_test test_vg1 -wi-a---- 1.00g
```

Marking this as verified, with a note that the documentation has to be improved to explain the consequences and workings of the default settings of volume_list: that is, when volume_list is NOT set in lvm.conf, what one can expect from the auto activation list and its behaviour when using host tags (the tags section) and the wildcard (@*) for matching.

https://lists.fedorahosted.org/pipermail/lvm2-commits/2012-November/000390.html

Document that lvm.conf activation/volume_list defaults to @* when there's a host tag.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html