Bug 1139441
| Summary: | LVM should not autoactivate nested LVs (see comment 6) | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Nitin Yewale <nyewale> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| lvm2 sub component: | Default / Unclassified | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED NOTABUG | Docs Contact: | |
| Severity: | medium | | |
| Priority: | medium | CC: | agk, agrover, cjcr.cruz, dizoupas, dklotz87, heinzm, jbrassow, msnitzer, prajnoha, prockai, redhat, sales, thornber, wfurmank, zkabelac |
| Version: | 7.0 | Keywords: | Triaged |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-11-25 10:56:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description  Nitin Yewale  2014-09-08 22:45:51 UTC
I have the same problem. In my case the VG is c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c. This is not a total solution, but it works fine for me until they find a patch or a better solution. Create a script, replacing c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c with the VG you want to deactivate:

    vi tgtclifix

    #!/bin/bash
    /usr/sbin/lvchange -an c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c
    ## the big number is the name of the VG you do not want activated ##

Make it executable and copy it to /usr/local/bin:

    chmod u+x tgtclifix
    cp tgtclifix /usr/local/bin

In /usr/lib/systemd/system create a tgtclifix.service:

    cd /usr/lib/systemd/system
    vi tgtclifix.service

    [Unit]
    Description=Provisional fix targetcli lvm bug
    Requires=sys-kernel-config.mount
    After=sys-kernel-config.mount network.target local-fs.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/tgtclifix

    [Install]
    WantedBy=multi-user.target

Now we need to modify target.service and add tgtclifix.service to the After= parameter:

    vi /usr/lib/systemd/system/target.service

    [Unit]
    Description=Restore LIO kernel target configuration
    Requires=sys-kernel-config.mount
    After=sys-kernel-config.mount network.target local-fs.target tgtclifix.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/targetctl restore
    ExecStop=/usr/bin/targetctl clear
    SyslogIdentifier=target

    [Install]
    WantedBy=multi-user.target

Hope this works for you.

I can confirm this bug as well; it also occurs when the target backstore block device is a whole disk (/dev/sdX, /dev/disk/by-id/scsi-XXXX, etc.). When the initiator system formats it in any way and the target system is rebooted, the configuration is lost with the same "device already in use" error. More specific info is available if needed. Unfortunately, the workaround from Elcanchee doesn't work in that case, since LVM is not used. Dimitris

A typical use scenario is e.g. iSCSI shares for oVirt/RHEV. The storage domains on oVirt/RHEV are LVM based and get blocked/unusable after a reboot due to this issue. In an oVirt/RHEV environment this is a blocker.

Also can confirm. Basically this means that every time you reboot an OpenStack Cinder server it blows away the instances.

LVM is looking inside LVs for LVM PV, VG, and LV signatures and recursively activating LVs. Other than special cases like thin pool LVs, IMHO it should not be doing this. Or at least it should not default to doing this, or at least there should be a way to turn it off. "auto_activation_volume_list" or "filter" in lvm.conf are mentioned as possible workarounds or solutions, but ideally the solution would not place limitations or fail mysteriously if the guest LVM names coincide with the host's LVM configuration. Changing component to LVM.

You have two options: 1) specify ONLY what LVM should activate, or 2) specify what LVM should NOT activate. There are various ways of specifying both of those, and it's hard to suggest which is most appropriate without understanding how the actual system concerned is being used. Is there any multipath or md involved, for example? lvm.conf settings to consider include activation/volume_list, activation/auto_activation_volume_list, and devices/global_filter, which can filter based on symlinks in /dev. Is lvmetad being used?
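For illustration, a minimal lvm.conf sketch of those two approaches could look like the following. The names vg_host (the host's own VG) and vg0 (the VG holding the target-backing LVs) are placeholders, not taken from this report, and the settings should be merged into the existing activation and devices sections of /etc/lvm/lvm.conf rather than pasted as new sections:

    # Option 1: whitelist. Only the VGs listed here are auto-activated at boot;
    # anything else, including guest VGs found inside exported LVs, is left alone.
    activation {
        auto_activation_volume_list = [ "vg_host" ]
    }

    # Option 2: blacklist. Reject the LV device nodes under /dev/vg0/ so the host
    # never scans them and therefore never sees or activates the guest PVs/VGs
    # nested inside them. The LVs of vg0 itself still activate, because the PVs
    # composing vg0 are real disks, not paths under /dev/vg0/.
    devices {
        global_filter = [ "r|^/dev/vg0/|" ]
    }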
What worked for me in /etc/lvm/lvm.conf was

    global_filter = ["r|^/dev/vg0|"]

This ignores PVs found in LVs within vg0, but LVs within vg0 are still activated (because the PVs composing vg0 are not *within* vg0). Do the CC'd people want to try this and see how it works? I think this becomes a documentation issue for either lvm, targetcli, or OpenStack: "if you're using LVM in both host and guest, you need to do this".

It's a configuration issue: the admin has to ensure the host's lvm2 commands will not manipulate the 'guest' lvm2, so filtering needs to be set. So far lvm2 doesn't have any other support, although we are considering something like a 'subsystem' configurable option for some future version of lvm2.

Filter didn't work; it seems counterintuitive that it should break. It should be able to scan and see what it CAN activate, as opposed to having it break and not work at all:

    rtslib.utils.RTSLibError: Device is not a TYPE_DISK block device.

is pretty uninformative, considering it doesn't even let you know which device it was having trouble with. It should ignore anything it can't activate and continue to work.

(In reply to Dave Klotz from comment #10)
> Filter didn't work; it seems counterintuitive that it should break. It
> should be able to scan and see what it CAN activate, as opposed to having it
> break and not work at all

LVM is scanning everything and seeing what it can activate, and that's really the issue, because in this case (LVM-backed target LUNs, guest also using LVM) we want the target machine to *not* activate anything that is actually meant to be seen only by the guest.

> rtslib.utils.RTSLibError: Device is not a TYPE_DISK block device.
> is pretty uninformative, considering it doesn't even let you know which
> device it was having trouble with.
>
> It should ignore anything it can't activate and continue to work.

It sounds like you're having a different problem if you're seeing a different exception? I've addressed this in git by changing

    raise RTSLibError("Device is not a TYPE_DISK block device")

at line 679 of /usr/lib/python2.7/site-packages/rtslib/tcm.py to:

    raise RTSLibError("Device %s is not a TYPE_DISK block device" % dev)

so you might try that, and consider opening a fresh BZ since it's a different exception you're seeing from rtslib.

I am suffering from the same problem: my configuration disappears, and I am not using LVM, I am exporting a full disk device. Suddenly, my /etc/target/saveconfig.json is back to nothing. I end up replacing it with one of the ten old copies in /etc/target/backup/saveconfig-20141227-16:52:54.json. This started yesterday, after I did a yum update. Any idea what I can do? This may force me to get a commercial solution, which I really cannot afford.

I went through the issue when installing RHEV on RHEL 7.1 with iSCSI storage. Here is a fix which works for me: https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix Enjoy! Wojciech

(In reply to Wojciech Furmankiewicz from comment #13)
> I went through the issue when installing RHEV on RHEL 7.1 with iSCSI storage.
> Here is a fix which works for me:
> https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix

This is not recommended. Either use the filter as described in comment 8, or if there's another issue then please open a fresh BZ.
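As a rough sketch, applying the comment 8 filter on a running system could look like this. The VG name vg0, the guest VG name guestvg, and the backing LV name lun0 are hypothetical placeholders; adjust them to the actual configuration:

    # After adding to the devices section of /etc/lvm/lvm.conf:
    #     global_filter = [ "r|^/dev/vg0/|" ]
    # deactivate any guest VG the host has already auto-activated, so the
    # backing LV is no longer held open (guestvg is a placeholder name):
    lvchange -an guestvg
    # confirm nothing is stacked on top of the backing LV any more:
    lsblk /dev/vg0/lun0
    # then re-run the LIO restore that failed at boot:
    systemctl restart target.service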
(In reply to Andy Grover from comment #14)
> (In reply to Wojciech Furmankiewicz from comment #13)
> > I went through the issue when installing RHEV on RHEL 7.1 with iSCSI storage.
> > Here is a fix which works for me:
> > https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix
>
> This is not recommended. Either use the filter as described in comment 8, or
> if there's another issue then please open a fresh BZ.

Oops, just verified: the global_filter works fine. I'm feeling slow now :) OK, in case someone else didn't understand, this is what I did:

1. Exposed the /dev/vgdata/iscsi1 LV to a RHEV 3.5 cluster using targetd on RHEL 7.1.
2. After reboot the config disappeared, exactly as described above.
3. It works fine after applying global_filter = ["r|^/dev/vgdata|"] in /etc/lvm/lvm.conf.

No need for a new BZ :) Thanks, Wojciech
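A quick way to confirm a setup like the one above after the next reboot might be the following (assuming the same vgdata layout; the exact output will vary):

    # the guest PVs nested inside /dev/vgdata/* should no longer be listed,
    # only the physical devices backing vgdata itself:
    pvs
    # the saved LIO configuration should have been restored at boot:
    systemctl status target.service
    targetcli ls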