Bug 1139441 - LVM should not autoactivate nested LVs (see comment 6)
Summary: LVM should not autoactivate nested LVs (see comment 6)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-08 22:45 UTC by Nitin Yewale
Modified: 2023-03-08 07:26 UTC (History)
CC List: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-25 10:56:41 UTC
Target Upstream Version:
Embargoed:



Description Nitin Yewale 2014-09-08 22:45:51 UTC
Description of problem:

- Create a block backstore from an LV on the targetd server.
- Export it as an iSCSI device to an iSCSI client.
- Create an LVM PV/VG/LV on that device on the iSCSI client.
- The LVM device created on the iSCSI client gets reflected (auto-activated) on the targetd server. After a reboot of the server, target.service does not start, and all the configuration is lost even though `targetcli saveconfig` was used to save it.


Version-Release number of selected component (if applicable):

[root@targetdhost ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[root@targetdhost ~]# rpm -qa |grep targetcli
targetcli-2.1.fb34-1.el7.noarch
[root@targetdhost ~]# 

How reproducible:
Every time (reproduced twice).

Steps to Reproduce:
1. initial configuration in targetd server from where the block backstore is exported to iscsi client


[root@targetdhost ~]# pvs ; vgs ; lvs ; dmsetup info -c
  PV                        VG           Fmt  Attr PSize  PFree 
  /dev/testcluster/rhel5ha1              lvm2 a--   1.00g  1.00g
  /dev/vda2                 rhel         lvm2 a--  13.68g  4.00m
  /dev/vdb                  targetclivg1 lvm2 a--  16.00g  4.00g
  /dev/vdc                  testcluster  lvm2 a--  80.00g 74.00g
  VG           #PV #LV #SN Attr   VSize  VFree 
  rhel           1   2   0 wz--n- 13.68g  4.00m
  targetclivg1   1   3   0 wz--n- 16.00g  4.00g
  testcluster    1   4   0 wz--n- 80.00g 74.00g
  LV        VG           Attr       LSize    Pool Origin Data%  Move Log Cpy%Sync Convert
  root      rhel         -wi-ao----   12.70g                                             
  swap      rhel         -wi-ao---- 1000.00m                                             
  lv1       targetclivg1 -wi-ao----    4.00g                                             
  lv2       targetclivg1 -wi-ao----    4.00g                                             
  lv3       targetclivg1 -wi-ao----    4.00g                                             
  rhel5gfs1 testcluster  -wi-a-----    1.00g                                             
  rhel5gfs2 testcluster  -wi-a-----    2.00g                                             
  rhel5ha1  testcluster  -wi-ao----    1.00g       <<<<<<<<<<  We see this device open because it's being used by the iSCSI client.
  rhel5ha2  testcluster  -wi-a-----    2.00g                                             
Name                  Maj Min Stat Open Targ Event  UUID                                                                
targetclivg1-lv3      253   4 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBDnCzgLrXAGFAMkesHTTZN8awogxnT6jpW
targetclivg1-lv2      253   3 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBD3F87CZGanoHg2MdjCUgF4wsmkk7oDMyl
targetclivg1-lv1      253   2 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBD9lBuDBzbCor4D7qo6jPIVk9mj0P4Tmg4
rhel-swap             253   1 L--w    2    1      0 LVM-NG0e0eqormcg4JwwdGHwhCwkblvRxSWCzASEPVJbtBKglFQm84JH99HuwF5eFivT
rhel-root             253   0 L--w    1    1      0 LVM-NG0e0eqormcg4JwwdGHwhCwkblvRxSWC7Q6K0yDkIXY6dkBXovEN3P7Gr47N9I5L
testcluster-rhel5ha2  253   6 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuELVxA8qcPiOFMJL2Hu6F9PEG3NDgYp2J4
testcluster-rhel5gfs2 253   8 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuE5lTAvj1bmZ0LUXB06jcnwoTkY5aioEIF
testcluster-rhel5ha1  253   5 L--w    1    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuERGM1TmQzVtCjHhq21jo2Mwyc2cAwwXJo
testcluster-rhel5gfs1 253   7 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuE2Qykvq6N0lPDLJuwHZHN0Cu69ZgkcUAh
[root@targetdhost ~]# 


[root@targetdhost ~]# targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 4]
  | | o- block1 .............................................................. [/dev/targetclivg1/lv1 (4.0GiB) write-thru activated]
  | | o- block2 .............................................................. [/dev/targetclivg1/lv2 (4.0GiB) write-thru activated]
  | | o- block3 .............................................................. [/dev/targetclivg1/lv3 (4.0GiB) write-thru activated]
  | | o- block4 .......................................................... [/dev/testcluster/rhel5ha1 (1.0GiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 2]
  | o- iqn.2014-07.com.example.targetd:444 ............................................................................... [TPGs: 1]
  | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- iqn.2014-07.com.redhat:testinit ...................................................................... [Mapped LUNs: 3]
  | |   |   o- mapped_lun0 ................................................................................ [lun0 block/block1 (rw)]
  | |   |   o- mapped_lun1 ................................................................................ [lun1 block/block2 (rw)]
  | |   |   o- mapped_lun2 ................................................................................ [lun2 block/block3 (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 3]
  | |   | o- lun0 ........................................................................... [block/block1 (/dev/targetclivg1/lv1)]
  | |   | o- lun1 ........................................................................... [block/block2 (/dev/targetclivg1/lv2)]
  | |   | o- lun2 ........................................................................... [block/block3 (/dev/targetclivg1/lv3)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 192.168.122.111:3260 ............................................................................................. [OK]
  | o- iqn.2014-09.com.example.com:rhel5ha ............................................................................... [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.2014-09.com.example.com:rhel5ha:acl1 ............................................................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................................................................................ [lun0 block/block4 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0 ....................................................................... [block/block4 (/dev/testcluster/rhel5ha1)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 192.168.122.111:3260 ............................................................................................. [OK]
  o- loopback ......................................................................................................... [Targets: 0]
[root@targetdhost ~]# 



2. On iscsi client 

[root@node1 ~]# cat /proc/scsi/scsi 
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: LIO-ORG  Model: block4           Rev: 4.0 
  Type:   Direct-Access                    ANSI SCSI revision: 05

[root@node1 ~]# ls -lR /var/lib/iscsi/
/var/lib/iscsi/:
total 48
drwxr-xr-x 2 root root 4096 Oct 18  2012 ifaces
drwxr-xr-x 2 root root 4096 Oct 18  2012 isns
drwxr-xr-x 4 root root 4096 Sep  8 23:32 nodes
drwxr-xr-x 3 root root 4096 Sep  8 08:27 send_targets
drwxr-xr-x 2 root root 4096 Oct 18  2012 slp
drwxr-xr-x 2 root root 4096 Oct 18  2012 static

/var/lib/iscsi/ifaces:
total 0

/var/lib/iscsi/isns:
total 0

/var/lib/iscsi/nodes:
total 8
drw------- 3 root root 4096 Sep  8 23:32 iqn.2014-07.com.example.targetd:444
drw------- 3 root root 4096 Sep  8 23:32 iqn.2014-09.com.example.com:rhel5ha

/var/lib/iscsi/nodes/iqn.2014-07.com.example.targetd:444:
total 4
drw------- 2 root root 4096 Sep  8 23:32 192.168.122.111,3260,1

/var/lib/iscsi/nodes/iqn.2014-07.com.example.targetd:444/192.168.122.111,3260,1:
total 4
-rw------- 1 root root 1824 Sep  8 23:32 default

/var/lib/iscsi/nodes/iqn.2014-09.com.example.com:rhel5ha:
total 4
drw------- 2 root root 4096 Sep  8 23:32 192.168.122.111,3260,1

/var/lib/iscsi/nodes/iqn.2014-09.com.example.com:rhel5ha/192.168.122.111,3260,1:
total 4
-rw------- 1 root root 1824 Sep  8 23:32 default

/var/lib/iscsi/send_targets:
total 4
drw------- 2 root root 4096 Sep  8 23:32 192.168.122.111,3260

/var/lib/iscsi/send_targets/192.168.122.111,3260:
total 12
lrwxrwxrwx 1 root root  79 Sep  8 23:32 iqn.2014-07.com.example.targetd:444,192.168.122.111,3260,1,default -> /var/lib/iscsi/nodes/iqn.2014-07.com.example.targetd:444/192.168.122.111,3260,1
lrwxrwxrwx 1 root root  79 Sep  8 23:32 iqn.2014-09.com.example.com:rhel5ha,192.168.122.111,3260,1,default -> /var/lib/iscsi/nodes/iqn.2014-09.com.example.com:rhel5ha/192.168.122.111,3260,1
-rw------- 1 root root 556 Sep  8 23:32 st_config

/var/lib/iscsi/slp:
total 0

/var/lib/iscsi/static:
total 0
[root@node1 ~]# 

[nyewale@nyewale ~]$ rhel5ha1 
root.122.141's password: 
Last login: Mon Sep  8 23:28:26 2014 from 192.168.122.1
[root@node1 ~]# pvs ; vgs ;lvs 
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda   havg       lvm2 a--  1020.00M    0 
  /dev/vda2  VolGroup00 lvm2 a--    19.88G    0 
  VG         #PV #LV #SN Attr   VSize    VFree
  VolGroup00   1   2   0 wz--n-   19.88G    0 
  havg         1   1   0 wz--n- 1020.00M    0 
  LV       VG         Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao   17.91G                                      
  LogVol01 VolGroup00 -wi-ao    1.97G                                      
  halv1    havg       -wi-ao 1020.00M                                      
[root@node1 ~]# pvs ; echo "*************" ; vgs ; echo "*******************" ; lvs 
  PV         VG         Fmt  Attr PSize    PFree
  /dev/sda   havg       lvm2 a--  1020.00M    0 
  /dev/vda2  VolGroup00 lvm2 a--    19.88G    0 
*************
  VG         #PV #LV #SN Attr   VSize    VFree
  VolGroup00   1   2   0 wz--n-   19.88G    0 
  havg         1   1   0 wz--n- 1020.00M    0 
*******************
  LV       VG         Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao   17.91G                                      
  LogVol01 VolGroup00 -wi-ao    1.97G                                      
  halv1    havg       -wi-ao 1020.00M    

Created VG havg and LV halv1 on the iSCSI client. After the targetd server is rebooted, this VG gets reflected on the server, target.service does not start, and all the target configuration on the server is lost.

3. After reboot of targetd server

[root@targetdhost ~]# systemctl status target.service
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: failed (Result: exit-code) since Tue 2014-09-09 02:04:07 IST; 1h 48min ago
  Process: 1187 ExecStart=/usr/bin/targetctl restore (code=exited, status=1/FAILURE)
 Main PID: 1187 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/target.service

Sep 09 02:04:07 targetdhost target[1187]: File "/usr/lib/python2.7/site-packages/rtslib/root.py", line 196, in restore
Sep 09 02:04:07 targetdhost target[1187]: so_obj = so_cls(**kwargs)
Sep 09 02:04:07 targetdhost target[1187]: File "/usr/lib/python2.7/site-packages/rtslib/tcm.py", line 673, in __init__
Sep 09 02:04:07 targetdhost target[1187]: self._configure(dev, wwn, readonly, write_back)
Sep 09 02:04:07 targetdhost target[1187]: File "/usr/lib/python2.7/site-packages/rtslib/tcm.py", line 686, in _configure
Sep 09 02:04:07 targetdhost target[1187]: + "device %s is already in use." % dev)
Sep 09 02:04:07 targetdhost target[1187]: rtslib.utils.RTSLibError: Cannot configure StorageObject because device /dev/testcluster/rhel5ha1 is already in use.
Sep 09 02:04:07 targetdhost systemd[1]: target.service: main process exited, code=exited, status=1/FAILURE
Sep 09 02:04:07 targetdhost systemd[1]: Failed to start Restore LIO kernel target configuration.
Sep 09 02:04:07 targetdhost systemd[1]: Unit target.service entered failed state.
[root@targetdhost ~]# 



[root@targetdhost ~]# pvs ; vgs ; lvs
  PV                        VG           Fmt  Attr PSize    PFree 
  /dev/testcluster/rhel5ha1 havg         lvm2 a--  1020.00m     0 
  /dev/vda2                 rhel         lvm2 a--    13.68g  4.00m
  /dev/vdb                  targetclivg1 lvm2 a--    16.00g  4.00g
  /dev/vdc                  testcluster  lvm2 a--    80.00g 74.00g
  VG           #PV #LV #SN Attr   VSize    VFree 
  havg           1   1   0 wz--n- 1020.00m     0 
  rhel           1   2   0 wz--n-   13.68g  4.00m
  targetclivg1   1   3   0 wz--n-   16.00g  4.00g
  testcluster    1   4   0 wz--n-   80.00g 74.00g
  LV        VG           Attr       LSize    Pool Origin Data%  Move Log Cpy%Sync Convert
  halv1     havg         -wi-a----- 1020.00m                                             
  root      rhel         -wi-ao----   12.70g                                             
  swap      rhel         -wi-ao---- 1000.00m                                             
  lv1       targetclivg1 -wi-ao----    4.00g                                             
  lv2       targetclivg1 -wi-ao----    4.00g                                             
  lv3       targetclivg1 -wi-ao----    4.00g                                             
  rhel5gfs1 testcluster  -wi-a-----    1.00g                                             
  rhel5gfs2 testcluster  -wi-a-----    2.00g                                             
  rhel5ha1  testcluster  -wi-ao----    1.00g              <<<<<<<<<<<                               
  rhel5ha2  testcluster  -wi-a-----    2.00g                                             
[root@targetdhost ~]# dmsetup info -c
Name                  Maj Min Stat Open Targ Event  UUID                                                                
targetclivg1-lv3      253   4 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBDnCzgLrXAGFAMkesHTTZN8awogxnT6jpW
targetclivg1-lv2      253   3 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBD3F87CZGanoHg2MdjCUgF4wsmkk7oDMyl
targetclivg1-lv1      253   2 L--w    1    1      0 LVM-Itetfwigw0ON8PZndQJSyYYBttrXdiBD9lBuDBzbCor4D7qo6jPIVk9mj0P4Tmg4
rhel-swap             253   1 L--w    2    1      0 LVM-NG0e0eqormcg4JwwdGHwhCwkblvRxSWCzASEPVJbtBKglFQm84JH99HuwF5eFivT
rhel-root             253   0 L--w    1    1      0 LVM-NG0e0eqormcg4JwwdGHwhCwkblvRxSWC7Q6K0yDkIXY6dkBXovEN3P7Gr47N9I5L
testcluster-rhel5ha2  253   6 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuELVxA8qcPiOFMJL2Hu6F9PEG3NDgYp2J4
testcluster-rhel5gfs2 253   8 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuE5lTAvj1bmZ0LUXB06jcnwoTkY5aioEIF
testcluster-rhel5ha1  253   5 L--w    1    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuERGM1TmQzVtCjHhq21jo2Mwyc2cAwwXJo
testcluster-rhel5gfs1 253   7 L--w    0    1      0 LVM-naQKk24Tq2Ygcw32LpSr9dOARnpYqGuE2Qykvq6N0lPDLJuwHZHN0Cu69ZgkcUAh
havg-halv1            253   9 L--w    0    1      0 LVM-GxuvzVgjrz9JgCjsd10zTVautXHVdPdSxQVep70Z1VcOcp65jxpTBCoKMrgelkBq <<<<<<<<<

This havg/halv1 pair was created on the iSCSI client.

Also, `targetcli ls` shows:

[root@targetdhost ~]# targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 3]
  | | o- block1 ............................................................ [/dev/targetclivg1/lv1 (4.0GiB) write-thru deactivated]
  | | o- block2 ............................................................ [/dev/targetclivg1/lv2 (4.0GiB) write-thru deactivated]
  | | o- block3 ............................................................ [/dev/targetclivg1/lv3 (4.0GiB) write-thru deactivated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 0]
  o- loopback ......................................................................................................... [Targets: 0]
[root@targetdhost ~]# 


[root@targetdhost ~]# systemctl list-unit-files |grep target.service
target.service                              enabled 


Actual results:
The targetd configuration gets wiped out, and the daemon does not start, failing with an error that the block backstore is already in use.

Also, when we delete havg and halv1 on the targetd server and run `targetcli restoreconfig /etc/target/file`, the configuration gets restored. This should not be necessary, as it removes the device from the iSCSI client as well.

Expected results:

- The targetd server should not reflect (auto-activate) the LVM devices created on the iSCSI client.
- Whenever target.service is enabled, the latest saved configuration should be loaded and the devices should be exported correctly.
- targetd should not show any block backstore as open (not sure of this).

Additional info:
Please let me know if anything is required.

Comment 2 Elcanchee 2014-09-29 20:37:02 UTC
I have the same problem; in my case the VG is:
 c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c

This is not a total solution, but it works fine for me until there is a patch or a better fix.

Create a script (just replace c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c with the VG you want to deactivate):

vi tgtclifix

#!/bin/bash

/usr/sbin/lvchange -an c5ed4b6c-e20c-4c9c-ba63-3c78cca09d7c
## the long identifier is the name of the VG you do not want activated
##

**********************************
Make it executable and copy to /usr/local/bin
 
chmod u+x tgtclifix
cp tgtclifix /usr/local/bin

In /usr/lib/systemd/system  create a tgtclifix.service

cd /usr/lib/systemd/system
vi tgtclifix.service

[Unit]
Description=Provisional fix targetcli lvm bug
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/tgtclifix

[Install]
WantedBy=multi-user.target


****************************************************************************
Now we need to modify target.service and add tgtclifix.service to the After= parameter.

vi /usr/lib/systemd/system/target.service

[Unit]
Description=Restore LIO kernel target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target tgtclifix.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/targetctl restore
ExecStop=/usr/bin/targetctl clear
SyslogIdentifier=target

[Install]
WantedBy=multi-user.target

****************************************************

Hope it works for you,

Comment 3 Dimitris 2014-10-14 12:46:52 UTC
I can confirm this bug as well. It also occurs when the target backstore block device is a whole disk (/dev/sdX, /dev/disk/by-id/scsi-XXXX, etc.): when the initiator system formats it in any way and the target system is rebooted, the configuration is lost with the same "device already in use" error.
More specific info available if needed.
Unfortunately, the workaround from Elcanchee doesn't work in that case, since LVM is not used.
Dimitris

Comment 4 Patrick Hurrelmann 2014-10-29 08:28:44 UTC
A typical use scenario is iSCSI shares for oVirt/RHEV. The storage domains in oVirt/RHEV are LVM-based and become blocked/unusable after a reboot due to this issue. In an oVirt/RHEV environment this is a blocker.

Comment 5 Dave Klotz 2014-11-11 20:07:38 UTC
Can also confirm. Basically this means that every time you reboot an OpenStack Cinder server, it blows away the instances.

Comment 6 Andy Grover 2014-11-12 00:26:57 UTC
LVM is looking inside LVs for LVM PV, VG, and LV signatures and recursively activating LVs. Other than special cases like thinpool LVs, IMHO it should not be doing this. Or at least it should not default to doing this. Or at least there should be a way to turn it off.

"auto_activation_volume_list" or "filter" in lvm.conf were mentioned as possible workarounds or solutions, but ideally the solution would not impose limitations, or fail mysteriously, when the guest LVM names coincide with the host's LVM configuration.

Changing component to LVM.

Comment 7 Alasdair Kergon 2014-11-12 02:02:49 UTC
You have two options:

1) specify ONLY what lvm should activate
2) specify what lvm should NOT activate

There are various ways of specifying both of those and it's hard to suggest which is most appropriate without understanding the way the actual system concerned is being used.  Is there any multipath or md involved, for example?

lvm.conf settings to consider include activation/volume_list and auto_activation_volume_list and devices/global_filter which can filter based on symlinks in /dev.  Is lvmetad being used?
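To illustrate, the settings mentioned above sit in /etc/lvm/lvm.conf roughly as follows. This is only a sketch: the VG names are taken from the targetd host in this report, and whether a whitelist or a reject filter is the right choice depends on the actual system, as noted.

```
# /etc/lvm/lvm.conf -- illustrative fragments, not drop-in configuration

activation {
    # Whitelist approach (option 1): activate ONLY the host's own VGs.
    volume_list = [ "rhel", "targetclivg1", "testcluster" ]

    # Or restrict only autoactivation, leaving manual activation alone:
    # auto_activation_volume_list = [ "rhel", "targetclivg1", "testcluster" ]
}

devices {
    # Blacklist approach (option 2): never scan LVs that are themselves
    # exported as backstores. "r|...|" is a reject pattern matched
    # against device paths (including /dev symlinks).
    global_filter = [ "r|^/dev/testcluster/|" ]
}
```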

Comment 8 Andy Grover 2014-11-12 20:19:54 UTC
What worked for me in /etc/lvm/lvm.conf was 

global_filter = ["r|^/dev/vg0|"]

This ignores PVs found in LVs within vg0, but LVs within vg0 are still activated (because PVs composing vg0 are not *within* vg0).

CC'd people want to try this and see how it works?

I think this becomes a documentation issue, for either lvm, or targetcli, or openstack: "if you're using LVM in both host and guest, you need to do this".
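The reject rule above is just a regular expression matched against device paths, so it can be sanity-checked offline before editing lvm.conf. A minimal sketch (the `vg0` name comes from the filter above; the device paths are illustrative, and the `grep` call only mimics LVM's own matching, it is not LVM itself):

```shell
#!/bin/sh
# Sanity-check the reject pattern from global_filter = ["r|^/dev/vg0|"]:
# it should reject LVs inside vg0 but leave the PVs backing vg0 alone.
filter='^/dev/vg0'

check() {
    # Mimic an LVM reject rule: a path matching the pattern is rejected.
    if echo "$1" | grep -qE "$filter"; then
        echo "$1 rejected"
    else
        echo "$1 scanned"
    fi
}

check /dev/vg0/guest_lv   # LV exported to the initiator: rejected
check /dev/vdb            # PV composing vg0 itself: still scanned
```

This mirrors the point in the comment above: PVs found *inside* vg0's LVs are ignored, while the devices composing vg0 still get scanned, so vg0's own LVs remain activatable.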

Comment 9 Zdenek Kabelac 2014-11-25 10:56:41 UTC
It's a configuration issue: the admin has to ensure the host's lvm2 commands will not manipulate the 'guest' lvm2 devices, so filtering needs to be set.

So far lvm2 doesn't have any other support, although we are considering something like a 'subsystem' configurable option for a future version of lvm2.

Comment 10 Dave Klotz 2014-11-26 17:59:29 UTC
Filter didn't work; it seems counter intuitive that it should break; it should be able to scan and see what it CAN activate, as opposed to having it break and not work at all:

 rtslib.utils.RTSLibError: Device is not a TYPE_DISK block device.
is pretty uninformative considering it doesn't even let you know what device it was having trouble with..

it should ignore anything it can't activate and continue to work.

Comment 11 Andy Grover 2014-11-26 18:42:36 UTC
(In reply to Dave Klotz from comment #10)
> Filter didn't work; it seems counter intuitive that it should break; it
> should be able to scan and see what it CAN activate, as opposed to having it
> break and not work at all:

LVM is scanning everything and seeing what it can activate, and that's really the issue, because in this case (LVM-backed target LUNs, guest also using LVM) we want the target machine to *not* activate anything that is actually meant to be seen only by the guest.

>  rtslib.utils.RTSLibError: Device is not a TYPE_DISK block device.
> is pretty uninformative considering it doesn't even let you know what device
> it was having trouble with..
> 
> it should ignore anything it can't activate and continue to work.

It sounds like you're having a different problem if you're seeing a different exception? I've addressed this in git by changing 

raise RTSLibError("Device is not a TYPE_DISK block device")

at line 679 of /usr/lib/python2.7/site-packages/rtslib/tcm.py to:

raise RTSLibError("Device %s is not a TYPE_DISK block device" % dev)

so you might try that, and consider opening a fresh BZ since it's a different exception you're seeing from rtslib.

Comment 12 Alves 2014-12-27 22:01:07 UTC
I am suffering from the same problem: my configuration disappears, and I am not using LVM; I am exporting a full disk device.

Suddenly my /etc/target/saveconfig.json is back to nothing. I ended up replacing it with one of the 10 old copies in
 /etc/target/backup/saveconfig-20141227-16:52:54.json

This started yesterday, after I did a yum update. Any idea what I can do? This may force me to get a commercial solution, which I really cannot afford.

Comment 13 Wojciech Furmankiewicz 2015-04-02 18:24:51 UTC
I went through the issue when installing RHEV on RHEL 7.1 with iSCSI storage.
Here is a fix which works for me: https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix

Enjoy !
Wojciech

Comment 14 Andy Grover 2015-04-02 18:29:31 UTC
(In reply to Wojciech Furmankiewicz from comment #13)
> I went though the issue when installing RHEV on RHEL 7.1 with iscsi storage.
> Here is a fix which works for me:
> https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix

This is not recommended. Either use the filter as described in comment 8 or if there's another issue then please open a fresh BZ.

Comment 15 Wojciech Furmankiewicz 2015-04-02 18:59:31 UTC
(In reply to Andy Grover from comment #14)
> (In reply to Wojciech Furmankiewicz from comment #13)
> > I went though the issue when installing RHEV on RHEL 7.1 with iscsi storage.
> > Here is a fix which works for me:
> > https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix
> 
> This is not recommended. Either use the filter as described in comment 8 or
> if there's another issue then please open a fresh BZ.

Oops, just verified: the global_filter works fine. I'm feeling slow now :)
OK, in case someone else didn't understand, this is what I did:

1. Exposed /dev/vgdata/iscsi1 LV to RHEV 3.5 cluster using targetd on RHEL7.1.
2. After reboot the config disappeared, exactly as described above.
3. It works fine after applying global_filter = ["r|^/dev/vgdata|"] in /etc/lvm/lvm.conf

No need for new BZ :)

Thanks,
Wojciech

