Bug 1261296 - iSCSI setup fails in hosted engine
Status: CLOSED NOTABUG
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: Build
---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Sandro Bonazzola
QA Contact: Ilanit Stein
Whiteboard: integration
Keywords: Automation
Depends On:
Blocks:
Reported: 2015-09-09 03:07 EDT by Sagi Shnaidman
Modified: 2015-10-07 10:55 EDT
CC: 15 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-07 10:55:45 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ylavi: ovirt-3.6.z?
ylavi: ovirt-4.0.0?
ylavi: planning_ack?
sbonazzo: devel_ack+
ylavi: testing_ack?


Attachments
Answers file (2.66 KB, text/plain)
2015-09-09 03:07 EDT, Sagi Shnaidman

Description Sagi Shnaidman 2015-09-09 03:07:07 EDT
Created attachment 1071584 [details]
Answers file

Description of problem:

When running hosted-engine setup with iSCSI storage, it fails with this error:

Error creating Volume Group: Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)
          The selected device is already used.
          To create a vg on this device, you must use Force.

Using Force doesn't help either:
[ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)

The iSCSI target is created just before the hosted-engine installation, and for it alone, so nothing else has touched this device before.


Version-Release number of selected component (if applicable):
3.6

How reproducible:
100%


Steps to Reproduce:
1.
Create iSCSI target on the host:

export HE_IQN=${HE_IQN:-"iqn.2014-07.world.server:storage.target00"}
export HE_IQN_USERID=${HE_IQN_USERID:-"iqn.2014-07.world.server:client"}
export HE_IQN_PASSWORD=${HE_IQN_PASSWORD:-"password"}
yum install -y --nogpgcheck targetcli targetd
mkdir /iscsi
targetcli "/backstores/fileio/ create disk01 /iscsi/disk01.img 30G"
targetcli "/iscsi/ create ${HE_IQN}"
targetcli "/iscsi/${HE_IQN}/tpg1/portals create"
targetcli "/iscsi/${HE_IQN}/tpg1/luns create /backstores/fileio/disk01"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute generate_node_acls=1"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute cache_dynamic_acls=1"
targetcli "/iscsi/${HE_IQN}/tpg1 set auth userid=${HE_IQN_USERID} password=${HE_IQN_PASSWORD}"
targetcli "/iscsi/ set discovery_auth enable=1 userid=${HE_IQN_USERID} password=${HE_IQN_PASSWORD}"
targetcli "saveconfig"
systemctl enable target
systemctl start target
systemctl status target

modprobe dm_multipath
mpathconf --enable --with_multipathd y


systemctl start multipathd
systemctl enable multipathd
systemctl status multipathd

cat <<iscsiEOFiscsi >>/etc/iscsi/iscsid.conf
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = ${HE_IQN_USERID}
discovery.sendtargets.auth.password = ${HE_IQN_PASSWORD}
iscsiEOFiscsi




2.

Login to it to discover LUN ID:

[root@bla ~]# iscsiadm -m node --op=update --name=node.session.auth.authmethod --value=CHAP
iscsiadm: No records found
[root@bla ~]# iscsiadm -m node --op=update --name=node.session.auth.username --value=${HE_IQN_USERID}
iscsiadm: No records found
[root@bla ~]# iscsiadm -m node --op=update --name=node.session.auth.password --value=${HE_IQN_PASSWORD}
iscsiadm: No records found
[root@bla ~]# iscsiadm -m discovery -t sendtargets -p 127.0.0.1
127.0.0.1:3260,1 iqn.2014-07.world.server:storage.target00
[root@bla ~]# iscsiadm -m node -L all
Logging in to [iface: default, target: iqn.2014-07.world.server:storage.target00, portal: 127.0.0.1,3260] (multiple)
Login to [iface: default, target: iqn.2014-07.world.server:storage.target00, portal: 127.0.0.1,3260] successful.
[root@bla ~]# export LUN_ID="$(ls /dev/disk/by-id/scsi-* | cut -d'-' -f3)"
[root@bla ~]# echo "LUN ID=${LUN_ID}"
LUN ID=3600140529e0c7277b23496f8830dadd3
[root@bla ~]# iscsiadm -m node -u all
Logging out of session [sid: 1, target: iqn.2014-07.world.server:storage.target00, portal: 127.0.0.1,3260]
Logout of [sid: 1, target: iqn.2014-07.world.server:storage.target00, portal: 127.0.0.1,3260] successful.
[root@bla ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-28
iscsiadm: No active sessions.
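
The `LUN_ID` extraction above relies on how the udev by-id symlink path splits on dashes; a minimal sketch with a hardcoded sample path (no real device needed):

```shell
# Hypothetical sample path, shaped like the by-id symlink udev creates for the LUN
sample="/dev/disk/by-id/scsi-3600140529e0c7277b23496f8830dadd3"
# Splitting the full path on '-' yields "/dev/disk/by", "id/scsi", and the WWID,
# so field 3 is the LUN's WWID (the same ID multipath exposes under /dev/mapper/)
lun_id="$(echo "${sample}" | cut -d'-' -f3)"
echo "LUN ID=${lun_id}"
```

Note this breaks if more than one `scsi-*` symlink exists; the real command globs them all, so it assumes the freshly logged-in LUN is the only SCSI disk on the host.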

3.

Start hosted-engine installation:

[root@bla ~]# hosted-engine --deploy --config-append=answers_file 
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
          Configuration files: ['/root/answers_file']
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150908194250-zwidjq.log
          Version: otopi-1.4.0_master (otopi-1.4.0-0.0.master.20150821210019.gitbabbcae.el7)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Generating libvirt-spice certificates
[WARNING] Cannot locate gluster packages, Hyper Converged setup support will be disabled.
[ INFO  ] Please abort the setup and install vdsm-gluster, gluster-server >= 3.7.2 and restart vdsmd service in order to gain Hyper Converged setup support.
[ INFO  ] Stage: Environment customization

          --== STORAGE CONFIGURATION ==--

          During customization use CTRL-D to abort.
[ INFO  ] Discovering iSCSI node
[ INFO  ] Connecting to the storage server
          The following luns have been found on the requested target:
                [1]     3600140529e0c7277b23496f8830dadd3       30GiB   LIO-ORG disk01
                        status: free, paths: 1 active

[ INFO  ] Installing on first host

          --== SYSTEM CONFIGURATION ==--


          --== NETWORK CONFIGURATION ==--


          --== VM CONFIGURATION ==--

[ INFO  ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO  ] Checking OVF XML content (could take a few minutes depending on archive size)
[WARNING] OVF does not contain a valid image description, using default.
[ INFO  ] The engine VM will be configured to use 192.168.100.3/24
          The following CPU types are supported by this host:
                 - model_Nehalem: Intel Nehalem Family
                 - model_Penryn: Intel Penryn Family
                 - model_Conroe: Intel Conroe Family
[WARNING] Minimum requirements for disk size not met

          --== HOSTED ENGINE CONFIGURATION ==--

[ INFO  ] Stage: Setup validation
[WARNING] Failed to resolve bla.redhat.com using DNS, it can be resolved only locally

          --== CONFIGURATION PREVIEW ==--

          Bridge interface                   : eth0
          Engine FQDN                        : hengine.ci.lab.tlv.redhat.com
          Bridge name                        : ovirtmgmt
          SSH daemon port                    : 22
          Firewall manager                   : iptables
          Gateway address                    : 192.168.100.1
          Host name for web application      : hosted_engine_1
          Host ID                            : 1
          Image alias                        : hosted_engine
          LUN ID                             : 3600140529e0c7277b23496f8830dadd3
          Image size GB                      : 10
          iSCSI Portal IP Address            : 192.168.100.210
          iSCSI Target Name                  : iqn.2014-07.world.server:storage.target00
          GlusterFS Share Name               : hosted_engine_glusterfs
          iSCSI Portal port                  : 3260
          GlusterFS Brick Provisioning       : False
          iSCSI Portal user                  : iqn.2014-07.world.server:client
          Console type                       : vnc
          Memory size MB                     : 4096
          MAC address                        : 00:AA:AA:AA:AA:02
          Boot type                          : disk
          Number of CPUs                     : 2
          OVF archive (for disk boot)        : /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-20150907.0-1.el7.centos.ova
          Restart engine VM after engine-setup: True
          CPU Type                           : model_Nehalem
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ INFO  ] Creating Volume Group
[ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)
          The selected device is already used.
          To create a vg on this device, you must use Force.
          WARNING: This will destroy existing data on the device.
          (Force, Abort)[Abort]? Force
[ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: ("['/dev/mapper/3600140529e0c7277b23496f8830dadd3']",)
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150908194420.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination


Actual results:

Installation fails.

Additional info:

The answers file is attached. The LUN ID is unique for each installation, so you should run the setup interactively.
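
One way a CI job could still run non-interactively is to patch the freshly discovered LUN ID into the answers file before deploying. This is only a sketch: the `OVEHOSTED_STORAGE/LunID` key name is an assumption, so check the answers file generated by your own run for the exact key.

```shell
LUN_ID="3600140529e0c7277b23496f8830dadd3"   # from the discovery step above
ANSWERS="answers_file"
# stand-in answers file so the sketch is self-contained; use your real one
printf 'OVEHOSTED_STORAGE/LunID=str:PLACEHOLDER\n' > "${ANSWERS}"
# replace whatever value follows the (assumed) key with the current LUN ID
sed -i "s|^\(OVEHOSTED_STORAGE/LunID=str:\).*|\1${LUN_ID}|" "${ANSWERS}"
grep "LunID" "${ANSWERS}"
```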
Comment 1 Sandro Bonazzola 2015-09-21 06:54:27 EDT
Simone, is this a duplicate of the failure we have already seen on iSCSI due to the change in the prepareImage call? Has this already been fixed?
Comment 2 Simone Tiraboschi 2015-09-21 07:46:55 EDT
No, it's a different issue.
It happens only in the CI env, not reproducible with a real deploy.

iSCSI on the CI jobs is a bit tricky: we are mounting, in loopback, a volume created on the fly on the same host we are going to deploy on.
Comment 3 Sandro Bonazzola 2015-09-21 08:14:57 EDT
(In reply to Simone Tiraboschi from comment #2)
> No, it's a different issue.
> It happens only in the CI env, not reproducible with a real deploy.

Dropping blocker here then


> 
> iSCSI on the CI jobs is a bit tricky: we are mounting, in loopback, a volume
> created on the fly on the same host we are going to deploy on.
Comment 4 Simone Tiraboschi 2015-09-22 10:48:55 EDT
The issue is in the iSCSI target setup, which results in a read-only LUN; that is what causes the failure.
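
For context: setting `generate_node_acls=1` puts the TPG into LIO's demo mode, where exported LUNs are write-protected by default via the `demo_mode_write_protect` attribute. The original setup never cleared that attribute, so the initiator saw a read-only LUN and pvcreate failed. The corrected script below adds the relevant line (a config fragment, not a standalone command):

```shell
# LIO demo mode (generate_node_acls=1) write-protects LUNs unless this
# attribute is cleared; the default of 1 yields a read-only LUN.
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute demo_mode_write_protect=0"
```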

Sagi, could you please modify the CI jobs to do this?

# configure iSCSI target on the host
yum install -y --nogpgcheck targetcli targetd
export HE_IQN=iqn.1994-05.com.redhat:hestorage
export HE_IQN_USERID=user
export HE_IQN_PASSWORD=password
mkdir /iscsi
targetcli "/backstores/fileio/ create disk01 /iscsi/disk01.img 18G"
targetcli "/iscsi/ create ${HE_IQN}"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute authentication=1"
targetcli "/iscsi/${HE_IQN}/tpg1/ set auth userid=${HE_IQN_USERID}"
targetcli "/iscsi/${HE_IQN}/tpg1/ set auth password=${HE_IQN_PASSWORD}"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute generate_node_acls=1"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute cache_dynamic_acls=1"
targetcli "/iscsi/${HE_IQN}/tpg1/ set attribute demo_mode_write_protect=0"
targetcli "/iscsi/ set discovery_auth enable=1 userid=${HE_IQN_USERID} password=${HE_IQN_PASSWORD}"
targetcli "/iscsi/${HE_IQN}/tpg1/portals create"
targetcli "/iscsi/${HE_IQN}/tpg1/luns create /backstores/fileio/disk01"
targetcli "saveconfig"
systemctl enable target
systemctl restart target
systemctl status target

# Login to it to discover LUN ID
export HE_IQN_PORTALIP=192.168.10.2
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 --op=new
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 -n node.session.auth.authmethod -v 'CHAP' --op=update
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 -n node.session.auth.username -v ${HE_IQN_USERID} --op=update
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 -n node.session.auth.password -v ${HE_IQN_PASSWORD} --op=update
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 -l
iscsiadm -m session -R
udevadm settle --timeout=5
export LUN_ID="$(ls /dev/disk/by-id/scsi-* | cut -d'-' -f3)"
echo "LUN ID=${LUN_ID}"
iscsiadm -m node -u all
iscsiadm -m node -T iqn.1994-05.com.redhat:hestorage -I default -p ${HE_IQN_PORTALIP}:3260,1 --op=delete
Comment 5 Sagi Shnaidman 2015-10-07 08:19:29 EDT
Thanks, Simone, it works now.
