Bug 1325357

Summary: When adding an iSCSI domain from a single-host setup, LUNs are not recognized until another host is connected first
Product: [oVirt] ovirt-engine
Reporter: Bryan <bhughes>
Component: BLL.Storage
Assignee: Liron Aravot <laravot>
Status: CLOSED NOTABUG
QA Contact: Aharon Canan <acanan>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.6.3.3
CC: amureini, bhughes, bugs, laravot, tnisan, ylavi
Target Milestone: ovirt-4.0.2
Flags: ykaul: ovirt-4.0.z?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-16 08:37:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments:
Screenshot 1 (none)
screenshot 2 (none)
vdsm.log1 (none)
vdsmlog2 (none)

Description Bryan 2016-04-08 14:10:23 UTC
Created attachment 1145143 [details]
Screenshot 1

Description of problem: In our current deployment we want to add three iSCSI LUNs from each oVirt host as they check in.  Currently I am on the single hosted-engine server; it is NOT an all-in-one server.  This one server is currently serving 4 Gluster domains into oVirt.

I followed this guide:
https://fedorahosted.org/ovirt/wiki/ISCSISetup
http://www.ovirt.org/documentation/admin-guide/administration-guide/#Preparing_and_Adding_Block_Storage

I created the LUNs and target and can verify that the LUNs appear when running tgt-admin --show.  There are no issues here.

Then I go into oVirt and attempt to add the iSCSI storage domain; it discovers the target but does not find any LUNs.  I currently have no ACLs set on the LUNs, so they are wide open to the world; that is not the issue.  I tried to create a different LUN via LVM instead of a physical drive and had the same issue.  After exhausting many attempts and running the tgtadm commands many times, I decided to try from another server that is NOT part of my oVirt environment.

I went to node2 and ran the EXACT SAME tgtadm commands to set up the target and LUNs.  When I went into oVirt and ran a discovery, it was able to discover node2; node1 was still in my discovery as well, since I had not closed the iSCSI session.

When I clicked Login on server2 from the Storage GUI, it logged into server2 and was able to see the LVM LUN.  In doing this, it was now also able to see the LVM LUN on server1.

So, I added another physical device to server1 and manually closed the iSCSI session on server1.  I did a rediscovery from the GUI and the new LUN did not appear on server1.

Next I went to server2 and created another LUN for a physical drive.  I did a rediscovery, and now I see two LUNs under server1 and two LUNs under server2.

I have attached screenshots of the oVirt UI.  The LUNs have the same ID, so I am not sure whether it is actually only seeing the LUNs from server2 and treating server1 as just another path, or what is actually going on.

Either way, I am not able to add iSCSI LUNs directly from server1.


Version-Release number of selected component (if applicable):
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
ovirt-engine-appliance-3.6-20160301.1.el7.centos.noarch
vdsm-infra-4.17.23.2-0.el7.centos.noarch
vdsm-python-4.17.23.2-0.el7.centos.noarch
vdsm-gluster-4.17.23.2-0.el7.centos.noarch
vdsm-xmlrpc-4.17.23.2-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-cli-4.17.23.2-0.el7.centos.noarch
vdsm-jsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.23.2-0.el7.centos.noarch
vdsm-4.17.23.2-0.el7.centos.noarch
iscsi-initiator-utils-iscsiuio-6.2.0.873-32.el7.x86_64
fence-agents-scsi-4.0.11-27.el7_2.5.x86_64
lsscsi-0.27-3.el7.x86_64
scsi-target-utils-1.0.55-4.el7.x86_64
libiscsi-1.9.0-6.el7.x86_64
iscsi-initiator-utils-6.2.0.873-32.el7.x86_64


How reproducible: 100%


Steps to Reproduce:
1. Stand up the oVirt hosted engine on one server
2. Stand up iSCSI on server1 and serve out a LUN
3. Attempt to add the LUNs from that server in the oVirt UI

Actual results: oVirt does not discover LUNs from server1


Expected results: iSCSI LUNs appear in the UI


Additional info:

Comment 1 Bryan 2016-04-08 14:10:50 UTC
Created attachment 1145144 [details]
screenshot 2

Comment 2 Bryan 2016-04-08 14:15:58 UTC
Created attachment 1145145 [details]
vdsm.log1

Comment 3 Bryan 2016-04-08 14:16:13 UTC
Created attachment 1145146 [details]
vdsmlog2

Comment 4 Yaniv Kaul 2016-04-10 13:08:14 UTC
Bryan, I'm a bit confused with regards to the configuration:
1. You are mentioning hosted-engine with Gluster, but you are trying to add iSCSI targets - from the hosts themselves? 
2. You are using EL7, not tgt-admin and not targetcli?

Comment 5 Bryan 2016-04-11 18:41:00 UTC
(In reply to Yaniv Kaul from comment #4)
> Bryan, I'm a bit confused with regards to the configuration:
> 1. You are mentioning hosted-engine with Gluster, but you are trying to add
> iSCSI targets - from the hosts themselves? 
> 2. You are using EL7, not tgt-admin and not targetcli?

1) I am running the hosted engine itself on Gluster.  I then have 3 other HDDs on the physical machines that I want to add to oVirt to be used for HDFS.  I don't want to run HDFS on Gluster, so I wanted to use direct access to the drives.  To sum it up, my server has 8 physical drives: 2 are for the bare-metal OS, 3 are attached via Gluster, and the other 3 I want to share each as an iSCSI LUN for direct writes via HDFS.

2)  I am using tgt-admin to do the setup, here are the commands I ran:

tgtadm --lld iscsi --op new --mode target --tid 1 -T host:storage
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sde
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 -b /dev/VG00/iscsitest
tgt-admin --dump  > /etc/tgt/targets.conf
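For persistence, the dumped /etc/tgt/targets.conf produced by those commands would look roughly like the following (a sketch; the exact formatting of tgt-admin --dump output may differ):

```
<target host:storage>
    backing-store /dev/sde
    backing-store /dev/VG00/iscsitest
    initiator-address ALL
</target>
```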

Then I tried to add both to the oVirt UI and got the errors described above.

Comment 6 Bryan 2016-04-11 19:24:28 UTC
I also tried to add it as a POSIX mount point and that fails as well.  I tried POSIX after the iSCSI failures.  I created an XFS filesystem on /dev/sde1 and attempted to add that as a POSIX storage domain.

I get these errors in vdsm.log on the host:

jsonrpc.Executor/5::INFO::2016-04-11 13:00:37,163::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/dev/sde1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'xfs', u'password': '********', u'port': u''}], options=None)
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,165::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/_dev_sde1 mode: None
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,166::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-55 /usr/bin/sudo -n /usr/bin/mount -t xfs /dev/sde1 /rhev/data-center/mnt/_dev_sde1 (cwd None)
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,197::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-55 /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/_dev_sde1 (cwd None)
jsonrpc.Executor/5::ERROR::2016-04-11 13:00:37,210::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 245, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 238, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
    raise se.StorageServerAccessPermissionError(dirPath)
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/_dev_sde1'

I ran all the commands from the log manually as the vdsm user and they work with no errors.  I also tried to create /rhev/data-center/mnt/_dev_sde1 prior to creating the domain, but it just removes the directory I create.  So, I am not sure why I can't add it as POSIX either.
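For what it's worth, the failing validateDirAccess call boils down to a permission test on the mount point as the vdsm user.  A simplified, hypothetical re-implementation of that kind of check (the uid/gid values are the conventional oVirt ones, and the real vdsm code path differs):

```python
import os
import stat

# Conventional ids on an oVirt host (assumption; verify in /etc/passwd):
VDSM_UID = 36   # vdsm user
KVM_GID = 36    # kvm group

def dir_accessible(path, uid=VDSM_UID, gid=KVM_GID):
    """Approximate the check behind StorageServerAccessPermissionError:
    the mounted directory must grant read/write/execute to the vdsm
    user, either as owner, via the kvm group, or via the 'other' bits."""
    st = os.stat(path)
    if st.st_uid == uid:
        needed = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
    elif st.st_gid == gid:
        needed = stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP
    else:
        needed = stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH
    return (st.st_mode & needed) == needed
```

Mounting /dev/sde1 exposes the ownership stored on the filesystem itself, not the ownership of the freshly created mount directory, and a newly mkfs'd XFS root is owned by root:root.  The usual workaround is to mount the filesystem once manually and chown its root to vdsm:kvm (36:36) before adding the domain.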

Comment 7 Bryan 2016-04-18 13:44:36 UTC
Is there a way to mount a local drive to a VM in oVirt?

Essentially, on a physical server I want to partition a drive and mount it at, say, /test.  I then want to map /test to VM1.

I know it is possible with virsh, but wanted to see if this can be done in oVirt.

Thanks

Comment 8 Bryan 2016-04-29 15:05:58 UTC
Is there any way to get a response to comment #7?

Thanks,

Comment 9 Yaniv Lavi 2016-05-02 11:31:57 UTC
This seems like a use case that oVirt is not tested for, and while we can try to help with this, it is not considered urgent; therefore I am moving it to 4.0 for research. Can you please provide the use case and topology you are trying to use? It is not clear.

Comment 10 Bryan 2016-05-02 16:27:22 UTC
We are trying to install HDFS on top of VMs running in oVirt.  We have four physical servers to use as our stack.  Each server will have 8 hard drives.

The drive layout is this:
Drives 0-1: RAID 1 for the physical server OS
Drives 2-4: Gluster drives; these will be added to the Gluster storage domains
Drives 5-7: We wanted to map these directly to the datanode VMs, i.e. three datanodes per physical machine.  Datanode1 would get drive 5, datanode2 would get drive 6, and datanode3 would get drive 7.

I attempted to use iSCSI first to do this, but it didn't work, as seen above.  I then tried POSIX FS, but a data center can't contain both local and shared storage on the same server.

Is there a recommended way to accomplish this?  I know I could share the drives via NFS or Gluster, but wanted to go direct to increase HDFS performance.

Thanks

Comment 11 Yaniv Lavi 2016-05-23 13:15:15 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 12 Yaniv Lavi 2016-05-23 13:19:57 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 13 Liron Aravot 2016-06-06 18:42:03 UTC
(In reply to Bryan from comment #7)
> Is there a way to mount a local drive to a VM in ovirt?
> 
> Essentially on a physical server I want to partition and drive and mount it
> to say /test.  I then want to map /test to VM1.
> 
> I know with virsh it is possible but wanted to see if this can be done in
> OVirt?
> 

Hi Bryan,
it can be done by using a vdsm hook, but not from the oVirt web UI/REST API.
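For reference, such a hook boils down to editing the VM's libvirt domain XML before start.  A minimal, hypothetical sketch of the edit a before_vm_start hook would perform (a real hook would obtain and persist the XML through vdsm's hooking module, and the device paths here are made up):

```python
import xml.dom.minidom

def add_host_disk(domxml_str, source_dev="/dev/sde", target_dev="vdb"):
    """Append a host block device as a virtio disk to a libvirt
    domain XML string and return the modified XML."""
    dom = xml.dom.minidom.parseString(domxml_str)
    devices = dom.getElementsByTagName("devices")[0]
    disk = dom.createElement("disk")
    disk.setAttribute("type", "block")
    disk.setAttribute("device", "disk")
    source = dom.createElement("source")
    source.setAttribute("dev", source_dev)
    target = dom.createElement("target")
    target.setAttribute("dev", target_dev)
    target.setAttribute("bus", "virtio")
    disk.appendChild(source)
    disk.appendChild(target)
    devices.appendChild(disk)
    return dom.toxml()
```

In an actual vdsm hook the XML would come from hooking.read_domxml() and be written back with hooking.write_domxml(); the snippet above only illustrates the disk element being added.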


(In reply to Bryan from comment #10)
> 
> I attempted to use iSCSI first to do this but it didn't work as seen above. 
> I then tried the POSIX FS but a Datacenter can't contain local and shared
> storage on the same server.
> 
> Is there a recommended way to accomplish this?  I know I could share the
> drives via NFS of Gluster but wanted to do direct to increase performance in
> HDFS.
> 
> Thanks

Currently, shared storage can't be mixed with local storage in the same data center, as far as I know.
Adding needinfo? on Allon/Tal to update whether there are any plans in that area.

thanks,
Liron.

Comment 14 Tal Nisan 2016-06-07 09:43:43 UTC
We have an RFE on the subject; it is currently not on the roadmap for the next version. You can follow bug 1134318 for future progress on the issue.

Bryan, did you get all necessary info?

Comment 15 Liron Aravot 2016-06-16 08:37:53 UTC
Closing, as it seems there's no standing issue here.
Bryan, please reopen if still relevant.

thanks!

Comment 16 Bryan 2016-07-18 11:20:49 UTC
Thanks, I will follow bug 1134318, as that is the same issue I am waiting on.