| Summary: | When attempting to add an iSCSI domain from a single-host setup, it won't recognize LUNs until another host is connected first |
|---|---|
| Product: | [oVirt] ovirt-engine |
| Component: | BLL.Storage |
| Status: | CLOSED NOTABUG |
| Severity: | medium |
| Priority: | medium |
| Version: | 3.6.3.3 |
| Target Milestone: | ovirt-4.0.2 |
| Target Release: | --- |
| Hardware: | x86_64 |
| OS: | Linux |
| Reporter: | Bryan <bhughes> |
| Assignee: | Liron Aravot <laravot> |
| QA Contact: | Aharon Canan <acanan> |
| CC: | amureini, bhughes, bugs, laravot, tnisan, ylavi |
| Flags: | ykaul: ovirt-4.0.z? rule-engine: planning_ack? rule-engine: devel_ack? rule-engine: testing_ack? |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Last Closed: | 2016-06-16 08:37:53 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| oVirt Team: | Storage |
| Cloudforms Team: | --- |
Description
Bryan 2016-04-08 14:10:23 UTC

Created attachment 1145144 [details]: screenshot 2
Created attachment 1145145 [details]: vdsm.log1
Created attachment 1145146 [details]: vdsmlog2
Bryan, I'm a bit confused with regards to the configuration:
1. You are mentioning hosted-engine with Gluster, but you are trying to add iSCSI targets - from the hosts themselves?
2. You are using EL7, not tgt-admin and not targetcli?

(In reply to Yaniv Kaul from comment #4)
> Bryan, I'm a bit confused with regards to the configuration:
> 1. You are mentioning hosted-engine with Gluster, but you are trying to add
> iSCSI targets - from the hosts themselves?
> 2. You are using EL7, not tgt-admin and not targetcli?

1) I am running the hosted engine itself on Gluster. I then have 3 other HDDs on the physical machines that I want to add to oVirt in order to be used for HDFS. I don't want to run HDFS on Gluster, so I wanted to use direct access to the drives. To sum it up, my server has 8 physical drives: 2 are for the bare-metal OS, 3 are attached via Gluster, and the other 3 I want to share, each as an iSCSI LUN, for direct writes via HDFS.

2) I am using tgt-admin to do the setup; here are the commands I ran:

    tgtadm --lld iscsi --op new --mode target --tid 1 -T host:storage
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sde
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 -b /dev/VG00/iscsitest
    tgt-admin --dump > /etc/tgt/targets.conf

Then I tried to add both to the oVirt UI and got the errors from above. I also tried to add it as a POSIX mount point and that fails as well.

I tried with POSIX after the iSCSI failures. I created an xfs filesystem on /dev/sde1 and attempted to add that as a POSIX storage domain.
I get these errors in vdsm.log on the host:
jsonrpc.Executor/5::INFO::2016-04-11 13:00:37,163::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/dev/sde1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'xfs', u'password': '********', u'port': u''}], options=None)
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,165::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/_dev_sde1 mode: None
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,166::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-55 /usr/bin/sudo -n /usr/bin/mount -t xfs /dev/sde1 /rhev/data-center/mnt/_dev_sde1 (cwd None)
jsonrpc.Executor/5::DEBUG::2016-04-11 13:00:37,197::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-55 /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/_dev_sde1 (cwd None)
jsonrpc.Executor/5::ERROR::2016-04-11 13:00:37,210::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 245, in connect
six.reraise(t, v, tb)
File "/usr/share/vdsm/storage/storageServer.py", line 238, in connect
self.getMountObj().getRecord().fs_file)
File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
raise se.StorageServerAccessPermissionError(dirPath)
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/_dev_sde1'
I ran all the commands from the log manually as the vdsm user and they work with no errors. I also tried creating /rhev/data-center/mnt/_dev_sde1 prior to creating the domain, but it just removes the directory I create. So I am not sure why I can't add it as POSIX either.
Is there a way to mount a local drive to a VM in oVirt?

Essentially, on a physical server I want to partition a drive and mount it at, say, /test. I then want to map /test to VM1.

I know with virsh it is possible, but wanted to see if this can be done in oVirt? Thanks

Is there any way to get a response to comment #7? Thanks,

This seems like a use case that oVirt is not tested for, and while we can try to help with this, it is not considered urgent. Therefore moving to 4.0 for research.

Can you please provide the use case and topology you are trying to use? It is not clear.

We are trying to install HDFS on top of VMs running in oVirt. We have four physical servers to use as our stack. Each server will consist of 8 hard drives. The drive layout is this:

Drives 0-1: RAID 1 for the physical server OS
Drives 2-4: Gluster drives; these will be added to the Gluster storage domains
Drives 5-7: We wanted to map these directly to the datanode VMs, i.e. there would be three datanodes per physical machine. Datanode1 would get drive 5, datanode2 would get drive 6, and datanode3 would get drive 7.

I attempted to use iSCSI first to do this, but it didn't work as seen above. I then tried the POSIX FS, but a data center can't contain local and shared storage on the same server.

Is there a recommended way to accomplish this? I know I could share the drives via NFS or Gluster, but wanted to go direct to increase performance in HDFS. Thanks

oVirt 4.0 beta has been released, moving to RC milestone.

(In reply to Bryan from comment #7)
> Is there a way to mount a local drive to a VM in ovirt?
>
> Essentially on a physical server I want to partition a drive and mount it
> to say /test. I then want to map /test to VM1.
>
> I know with virsh it is possible but wanted to see if this can be done in
> OVirt?

Hi Bryan, it can be done by using a vdsm hook, but not from the oVirt web UI/REST API.
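The vdsm hook route mentioned above amounts to editing the libvirt domain XML before the VM starts; a real hook would be a script under the vdsm hooks directory that reads and writes the domain XML via vdsm's hooking module. The XML transformation itself could be sketched as follows, with the device names and paths purely illustrative:

```python
import xml.dom.minidom


def add_local_disk(domxml, source_dev, target_dev):
    # Append a raw block-device <disk> element to a libvirt domain XML
    # string -- the kind of edit a before_vm_start vdsm hook would make.
    dom = xml.dom.minidom.parseString(domxml)
    devices = dom.getElementsByTagName('devices')[0]

    disk = dom.createElement('disk')
    disk.setAttribute('type', 'block')
    disk.setAttribute('device', 'disk')

    source = dom.createElement('source')
    source.setAttribute('dev', source_dev)   # e.g. /dev/sde on the host
    disk.appendChild(source)

    target = dom.createElement('target')
    target.setAttribute('dev', target_dev)   # e.g. vdb inside the guest
    target.setAttribute('bus', 'virtio')
    disk.appendChild(target)

    devices.appendChild(disk)
    return dom.toxml()
```

In an actual hook the input and output would come from the hooking module rather than a plain string, and the host device to attach would typically be passed in via a custom VM property; this snippet only shows the shape of the edit.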
(In reply to Bryan from comment #10)
> I attempted to use iSCSI first to do this but it didn't work as seen above.
> I then tried the POSIX FS but a Datacenter can't contain local and shared
> storage on the same server.
>
> Is there a recommended way to accomplish this? I know I could share the
> drives via NFS or Gluster but wanted to go direct to increase performance in
> HDFS.
>
> Thanks

Currently shared storage can't be mixed in the same data center with local storage, as far as I know. Adding needinfo? on Allon/Tal to update if there are any plans in that area.

thanks, Liron.

We have an RFE on the subject, currently not on the road map for the next version; you can follow bug 1134318 for future progress on the issue.

Bryan, did you get all the necessary info? Closing, as it seems there's no standing issue here. Bryan, please reopen if still relevant. Thanks!

Thanks, I will follow bug 1134318, as that is the same issue I am waiting on.