Bug 1577529
Summary: [RFE] Support multiple hosts in posix storage domain path for cephfs

Product: [oVirt] vdsm
Component: General
Version: 4.20.23
Hardware: x86_64
OS: Unspecified
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Keywords: FutureFeature
Reporter: Sven Vogel <sven.vogel>
Assignee: bugs <bugs>
QA Contact: Avihai <aefrat>
CC: 754267513, acanan, amureini, bailey, bugs, cjg9411, ebenahar, ehdeec, lbrown, matt.kimberley, nsoffer, stirabos, sven.vogel, tnisan
Target Milestone: ---
Target Release: ---
Flags: rule-engine: planning_ack? rule-engine: devel_ack? rule-engine: testing_ack?
Doc Type: If docs needed, set a value
Story Points: ---
Clone Of: 1305529
Type: Bug
Regression: ---
oVirt Team: Storage
Cloudforms Team: ---
Last Closed: 2020-10-08 10:19:27 UTC
Description (Sven Vogel, 2018-05-12 18:55:48 UTC)

I tried other things. If I use only a single host path (host: host1.example.de, filesystem: ceph, fs options: e.g. noatime), the mount reports success but the domain will not be added.

/var/log/vdsm/supervdsm.log:

MainProcess|jsonrpc/7::DEBUG::2018-05-13 16:27:12,150::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper) call mount with (u'host1.example.de:6789:/', u'/rhev/data-center/mnt/host1.example.de:6789:_') {'vfstype': u'ceph', 'mntOpts': u'noatime', 'cgroup': None}
MainProcess|jsonrpc/7::DEBUG::2018-05-13 16:27:12,150::commands::65::root::(execCmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/mount -t ceph -o noatime host1.example.de:6789:/ /rhev/data-center/mnt/host1.example.de:6789:_ (cwd None)
MainProcess|jsonrpc/7::DEBUG::2018-05-13 16:27:12,172::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/7::DEBUG::2018-05-13 16:27:12,172::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) return mount with None

In the oVirt web UI I get an error like: "The error message for connection host1.example.de:6789:/ returned by VDSM was: General Exception"

The mount nevertheless seems to be created:

192.168.102.90:6789:/ 442G 792M 441G 1% /rhev/data-center/mnt/host1.example.de:6789:_

Normally a single host does not seem like a good idea if we use Ceph. The other problem is that I don't know why the mount gets created anyway.

Thanks,
Sven

Tal Nisan: Idan, you've handled this issue for 4.0, can you have a look please?

Sven Vogel: Hi Tal, I didn't try oVirt version 4.0. I tried before and now with 4.2.2 and 4.2.3. Thanks, Sven

(In reply to Tal Nisan from comment #2)
> Idan, you've handled this issue for 4.0, can you have a look please?

Sure. Sven, can you please attach the full vdsm, supervdsm and engine logs?

Hi Sven,
From what I know, we don't support multiple hosts in the "Path" field. The right way to use a single remote server is "server:port:/path", where the "port:" part is not mandatory, and the "server" part can be a DNS name, an IPv4 address, or an IPv6 address in quoted form.
Nir, any idea if we can work around this limitation and use multiple hosts in this case?

Idan, I don't know about supporting multiple hosts. Maybe the mount command should use special mount options.

Sven, we must have vdsm and supervdsm logs.

Elad, do you have a cephfs system for testing? Do we test cephfs using a posix storage domain?

Sven, can you mount ceph using multiple host:port pairs in the path from the shell? If you can, we need an example mount command.

Currently we support:

server:port:/path
server:port:/

According to the ceph docs (http://docs.ceph.com/docs/cuttlefish/man/8/mount.ceph/) we need to support also:

server1,server2,...:/
server1,server2,...:/path
server1:port,server2:port,...:/
server1:port,server2:port,...:/path

We can allow this format when using vfstype=ceph. Using multiple hosts allows mounting even if one of the hosts is down.

Next steps:
- get a cephfs system for testing first
- change the current code to allow server:port only when vfstype=ceph
- when vfstype=ceph, support multiple host[:port] entries separated by commas

Thanks, Nir. Tal, can you please target this bug per Nir's comment?

Hi Idan, yes, I saw that you only support a simple mount, but for future HA and usage of Ceph it would be good to support multiple mount points.

If I use one mount point I get an error like below:
"The error message for connection host1.example.de:6789:/ returned by VDSM was: General Exception"

I have added the vdsm.log and supervdsm.log.

Greets, Sven

Created attachment 1436784 [details]
vdsm.log
Created attachment 1436785 [details]
supervdsm.log
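The mount-source formats Nir proposes above could be accepted with a small parser. The sketch below is purely illustrative (it is not vdsm code; `parse_ceph_spec` is a hypothetical helper name) and assumes a numeric port and a path that always starts with "/":

```python
import re

# Hypothetical validator for the cephfs mount-source formats proposed above.
# Accepts: server[:port][,server[:port]...]:/[path]
# "server" may be a DNS name, an IPv4 address, or a bracketed IPv6 address.
_HOST = r"(?:\[[0-9a-fA-F:]+\]|[^,:\s]+)"   # bracketed IPv6, or name/IPv4
_HOST_PORT = rf"{_HOST}(?::\d+)?"           # host with an optional :port

def parse_ceph_spec(spec):
    """Split 'mon1:6789,mon2:6789:/path' into (['mon1:6789', 'mon2:6789'], '/path').

    Raises ValueError if the spec does not match the expected shape.
    """
    match = re.match(rf"^({_HOST_PORT}(?:,{_HOST_PORT})*):(/.*)$", spec)
    if match is None:
        raise ValueError("not a valid cephfs mount source: %r" % spec)
    hosts, path = match.group(1), match.group(2)
    return hosts.split(","), path
```

With this shape, the single-host form already supported (server:port:/) and the multi-monitor forms from the ceph docs both parse, e.g. `parse_ceph_spec("mon1:6789,mon2:6789,mon3:/volume")` yields the monitor list and the path separately.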
(In reply to Nir Soffer from comment #9)
> Currently we support:
>
> server:port:/path
> server:port:/
>
> According to ceph docs http://docs.ceph.com/docs/cuttlefish/man/8/mount.ceph/
> we need to support also:
>
> server1,server2,...:/
> server1,server2,...:/path
> server1:port,server2:port,...:/
> server1:port,server2:port,...:/path
>
> We can allow this format when using vfstype=ceph.
>
> Using multiple hosts allows mounting even if one of the hosts is down.
>
> Next steps:
> - get cephfs system for testing first
> - change current code to allow server:port only when vfstype=ceph
> - when vfstype=ceph, support multiple host[:port] separated by comma

This sounds good, but it does not explain why I get an error with server:port:/ (vfstype ceph, options noatime) :)

(In reply to Sven Vogel from comment #14)
> this sounds good but it will not clear the problem why i get a error with
>
> server:port:/

Current code supports server:port:/. If this does not work please file another bug.

*** Bug 1557827 has been marked as a duplicate of this bug. ***

Hi, we have the same issue with not being able to mount multiple Ceph monitors in the target path, running under oVirt 4.2.7.5-1.el7.

Using the format host:port,host:port,host:port:/ oVirt fails to parse the mount point and fails. Using a single monitor, host:port:/, oVirt successfully mounts the target. For both attempts, VFS type "ceph" and mount options "name=admin,secret=<secret>" were used.

From an HA perspective, as mentioned previously: upon losing the single mounted Ceph monitor, all hosts continue to function with the mounted Ceph storage domain until a host is rebooted (in the absence of the Ceph monitor, the rebooted host cannot mount the Ceph storage domain, which is an issue from an availability point of view).

I can happily provide logs if they would be useful.

We are also seeing this. Again, if logs are needed please feel free to reach out.
I just encountered this issue while attempting to import a disk, and I was able to work around it by changing `FIELD_SEP = ","` to `FIELD_SEP = ";"` at line 81 of vdsm/storage/task.py on my SPM and restarting vdsm and supervdsm while in maintenance mode. Based on my brief reading of the code, the only impact this appears to have is changing the printed output of ParamList when casting it to a string. That said, I'm not familiar enough with the VDSM codebase to know whether this has unexpected side effects.

Just poking this to see if any updates have been made in this area ahead of the oVirt 4.4.0 / VDSM 4.2.x release?

This request is not currently committed to 4.4.z; moving it to 4.5.

Our solution for Cinder support is via Cinderlib: you can add a Cinder domain as "Managed Block Storage".

Just to add a comment for future reference on a possible workaround: when dealing with CephFS specifically, the mount can be done with a single Ceph monitor IP address and path, and the underlying kernel mount will resolve all monitors and work as expected. The issue with this is that if the specified monitor is unavailable when the mount is attempted, the mount will fail for the duration of that monitor's unavailability.
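The FIELD_SEP workaround above points at a real ambiguity: a comma-separated monitor list collides with any serialization that also uses a comma as its field separator. The following is a toy illustration of that collision (not vdsm's actual ParamList code; `join_fields`/`split_fields` are hypothetical stand-ins):

```python
# Toy illustration (not vdsm code) of why a comma field separator is
# ambiguous once a field itself contains commas, as a cephfs multi-monitor
# path does.
def join_fields(fields, sep):
    return sep.join(fields)

def split_fields(blob, sep):
    return blob.split(sep)

path = "mon1:6789,mon2:6789,mon3:6789:/"

# With "," as the separator, splitting shreds the monitor list: the path
# field comes back as three bogus fields instead of one.
blob_comma = join_fields([path, "vfstype=ceph"], ",")
assert split_fields(blob_comma, ",") == [
    "mon1:6789", "mon2:6789", "mon3:6789:/", "vfstype=ceph"]

# With ";" as the separator the round trip is lossless.
blob_semi = join_fields([path, "vfstype=ceph"], ";")
assert split_fields(blob_semi, ";") == [path, "vfstype=ceph"]
```

This is only the string-round-trip half of the story; whether changing FIELD_SEP is safe elsewhere in vdsm is exactly the open question the commenter raises.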