Description of problem:
-----------------------

According to https://bugzilla.redhat.com/show_bug.cgi?id=1095615 we support CephFS as a POSIX-compliant FS for creating Storage Domains in RHV. However, on a thin-node RHV-H there is no ceph-common package available, so the hypervisor cannot mount the POSIX CephFS on its side.

When trying to create the domain in the Manager, vdsm.log shows:

~~~
2018-08-10 14:18:07,364-0300 ERROR (jsonrpc/7) [storage.HSM] Could not connect to storageServer (hsm:2398)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2395, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 179, in connect
    six.reraise(t, v, tb)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 171, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
MountError: (32, ';mount: wrong fs type, bad option, bad superblock on 192.168.0.2:/my/ceph/mount/point,\n missing codepage or helper program, or other error\n\n In some cases useful info is found in syslog - try\n dmesg | tail or so.\n')
~~~

Trying to mount it manually with 'mount -t ceph' gives the same error:

~~~
[root@rhvh42 ~]# mount -t ceph 192.168.0.2:6789:/ /mnt/ -o name=cephfs,secretfile=/etc/ceph/user.secret
mount: wrong fs type, bad option, bad superblock on 192.168.0.2:6789:/,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
~~~

This means that we don't have the bits needed to mount it on the system. According to https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_file_system_guide/#mounting-the-ceph-file-system-as-a-kernel-client we need to enable 'rhel-7-server-rhceph-3-tools-rpms' and, I presume, install ceph-common from that repo rather than the one provided by the base OS repo, since the latter is too old. More on that in https://bugzilla.redhat.com/show_bug.cgi?id=1421783

On a RHEL host the manual mount works after installing the ceph-common package from the 'rhel-7-server-rhceph-3-tools-rpms' repo:

~~~
yum install ceph-common

Dependencies Resolved

==========================================================================================
 Package              Arch     Version            Repository                         Size
==========================================================================================
Installing:
 ceph-common          x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  15 M
Installing for dependencies:
 leveldb              x86_64   1.12.0-7.el7cp     rhel-7-server-rhceph-3-tools-rpms  161 k
 libbabeltrace        x86_64   1.2.4-4.el7cp      rhel-7-server-rhceph-3-tools-rpms  147 k
 libcephfs2           x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  462 k
 libradosstriper1     x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  360 k
 librgw2              x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  1.7 M
 lttng-ust            x86_64   2.4.1-4.el7cp      rhel-7-server-rhceph-3-tools-rpms  176 k
 python-cephfs        x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  112 k
 python-prettytable   noarch   0.7.2-3.el7        rhel-7-server-rpms                 37 k
 python-rados         x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  206 k
 python-rbd           x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  136 k
 python-rgw           x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  103 k
 userspace-rcu        x86_64   0.7.16-1.el7cp     rhel-7-server-rhceph-3-tools-rpms  73 k
Updating for dependencies:
 librados2            x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  2.9 M
 librbd1              x86_64   2:12.2.4-42.el7cp  rhel-7-server-rhceph-3-tools-rpms  1.1 M

Transaction Summary
==========================================================================================
Install  1 Package  (+12 Dependent packages)
Upgrade             (  2 Dependent packages)

Total download size: 23 M
~~~

As a workaround for RHV-H, we might be able to configure a local repository with ceph-common and its dependencies so that customers can install the bits. Thoughts?
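The RHEL-host workaround described above can be sketched as the following commands (a sketch only, assuming a RHEL 7 host entitled to the Ceph 3 tools repo; the monitor address, secret file, and mount point are the ones from the report, not verified values):

```shell
# Enable the Ceph tools repo and install the CephFS kernel-client bits.
# Requires root and an entitled RHEL 7 system.
subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
yum -y install ceph-common   # provides mount.ceph, the mount helper missing on RHV-H

# Retry the manual mount that failed in the report:
mkdir -p /mnt/cephfs
mount -t ceph 192.168.0.2:6789:/ /mnt/cephfs \
    -o name=cephfs,secretfile=/etc/ceph/user.secret
```

On RHV-H this fails at the yum step, which is the gap this bug is about.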
If cephfs is used as POSIX, wouldn't this go through the normal POSIX support in RHV and not require cephfs packages? My understanding of that RFE was CephFS as local storage on hosts through the POSIX API, not CephFS interaction itself or CephFS tooling.

Either way, moving this to ovirt-host as a common dependency tracker for RHV.

Yaniv - do we actually want to support CephFS itself on RHV?
(In reply to Ryan Barry from comment #1)
> If cephfs is used as POSIX, wouldn't this go through the normal POSIX
> support in RHV and not require cephfs packages? My understanding of that RFE
> was CephFS as local storage on hosts through the POSIX API, not CephFS
> interaction itself, or CephFS tooling.
>
> Either way, moving this to ovirt-host as a common dependency tracker for RHV.
>
> Yaniv - do we actually want to support CephFS itself on RHV?

The Red Hat Storage recommendation is not to use CephFS with RHV currently. I would consult with them about this. If they approve, I assume the customer would need to add the RPMs post-install to the image.

The use case we tested is CephFS as a shared POSIX domain.

Yaniv K, can you provide your input?
(In reply to Yaniv Lavi from comment #2)
> (In reply to Ryan Barry from comment #1)
> > If cephfs is used as POSIX, wouldn't this go through the normal POSIX
> > support in RHV and not require cephfs packages? My understanding of that RFE
> > was CephFS as local storage on hosts through the POSIX API, not CephFS
> > interaction itself, or CephFS tooling.
> >
> > Either way, moving this to ovirt-host as a common dependency tracker for RHV.
> >
> > Yaniv - do we actually want to support CephFS itself on RHV?
>
> The Red Hat Storage recommendation is not to use CephFS with RHV currently.
> I would consult with them about this.
> If they approve, I assume the customer would need to add the RPMs
> post-install to the image.
>
> The use case we tested is CephFS as a shared POSIX domain.

What are the steps to get this working, then? It's not very clear from the docs in https://red.ht/2OxNe9w & https://red.ht/2B6Ge1D

In the second link, under step 8, we should enter the VFS Type, which in this case would be ceph; this would not work if we don't have the bits to mount it (ceph-common).

> Yaniv K, can you provide your input?
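For reference, the POSIX-compliant FS domain fields in step 8 would presumably map to the failing manual mount roughly like this (a sketch of a configuration fragment, using the address and options from the report, not verified values):

```
Path:           192.168.0.2:6789:/
VFS Type:       ceph
Mount Options:  name=cephfs,secretfile=/etc/ceph/user.secret
```

vdsm then issues the equivalent of 'mount -t ceph 192.168.0.2:6789:/ <mountpoint> -o name=cephfs,secretfile=/etc/ceph/user.secret', which is exactly the command that fails on RHV-H without ceph-common installed.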
After discussing how to consume Ceph iSCSI and Ceph NFS or CephFS as a POSIX FS, we are not going to include ceph-common and its dependencies in RHV-H. Closing WONTFIX.