Description of problem:

We set up a hosted-engine evaluation setup with three CentOS 7.5 hypervisor hosts and a GlusterFS volume, replicated over one brick per host, as the hosted_storage domain for the engine VM. All other domains (data, iso, export) were created as managed GlusterFS storage domains via the Engine Admin UI, each replicated over one brick per host. Since we deployed the eval environment based on ovirt-hosted-engine-setup-2.2.20-1.el7.centos.noarch, ovirt-engine-appliance-4.2-20180504.1.el7.centos.noarch and vdsm-gluster-4.20.27.1-1.el7.centos.x86_64, supervdsm reported AttributeError exceptions related to "glusterVdoVolumeList". Today we updated the engine and all hypervisor hosts to ovirt-4.2.4.x. Since the update, the supervdsm logs show INTERNAL SERVER ERROR entries related to glusterVdoVolumeList instead of the AttributeError exceptions.

Version-Release number of selected component (if applicable):

On the oVirt engine VM:
ovirt-engine-4.2.4.5-1.el7.noarch

On hypervisor host test-ovirt-1:
# rpm -qa | grep -i -e ovirt -e vdsm | sort
cockpit-machines-ovirt-169-1.el7.noarch
cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
ovirt-engine-appliance-4.2-20180626.1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-host-4.2.3-1.el7.x86_64
ovirt-host-dependencies-4.2.3-1.el7.x86_64
ovirt-host-deploy-1.7.4-1.el7.noarch
ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
ovirt-release42-4.2.4-1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
vdsm-4.20.32-1.el7.x86_64
vdsm-api-4.20.32-1.el7.noarch
vdsm-client-4.20.32-1.el7.noarch
vdsm-common-4.20.32-1.el7.noarch
vdsm-gluster-4.20.32-1.el7.x86_64
vdsm-hook-ethtool-options-4.20.32-1.el7.noarch
vdsm-hook-fcoe-4.20.32-1.el7.noarch
vdsm-hook-openstacknet-4.20.32-1.el7.noarch
vdsm-hook-vfio-mdev-4.20.32-1.el7.noarch
vdsm-hook-vhostmd-4.20.32-1.el7.noarch
vdsm-hook-vmfex-dev-4.20.32-1.el7.noarch
vdsm-http-4.20.32-1.el7.noarch
vdsm-jsonrpc-4.20.32-1.el7.noarch
vdsm-network-4.20.32-1.el7.x86_64
vdsm-python-4.20.32-1.el7.noarch
vdsm-yajsonrpc-4.20.32-1.el7.noarch

How reproducible:
In each of our hosted-engine-on-Gluster setups; 4 times so far.

Steps to Reproduce:
1. Create a hosted-engine setup with three hypervisor hosts based on CentOS 7.5 and the current (2018-06-26) ovirt-4.2 repo, with the hosted engine VM deployed to a Gluster volume replicated over all three hosts.
2. Add additional replicated Gluster volumes as managed storage domains for data, iso and export via the Engine Admin UI.
3. Watch the supervdsm logs or the systemd journal.

Actual results:
Python exceptions related to the glusterVdoVolumeList call initiated by the engine.

Expected results:
Successful calls to glusterVdoVolumeList by the engine.

Additional info:
Current supervdsm exceptions after the update to oVirt 4.2.4.x, from journalctl:

Jun 26 23:22:12 test-ovirt-1 vdsm[3262]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 91, in vdoVolumeList
    return self._gluster.vdoVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 818, in vdoVolumeList
    status = self.svdsmProxy.glusterVdoVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVdoVolumeList
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 2] No such file or directory: vdo
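The "[Errno 2] No such file or directory: vdo" at the bottom suggests that supervdsm tried to execute a `vdo` command-line tool that is not installed on the host, and the resulting OSError was re-raised on the vdsm side by the multiprocessing proxy. A minimal sketch of that failure mode (the helper name and invoked subcommand are illustrative only, not vdsm's actual code):

```python
import errno
import subprocess


def vdo_volume_list(vdo_cmd="vdo"):
    """Illustrative stand-in for supervdsm's glusterVdoVolumeList:
    it shells out to the vdo CLI. If the binary is absent, the exec
    fails with OSError [Errno 2], as in the traceback above."""
    try:
        return subprocess.check_output([vdo_cmd, "status"])
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            # This is the "[Errno 2] No such file or directory" case.
            raise
        raise
```

Calling the sketch on a host without the binary raises the same OSError that supervdsm logs.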
I've seen it too in O-S-T on master.
Same issue here. Found it after updating from oVirt 4.1.9 to oVirt 4.2.3 on the latest CentOS; it is still not fixed in oVirt 4.2.4. I guess the newly introduced dependency on vdo is missing somewhere in a vdsm RPM.
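If a missing vdo dependency is indeed the cause, checking for the CLI on each host should confirm it. A quick sketch; the package names `vdo` and `kmod-kvdo` are the usual CentOS 7.5 names and are an assumption here, not taken from this report:

```shell
# Check whether the vdo CLI that supervdsm execs is on the PATH.
# Package names below are assumed CentOS 7.5 names, verify for your release.
if command -v vdo >/dev/null 2>&1; then
    echo "vdo present: $(command -v vdo)"
else
    echo "vdo missing - candidate fix: yum install vdo kmod-kvdo"
fi
```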
Denis, will any functionality be affected by this error, or can I push this out?
Thin device calculations will not work, but everything else should be fine. Feel free to close it.
Closing as per comment 5.