Description of problem:
-----------------------
Created an ISO domain backed by a glusterfs volume, and tried to copy an ISO over SSH by passing '--ssh-user=root' to the ovirt-iso-uploader tool. The tool referred to the wrong mount location and failed to calculate the total space of the volume, which caused it to error out.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHV-4.1.1-6
ovirt-iso-uploader-4.0.2-1.el7ev.noarch

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create an ISO domain backed by a glusterfs volume
2. Try copying an ISO to the domain using SSH

Actual results:
---------------
ovirt-iso-uploader fails to upload the image

Expected results:
-----------------
The ISO should be uploaded to the domain successfully

Additional info:
----------------
I think this issue affects any storage type (not only gluster), as ovirt-iso-uploader refers to the wrong mount location when asked to copy using SSH.
Is there a workaround for this?
(In reply to Sahina Bose from comment #3)
> Is there a workaround for this?

Not one that I know of.
Does it work without SSH?
(In reply to Yaniv Kaul from comment #5)
> Does it work without SSH?

Hi Yaniv,

The default protocol used by ovirt-iso-uploader is NFS. Gluster volumes, starting from Gluster 3.8, have the NFS access mechanism disabled by default in favour of NFS-Ganesha. So an additional step is required for the user to enable the NFS server on the gluster volume:

# gluster volume set <vol> nfs.disable off

Once the NFS server is enabled, ovirt-iso-uploader works as expected.
Is the environment where this failure happened still available? If yes, may I have a look? If not, please provide detailed instructions on how to reproduce. I can infer this is an RHCI deployment and you somehow provided an ISO domain on gluster, but I'd like to have a clear way to reproduce this.
In particular, I can't add a glusterfs ISO domain to engine 4.1.3 without enabling NFS, since I can't find the volumes listed in managed gluster volumes.
(In reply to Sandro Bonazzola from comment #8)
> In particular, I can't add glusterfs iso domain to engine 4.1.3 if not
> enabling nfs since I can't find the volumes listed in managed gluster
> volumes.

Never mind, I added it as an unmanaged volume.
I managed to reproduce. The issue is that when the native glusterfs domain feature[1] was introduced, it was not completed.

iso uploader uses the SDK to get the host name and remote path for the given domain, and the query for NFS returns the host and directory correctly. For gluster it returns the host and the glusterfs logical volume, which iso uploader then treats as a physical path.

The method to be changed is get_host_and_path_from_ISO_domain; it requires some SDK magic for retrieving the path. I assume the ssh method can't be used at all, since the volume can be a replica 3 volume used as an ISO domain, and writing a file into one brick over ssh without going through the replica doesn't seem to be a good idea.

The owners of the native glusterfs domain feature should complete it by adding a glusterfs target. For the ssh option, it should just fail after detecting that the destination domain is glusterfs based.

Sahina, can someone on the gluster team take this one and open an RFE for adding glusterfs native support to iso uploader? Please note that image uploader is affected as well.

[1] http://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/
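The decision the uploader effectively has to make can be sketched roughly like this. This is a hypothetical simplification: the real get_host_and_path_from_ISO_domain queries the oVirt SDK, while here a plain dict stands in for the storage-domain object, and the function name and error message are illustrative only.

```python
def get_host_and_path(storage):
    """Return (host, remote_path) for an ISO storage domain.

    For NFS the engine reports a real exported directory (e.g.
    '/exports/iso'), so SSH/SCP to host:path can work.  For glusterfs
    it reports the *logical volume name* (e.g. '/ISO_Volume'), which
    is not a filesystem path on the server -- treating it as one is
    the bug described in this report.
    """
    host = storage["address"]
    path = storage["path"]
    if storage["type"] == "glusterfs":
        # 'path' is a gluster volume name, not a directory on 'host';
        # scp'ing into it fails with "No such file or directory".
        raise ValueError(
            "SSH upload to a glusterfs-backed ISO domain is not "
            "supported; '%s' is a volume name, not a path" % path)
    return host, path
```

This matches the suggestion above: rather than scp'ing into a bogus path, the ssh path should fail early once the domain is detected to be glusterfs based.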
(In reply to Sandro Bonazzola from comment #10)
> I managed to reproduce. Issue is that when native glusterfs domain
> feature[1] has been introduced it has not been completed.
>
> iso uploader uses SDK for getting the host name and remote path for the
> given domain and the query for NFS returns host and directory correctly.
> For gluster it provides host and glusterfs logic volume while it's
> considered a physical path by iso uploader.
>
> The method to be changed is get_host_and_path_from_ISO_domain, it requires
> some SDK magic for retrieving the path. I assume that ssh method can't be
> used at all since the volume can be a replica 3 volume used as iso domain
> and writing with ssh a file in one brick without passing through the replica
> doesn't seems to be a good idea.

Anybody could 'scp' the file onto the glusterfs fuse mount, and it would be written to the replica set (the set of 3 bricks) synchronously. It's **not** required for the application to write the image directly onto a brick.
Sandro, I've logged Bug 1462644 for native glusterfs support to upload images.

For this bug, are you saying that in the get_host_and_path_from_ISO_domain method the path should have been returned as "/rhev/data-center/mnt/glusterSD/10.70.36.73:_ISO__Volume" rather than "/ISO_Volume"?
(In reply to Sahina Bose from comment #12)
> Sandro, I've logged Bug 1462644 for native glusterfs support to upload
> images.
>
> For this bug, are you saying in get_host_and_path_from_ISO_domain method the
> path should have been returned as
> "/rhev/data-center/mnt/glusterSD/10.70.36.73:_ISO__Volume" rather than
> "/ISO_Volume"

No, that wouldn't have worked either. iso uploader needs the physical path at the original location, not the path mounted on the hypervisors, and I guess there's no way to get that with replicated gluster storage. I think iso uploader should just output an error saying that scp to a gluster volume is not possible and that glusterfs native support should be used when available.
*** Bug 1462644 has been marked as a duplicate of this bug. ***
ovirt-iso-uploader-4.1.0-0.0.master.20170801125233.git9a2e41d.el7.centos.noarch
glusterfs-libs-3.12.1-2.el7.x86_64

Wherever remote_path is constructed from iso_domain_data, it apparently does not take the brick path of the gluster volume into account:

# ovirt-iso-uploader -vvv -k /etc/pki/ovirt-engine/keys/engine_id_rsa --ssh-user=root --iso-domain=iso upload cd62.iso --user=admin@internal
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
DEBUG: id=fe24845d-2fe9-46e1-b46e-0a34fdd3de7d address=10-37-138-193.example.com path=/iso
Uploading, please wait...
INFO: Start uploading cd62.iso
DEBUG: file (cd62.iso)
DEBUG: /usr/bin/ssh -p 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa root.com "/usr/bin/test -e /iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/cd62.iso"
DEBUG: /usr/bin/ssh -p 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa root.com "/usr/bin/test -e /iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/cd62.iso"
DEBUG: _cmds(['/usr/bin/ssh', '-p', '22', '-i', '/etc/pki/ovirt-engine/keys/engine_id_rsa', 'root.com', '/usr/bin/test -e /iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/cd62.iso'])
DEBUG: returncode(1)
DEBUG: STDOUT()
DEBUG: STDERR()
DEBUG: exists returning false
DEBUG: Mount point size test command is (/usr/bin/ssh -p 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa root.com "/usr/bin/python -c 'import os; dir_stat = os.statvfs(\"/iso\"); print (dir_stat.f_bavail * dir_stat.f_frsize)'")
DEBUG: /usr/bin/ssh -p 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa root.com "/usr/bin/python -c 'import os; dir_stat = os.statvfs(\"/iso\"); print (dir_stat.f_bavail * dir_stat.f_frsize)'"
DEBUG: _cmds(['/usr/bin/ssh', '-p', '22', '-i', '/etc/pki/ovirt-engine/keys/engine_id_rsa', 'root.com', '/usr/bin/python -c \'import os; dir_stat = os.statvfs("/iso"); print (dir_stat.f_bavail * dir_stat.f_frsize)\''])
DEBUG: returncode(0)
DEBUG: STDOUT(14287589376)
DEBUG: STDERR()
DEBUG: Size of cd62.iso: 9953280 bytes 9720.0 1K-blocks 9.0 MB
DEBUG: Available space in /iso: 14287589376 bytes 13952724.0 1K-blocks 13625.7 MB
DEBUG: SCP command is (/usr/bin/scp -P 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa cd62.iso root.com:/iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/.cd62.iso)
DEBUG: /usr/bin/scp -P 22 -i /etc/pki/ovirt-engine/keys/engine_id_rsa cd62.iso root.com:/iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/.cd62.iso
DEBUG: _cmds(['/usr/bin/scp', '-P', '22', '-i', '/etc/pki/ovirt-engine/keys/engine_id_rsa', 'cd62.iso', 'root.com:/iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/.cd62.iso'])
DEBUG: returncode(1)
DEBUG: STDOUT()
DEBUG: STDERR(scp: /iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/.cd62.iso: No such file or directory)
ERROR: Unable to copy cd62.iso to ISO storage domain on iso.
ERROR: Error message is "scp: /iso/fe24845d-2fe9-46e1-b46e-0a34fdd3de7d/images/11111111-1111-1111-1111-111111111111/.cd62.iso: No such file or directory"

Without ssh it fails as well:

# ovirt-iso-uploader -vvv --iso-domain=iso upload cd62.iso --user=admin@internal
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
DEBUG: id=fe24845d-2fe9-46e1-b46e-0a34fdd3de7d address=10-37-138-193.example.com path=/iso
Uploading, please wait...
INFO: Start uploading cd62.iso
ERROR: glfs_init failed: Success
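The "Mount point size test" in the log above is a statvfs one-liner run over SSH; run locally, it is equivalent to the sketch below. Note that for a gluster-backed domain the tool runs it against "/iso", which on the remote host is the volume name rather than the real mount point, so the result is computed for the wrong directory.

```python
import os

def available_bytes(path):
    """Available space in bytes at the given path, computed the same
    way as the uploader's remote size test: unprivileged free blocks
    (f_bavail) times the fragment size (f_frsize)."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# Example: free space on the root filesystem.
print(available_bytes("/"))
```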
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
Not the brick path, but the rhev mount path of the ISO domain:

# mount -t fuse.glusterfs | grep '/iso on /rhev'
10-37-138-193.example.com:/iso on /rhev/data-center/mnt/glusterSD/10-37-138-193.example.com:_iso type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
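One way the correct rhev-side destination could be located is by parsing the `mount -t fuse.glusterfs` output shown above. The helper below is a hypothetical sketch (the function name and approach are not from ovirt-iso-uploader):

```python
import re

def find_gluster_mount(mount_output, volume):
    """Given the output of `mount -t fuse.glusterfs`, return the local
    mount point of the named gluster volume, or None if it is not
    mounted.  Mount lines look like:
    <host>:/<vol> on <mountpoint> type fuse.glusterfs (<options>)
    """
    pattern = re.compile(
        r"^\S+:/%s on (\S+) type fuse\.glusterfs" % re.escape(volume),
        re.MULTILINE)
    match = pattern.search(mount_output)
    return match.group(1) if match else None
```

For the mount line above, find_gluster_mount(output, "iso") would return "/rhev/data-center/mnt/glusterSD/10-37-138-193.example.com:_iso".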
Can't reproduce it:

# ovirt-iso-uploader -vvv --iso-domain=ISO upload systemrescuecd-x86-5.1.1.iso --user=admin@internal
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
DEBUG: id=1787917a-f28f-45e0-b580-9d5fb4314824 address=hc-lion.eng.lab.tlv.redhat.com path=/iso
Uploading, please wait...
INFO: Start uploading systemrescuecd-x86-5.1.1.iso
(In reply to Denis Chaplygin from comment #19)
> Can't reproduce it:
>
> # ovirt-iso-uploader -vvv --iso-domain=ISO upload
> systemrescuecd-x86-5.1.1.iso --user=admin@internal
> Please provide the REST API password for the admin@internal oVirt Engine
> user (CTRL+D to abort):
> DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
> DEBUG: id=1787917a-f28f-45e0-b580-9d5fb4314824
> address=hc-lion.eng.lab.tlv.redhat.com path=/iso
> Uploading, please wait...
> INFO: Start uploading systemrescuecd-x86-5.1.1.iso

Still fails for me (replica 3 with 1 arbiter):

# ovirt-iso-uploader -vvv --iso-domain=testiso upload cd62.iso --user=admin@internal
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
DEBUG: id=3623a079-e7d5-4820-ad25-52e1adac7e8c address=brq-gluster01.example.com path=/testiso
Uploading, please wait...
INFO: Start uploading cd62.iso
ERROR: glfs_init failed: Success

# rpm -qa glusterfs ovirt-iso-uploader
ovirt-iso-uploader-4.1.0-0.0.master.20170801125233.git9a2e41d.el7.centos.noarch
glusterfs-3.12.1-2.el7.x86_64
OK with ovirt-iso-uploader-4.1.0-0.0.master.20171106130347.git64c591e.el7.centos.noarch:

# ovirt-iso-uploader -vvv --iso-domain=testiso --user=admin@internal upload test.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(ovirt.org) API Version(4.2.0)
DEBUG: id=3623a079-e7d5-4820-ad25-52e1adac7e8c address=brq-gluster01.example.com path=/testiso
Uploading, please wait...
INFO: Start uploading test.iso

# find /data/testiso/brick1/brick/ -name 'test.iso' -ls
25245838 9708 -rw-r--r-- 2 root root 9938944 Nov 15 14:51 /data/testiso/brick1/brick/3623a079-e7d5-4820-ad25-52e1adac7e8c/images/11111111-1111-1111-1111-111111111111/test.iso
This bugzilla is included in oVirt 4.2.0 release, published on Dec 20th 2017. Since the problem described in this bug report should be resolved in oVirt 4.2.0 release, published on Dec 20th 2017, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.