Bug 1645229 - Export as ova fails in oVirt-engine
Summary: Export as ova fails in oVirt-engine
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.2.7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.3.2
Assignee: Shmuel Melamud
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On: 1684140
Blocks:
 
Reported: 2018-11-01 17:02 UTC by sangeetha
Modified: 2020-02-25 09:16 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-26 07:20:49 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.3+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4656931 0 None None None 2019-12-13 20:54:36 UTC
oVirt gerrit 98141 0 'None' MERGED core: Export disks under vdsm user when exporting OVA 2021-01-15 15:03:00 UTC

Description sangeetha 2018-11-01 17:02:06 UTC
Description of problem:
1. Create a VM
2. Stop the VM
3. Select the VM and click "Export as OVA" in the web UI
4. Enter the target directory for the host as /tmp

Fails to create the OVA image.

The engine creates a copy of the image in the storage domain. As part of creating the OVA, it fails to open the disk image created in the storage domain due to a permission issue: the image is created by the vdsm user, but the Ansible playbook that opens the image runs as root.

Storage : NFS
Note: Export to Domain works fine
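The failure mode can be illustrated in miniature. The sketch below is not oVirt code; the UIDs/GIDs are assumptions for illustration (vdsm:kvm is commonly 36:36, nfsnobody is commonly 65534). The key point is that with root_squash the NFS server maps root to nfsnobody, so root does not get its usual permission-check bypass on the share:

```python
import stat

# Assumed IDs for illustration only; actual values vary per system.
VDSM_UID = VDSM_GID = 36
NFSNOBODY = 65534

def can_read(mode, file_uid, file_gid, uid, gid):
    """Return True if a process with (uid, gid) may read a file with the
    given mode bits and ownership (the standard Unix access check)."""
    if uid == file_uid:
        return bool(mode & stat.S_IRUSR)
    if gid == file_gid:
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)

# Disk images are created by vdsm as vdsm:kvm with mode 0660 (-rw-rw----):
print(can_read(0o660, VDSM_UID, VDSM_GID, VDSM_UID, VDSM_GID))    # → True
print(can_read(0o660, VDSM_UID, VDSM_GID, NFSNOBODY, NFSNOBODY))  # → False (EACCES)
```

This matches the directory listing below: the image and .lease files are 0660 and unreadable to the squashed root, while the 0644 .meta file is world-readable.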

From Engine log
2018-10-26 12:11:36,657-04 ERROR
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-37) [484d3716] EngineException: ENGINE
(Failed with error ENGINE and code 5001):
org.ovirt.engine.core.common.errors.EngineException: EngineException: ENGINE (Failed with
error ENGINE and code 5001)
at
org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.createOva(ExportOvaCommand.java:301)
[bll.jar:]
at
org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.executeNextOperation(ExportOvaCommand.java:285)
[bll.jar:]
at
org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.performNextOperation(ExportOvaCommand.java:277)
[bll.jar:]
at
org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
[bll.jar:]
at
org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:68)
[bll.jar:]
at
org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:146)
[bll.jar:]
at
org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:107)
[bll.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_181]


From Ansible log:

Error message:
writing disk:
path=/rhev/data-center/mnt/ca-ovmstor101:_export_sanrathi_brml11g04-oVirt-nfs-01/aec73d1f-ea3d-4228-a151-a41b20b6b67b/images/7af3ed03-9be5-4c29-b1f2-2566a00671ba/f4a9a516-12b1-47ff-b2f9-e5d792669b3c
size=196624
Traceback (most recent call last):
File
"/root/.ansible/tmp/ansible-tmp-1540582852.05-258889562906077/pack_ova.py", line
96, in <module>
write_disks(ova_path, disks_info.split('+'))
File
"/root/.ansible/tmp/ansible-tmp-1540582852.05-258889562906077/pack_ova.py", line
79, in write_disks
write_disk(ova_path, disk_path, disk_size)
File
"/root/.ansible/tmp/ansible-tmp-1540582852.05-258889562906077/pack_ova.py", line
59, in write_disk
fd = os.open(disk_path, os.O_RDONLY | os.O_DIRECT)
OSError: [Errno 13] Permission denied:
'/rhev/data-center/mnt/ca-ovmstor101:_export_sanrathi_brml11g04-oVirt-nfs-01/aec73d1f-ea3d-4228-a151-a41b20b6b67b/images/7af3ed03-9be5-4c29-b1f2-2566a00671ba/f4a9a516-12b1-47ff-b2f9-e5d792669b3c'


1. File permission:
[root@ca-ovsx131 images]# ll
drwxr-xr-x. 2 vdsm kvm 5 Oct 26 13:40 7af3ed03-9be5-4c29-b1f2-2566a00671ba

[root@ca-ovsx131 images]# ll f83b2026-099d-4983-9130-fa999bd2a782/
total 1028
-rw-rw----. 1 vdsm kvm 1073741824 Oct 26 13:31 6eb8cd98-ec3f-47a4-908c-e17f45e6cf6a
-rw-rw----. 1 vdsm kvm 1048576 Oct 26 13:31 6eb8cd98-ec3f-47a4-908c-e17f45e6cf6a.lease
-rw-r--r--. 1 vdsm kvm 319 Oct 26 13:31 6eb8cd98-ec3f-47a4-908c-e17f45e6cf6a.meta


2.File Format:
bash-4.2$ qemu-img info
7af3ed03-9be5-4c29-b1f2-2566a00671ba/f4a9a516-12b1-47ff-b2f9-e5d792669b3c
image: 7af3ed03-9be5-4c29-b1f2-2566a00671ba/f4a9a516-12b1-47ff-b2f9-e5d792669b3c
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 259K
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

Comment 1 Michal Skrivanek 2018-11-02 12:54:15 UTC
isn't it supposed to run as vdsm?

Comment 2 sangeetha 2018-11-02 14:10:54 UTC
I am exporting the OVA through the UI.
After checking the code, I added a debugger in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/ovirt_ova_pack.py:


print ("Printing user %s" % os.getlogin())
fd = os.open(disk_path, os.O_RDONLY | os.O_DIRECT)

It is trying to read the disk as the root user and it fails.

I set up the NFS storage permissions as per this document: https://www.ovirt.org/documentation/admin-guide/chap-Storage/

I am new to the oVirt product and am just exploring the OVA option. Please let me know if I missed anything.
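As an aside on the debugging snippet above (an editor's sketch, not part of the playbook): os.getlogin() reports the controlling terminal's login name and can raise OSError when there is no terminal attached, which is common for Ansible tasks. The effective UID is what the kernel actually checks on os.open(), so it is the more reliable thing to print:

```python
import os
import pwd

# os.getlogin() reads the controlling terminal's login name and raises
# OSError when there is none (e.g. inside an Ansible task). Print the
# effective UID instead, since that is what the kernel checks on os.open().
euid = os.geteuid()
print("running as %s (euid=%d)" % (pwd.getpwuid(euid).pw_name, euid))
```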

Comment 3 Michal Skrivanek 2018-11-06 09:15:10 UTC
I think your share is not writable by root (or rather root is squashed to nobody:nobody).
We should actually:
- change the user running the OVA role to "vdsm" instead of root
- create a chroot environment so you cannot overwrite other files by mistake
- probably define a vdc_option with a path on hypervisors and document it, e.g. ExportOvaPath=/mnt
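The first item above, running the export under the vdsm user, is the approach of the merged gerrit change 98141. As a generic illustration (a hedged sketch, not the actual patch), a root process dropping to a service user in Python must change supplementary groups and the gid before the uid, because setuid() gives up the privilege to change them afterwards:

```python
import grp
import os
import pwd

def drop_privileges(username, _os=os):
    """Hypothetical sketch: drop root privileges to a service user such as
    'vdsm' before opening disk images. Order matters: setgroups() and
    setgid() must run while still root, since setuid() discards the right
    to change them. (_os is injectable so the call sequence can be
    exercised without running as root.)"""
    pw = pwd.getpwnam(username)
    groups = [g.gr_gid for g in grp.getgrall() if pw.pw_name in g.gr_mem]
    _os.setgroups(groups)
    _os.setgid(pw.pw_gid)
    _os.setuid(pw.pw_uid)
```

With privileges dropped to vdsm, the subsequent os.open() of the 0660 vdsm:kvm disk image succeeds regardless of root_squash on the share.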

Comment 6 Ryan Barry 2019-01-21 14:53:58 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 7 Sandro Bonazzola 2019-03-13 15:47:19 UTC
This bug seems to be already fixed in the 4.3.2 RC2 release. Can you please check the status of this bug and update the target milestone to 4.3.2 if it is already included?

Comment 9 Nisim Simsolo 2019-03-19 10:13:02 UTC
Verified:
ovirt-engine-4.4.0-0.0.master.20190318180517.git576124b.el7
vdsm-4.40.0-96.gite291014.el7.x86_64
libvirt-client-4.5.0-10.el7_6.6.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
sanlock-3.6.0-1.el7.x86_64

Verification scenario:
1. Export VM as OVA to NFS share with root_squash option, for example:
# cat /etc/exports
/root_squash_NFS 1.1.1.1/255.0.0.0(rw,root_squash)
2. Verify VM exported as OVA successfully.
Import exported OVA, verify import succeeds, run VM and verify VM is running properly.
3. Export VM as OVA to NFS share with no_root_squash option, for example: 
# cat /etc/exports
/no_root_squash_NFS 1.1.1.1/255.0.0.0(rw,no_root_squash)
4. Verify VM exported as OVA successfully.
Import exported OVA, verify import succeeds, run VM and verify VM is running properly.
5. Export VM as OVA to NFS share with no_root_squash option and a folder inside it with nfsnobody:nfsnobody ownership, for example:
# cat /etc/exports
/no_root_squash_NFS 1.1.1.1/255.0.0.0(rw,no_root_squash)
# ls -l /mnt
drwxr-xr-x.  9 nfsnobody nfsnobody 4.0K Aug  6  2017 nfs_folder
6. Verify VM exported as OVA successfully.
Import exported OVA, verify import succeeds, run VM and verify VM is running properly.

Comment 10 Sandro Bonazzola 2019-03-26 07:20:49 UTC
This bugzilla is included in oVirt 4.3.2 release, published on March 19th 2019.

Since the problem described in this bug report should be
resolved in oVirt 4.3.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

