Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1898207

Summary: Cannot copy or move disks
Product: [oVirt] ovirt-engine
Component: BLL.Gluster
Version: 4.4.3.12
Status: CLOSED INSUFFICIENT_DATA
Severity: unspecified
Priority: unspecified
Reporter: Ferradeira <suporte>
Assignee: Gobinda Das <godas>
QA Contact: SATHEESARAN <sasundar>
CC: bugs, godas, vharihar
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Linux
oVirt Team: Gluster
Type: Bug
Regression: ---
Last Closed: 2021-04-19 07:16:18 UTC
Flags: suporte: needinfo-, suporte: ci_coverage_complete?, suporte: requirements_defined?, suporte: testing_plan_complete?, suporte: planning_ack?, suporte: devel_ack?, suporte: testing_ack?

Description Ferradeira 2020-11-16 16:32:31 UTC
Description of problem:
 I cannot copy or move disks

Version-Release number of selected component (if applicable):
glusterfs 7.8

How reproducible:
Moving or copying a disk in a gluster storage domain results in an error.

Steps to Reproduce:
1. Storage - Domains, click on the gluster domain - Disks, select the disk and try to move it to another gluster storage domain
2. Storage - Domains, click on the gluster domain - Disks, select the disk and try to copy it (the VM is down)

Actual results:
Here is what I found in vdsm.log:


2020-11-14 14:08:16,917+0000 INFO  (tasks/5) [storage.SANLock] Releasing Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:530)
2020-11-14 14:08:17,015+0000 INFO  (tasks/5) [storage.SANLock] Successfully released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:540)
2020-11-14 14:08:17,016+0000 ERROR (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", line 110, in _run
    self._operation.run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, in run
    for data in self._operation.watch():
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 106, in watch
    self._finalize(b"", err)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 179, in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while reading sector 260177858: No such file or directory\n')
2020-11-14 14:08:17,017+0000 INFO  (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds (jobs:251)
2020-11-14 14:08:17,017+0000 INFO  (tasks/5) [storage.ThreadPool.WorkerThread] FINISH task 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
2020-11-14 14:08:17,316+0000 INFO  (jsonrpc/2) [api.host] START getJobs(job_type='storage', job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) from=::ffff:192.168.5.165,36616, flow_id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)
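The key failure is the qemu-img read error, not the copy job itself. For anyone triaging: below is a minimal diagnostic sketch (not vdsm code, just a suggested probe) that reads the exact region qemu-img complained about, directly from the gluster mount. qemu-img counts in 512-byte sectors, so sector 260177858 sits at byte offset 260177858 * 512 (roughly 124 GiB into the image). If the gluster volume uses sharding, an ENOENT at that offset could mean the shard file backing that region is missing, but that is only a hypothesis until the gluster logs confirm it.

#!/usr/bin/env python3
# Diagnostic sketch: probe the byte range behind the sector that
# qemu-img failed on. PATH is the source image path taken from the
# log above; the 512-byte sector size matches qemu-img's reporting.
import os

PATH = ("/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/"
        "83f8bbfd-cfa3-46d9-a823-c36054826d13/images/"
        "789f6e50-b954-4dda-a6d5-077fdfb357d2/"
        "d95a3e83-74d2-40a6-9f8f-e6ae68794051")
OFFSET = 260177858 * 512  # failing sector -> byte offset (~124 GiB)

fd = os.open(PATH, os.O_RDONLY)
try:
    os.lseek(fd, OFFSET, os.SEEK_SET)
    try:
        data = os.read(fd, 4096)
        print("read %d bytes at offset %d: OK" % (len(data), OFFSET))
    except OSError as e:
        # An ENOENT here reproduces the qemu-img failure outside of
        # vdsm, pointing at the mount/brick level (e.g. a missing
        # shard) rather than at the engine's copy flow.
        print("read at offset %d failed: %s" % (OFFSET, e))
finally:
    os.close(fd)

If this plain read fails the same way, the problem is reproducible with ordinary file I/O on the mount and is independent of the engine's copy/move flow.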


Expected results:
The disk is copied or moved to the target storage domain without errors.

Additional info:

Comment 2 RHEL Program Management 2020-11-17 11:09:33 UTC
The documentation text flag should only be set after the 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 3 Gobinda Das 2021-01-19 13:10:12 UTC
Setting needinfo for Comment#1

Comment 4 Gobinda Das 2021-04-19 07:16:18 UTC
Closing this for now as we did not get the requested logs. Will reopen if we receive them later.

Comment 5 Red Hat Bugzilla 2023-09-15 00:51:16 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.