Bug 2001136 - when an image transfer fails due to a full filesystem, the backup is not finalized
Summary: when an image transfer fails due to a full filesystem, the backup is not finalized
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.4.7.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Eyal Shenitzky
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-09-03 21:57 UTC by Gianluca Cecchi
Modified: 2021-09-09 12:34 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-09 12:34:18 UTC
oVirt Team: Storage
Embargoed:




Links:
Red Hat Issue Tracker RHV-43372 - last updated 2021-09-03 21:58:35 UTC

Description Gianluca Cecchi 2021-09-03 21:57:09 UTC
Description of problem:

Using /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py without the "--backup-dir" argument causes the disk images to be written into the current directory, which fills up the filesystem.
The backup fails and the VM remains in a backup operation; running the command again gives:

ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot backup VM. The VM is during a backup operation.]". HTTP response code is 409.

Version-Release number of selected component (if applicable):

I run backup_vm.py from the engine host itself. The engine has:
ovirt-engine-4.4.7.7-1.el8.noarch
ovirt-engine-setup-plugin-imageio-4.4.7.7-1.el8.noarch
ovirt-imageio-common-2.2.0-1.el8.x86_64
ovirt-imageio-client-2.2.0-1.el8.x86_64
ovirt-imageio-daemon-2.2.0-1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.13-1.el8.x86_64

How reproducible:


Steps to Reproduce:
1. Run the backup without the "--backup-dir" argument, so that the virtual disk files are downloaded into the current directory

2. Due to the size of the VM disks, this fills up the filesystem

3. The backup fails with the message:
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

Actual results:

If I then try to run the backup Python script again for the same VM, I get:

ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot backup VM. The VM is during a backup operation.]". HTTP response code is 409.

Expected results:

The first (failed) command should still finalize the backup, so that I can fix my arguments and start the backup again.
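
For reference, the expected flow could be approximated on the client side with a try/finally around the download, using the same ovirtsdk4 service calls that appear in the tracebacks below. This is only a hedged sketch, not the actual backup_vm.py code: the connection details are placeholders and download_backup() stands in for the script's own download logic.

# Hedged sketch: always finalize the backup, even when the disk download
# fails (e.g. with ENOSPC), so the VM does not stay in a backup operation.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://ovmgr1.example.org/ovirt-engine/api",  # assumption: engine API URL
    username="admin@internal",                          # assumption: credentials
    password="...",
    ca_file="ca.pem",
)

vm_uuid = "dc386237-1e98-40c8-9d3d-45658163d1e2"
backup_uuid = "68f83141-9d03-4cb0-84d4-e71fdd8753bb"

backup_service = (
    connection.system_service()
    .vms_service()
    .vm_service(vm_uuid)
    .backups_service()
    .backup_service(backup_uuid)
)

try:
    download_backup(connection, backup_uuid)  # hypothetical: the disk download step
finally:
    # Request finalization even if the download raised.
    backup_service.finalize()

connection.close()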

Additional info:

Output of the first command, for a VM with two disks of 20 GiB each.
My current directory is my home directory:

$ df -h .
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   43G  5.1G   36G  13% /

I put the backup-dir parameter in ovirt.conf, but it is not currently one of the options parsed by backup_vm.py.
So I think it is ignored, and the backup writes the downloaded disk images into my current directory.
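
Since the root cause is an unexpectedly full filesystem, a pre-flight free-space check on the download directory would catch this before any transfer starts. backup_vm.py does not do this today; the following is just a minimal standard-library sketch, with the 40 GiB figure taken from this VM's two 20 GiB disks.

# Minimal pre-flight check (not part of backup_vm.py): refuse to start the
# download if the target directory does not have enough free space.
import shutil

def check_free_space(backup_dir, required_bytes):
    """Raise if backup_dir has less free space than required_bytes."""
    free = shutil.disk_usage(backup_dir).free
    if free < required_bytes:
        raise RuntimeError(
            "Not enough space in %s: %.1f GiB free, %.1f GiB required"
            % (backup_dir, free / 1024**3, required_bytes / 1024**3)
        )

# Example: this VM has two 20 GiB disks, so roughly 40 GiB worst case.
check_free_space(".", 40 * 1024**3)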

[g.cecchi@ovmgr1 ~]$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py -c ovmgr1 full dc386237-1e98-40c8-9d3d-45658163d1e2 
[   0.0 ] Starting full backup for VM 'dc386237-1e98-40c8-9d3d-45658163d1e2'
[   1.4 ] Waiting until backup '68f83141-9d03-4cb0-84d4-e71fdd8753bb' is ready
[  20.6 ] Created checkpoint 'd31e35b6-bd16-46d2-a053-eabb26d283f5' (to use in --from-checkpoint-uuid for the next incremental backup)
[  20.7 ] Creating image transfer for disk '96b0e701-7595-4f04-8569-fb1c72e6f8e0'
[  21.9 ] Image transfer '518bfafb-8b54-4bd7-983d-5ddad39b950a' is ready
[ 100.00% ] 20.00 GiB, 133.73 seconds, 153.14 MiB/s                            
[ 155.7 ] Finalizing image transfer
[ 156.7 ] Creating image transfer for disk '33b0f6fb-a855-465d-a628-5fce9b64496a'
[ 157.8 ] Image transfer 'ccc386d3-9f9d-4727-832a-56d355d60a95' is ready
--- Logging error ---, 105.02 seconds, 147.48 MiB/s                            
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 251, in _run
    log.exception("Worker %s failed", self._name)
Message: 'Worker %s failed'
Arguments: ('download/2',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 245, in _run
    break
  File "/usr/lib64/python3.6/contextlib.py", line 185, in __exit__
    self.thing.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 301, in close
    self._dst.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 149, in close
    log.info("Close backend address=%r", self._client.address)
Message: 'Close backend address=%r'
Arguments: ('/tmp/imageio-qm5mrq25/sock',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 245, in _run
    break
  File "/usr/lib64/python3.6/contextlib.py", line 185, in __exit__
    self.thing.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 304, in close
    self._src.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 339, in close
    self.url.netloc, self.url.path)
Message: 'Close backend netloc=%r path=%r'
Arguments: ('ov200.datacenter.polimi.it:54322', '/images/a354de86-f148-4cbf-b10f-313692e3273f')
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 251, in _run
    log.exception("Worker %s failed", self._name)
Message: 'Worker %s failed'
Arguments: ('download/3',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 245, in _run
    break
  File "/usr/lib64/python3.6/contextlib.py", line 185, in __exit__
    self.thing.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 301, in close
    self._dst.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 149, in close
    log.info("Close backend address=%r", self._client.address)
Message: 'Close backend address=%r'
Arguments: ('/tmp/imageio-qm5mrq25/sock',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 245, in _run
    break
  File "/usr/lib64/python3.6/contextlib.py", line 185, in __exit__
    self.thing.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 304, in close
    self._src.close()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 339, in close
    self.url.netloc, self.url.path)
Message: 'Close backend netloc=%r path=%r'
Arguments: ('ov200.datacenter.polimi.it:54322', '/images/a354de86-f148-4cbf-b10f-313692e3273f')
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
  File "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 251, in _run
    log.exception("Worker %s failed", self._name)
Message: 'Worker %s failed'
Arguments: ('download/0',)
[  75.62% ] 15.12 GiB, 106.18 seconds, 145.87 MiB/s                            
[ 264.0 ] Finalizing image transfer
[ 273.1 ] Finalizing backup
[ 273.2 ] Waiting until backup is being finalized
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 493, in <module>
    main()
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 176, in main
    args.command(args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 192, in cmd_full
    download_backup(connection, backup.id, args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 384, in download_backup
    connection, backup_uuid, disk, disk_path, args, incremental=download_incremental)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 428, in download_disk
    **extra_args)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/client/_api.py", line 186, in download
    name="download")
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 69, in copy
    log.debug("Executor failed")
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 193, in __exit__
    self.stop()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 170, in stop
    raise self._errors[0]
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 242, in _run
    handler.copy(req)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py", line 286, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py", line 216, in write_to
    writer.write(view[:n])
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 118, in write
    self._client.write(self._position, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 445, in write
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 980, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1031, in _recv_reply_chunk
    self._handle_error_chunk(length, flags)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1144, in _handle_error_chunk
    raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28] No space left on device
[g.cecchi@ovmgr1 ~]$

Now inside my home dir:

[g.cecchi@ovmgr1 ~]$ ll
total 36983004
-rw-r--r--. 1 g.cecchi g.cecchi 16392286208 Sep  3 15:35 33b0f6fb-a855-465d-a628-5fce9b64496a.202109031531.full.qcow2
-rw-r--r--. 1 g.cecchi g.cecchi 21478375424 Sep  3 15:33 96b0e701-7595-4f04-8569-fb1c72e6f8e0.202109031531.full.qcow2

I remove the files and try again, and I get:

[g.cecchi@ovmgr1 ~]$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py -c ovmgr1 full dc386237-1e98-40c8-9d3d-45658163d1e2 
[   0.0 ] Starting full backup for VM 'dc386237-1e98-40c8-9d3d-45658163d1e2'
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 493, in <module>
    main()
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 176, in main
    args.command(args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 190, in cmd_full
    backup = start_backup(connection, args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 319, in start_backup
    from_checkpoint_id=args.from_checkpoint_uuid
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 34139, in add
    return self._internal_add(backup, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot backup VM. The VM is during a backup operation.]". HTTP response code is 409.
[g.cecchi@ovmgr1 ~]$


Via the REST API I now have this:

https://ovmgr1.mydomain/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2/backups

<backups>
<backup href="/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2/backups/68f83141-9d03-4cb0-84d4-e71fdd8753bb" id="68f83141-9d03-4cb0-84d4-e71fdd8753bb">
<actions>
<link href="/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2/backups/68f83141-9d03-4cb0-84d4-e71fdd8753bb/finalize" rel="finalize"/>
</actions>
<link href="/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2/backups/68f83141-9d03-4cb0-84d4-e71fdd8753bb/disks" rel="disks"/>
<creation_date>2021-09-03T15:31:11.447+02:00</creation_date>
<phase>finalizing</phase>
<to_checkpoint_id>d31e35b6-bd16-46d2-a053-eabb26d283f5</to_checkpoint_id>
<host href="/ovirt-engine/api/hosts/cc241ec7-64fc-4c93-8cec-9e0e7005a49d" id="cc241ec7-64fc-4c93-8cec-9e0e7005a49d"/>
<vm href="/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2" id="dc386237-1e98-40c8-9d3d-45658163d1e2"/>
</backup>
</backups>

https://ovmgr1.mydomain/ovirt-engine/api/vms/dc386237-1e98-40c8-9d3d-45658163d1e2/checkpoints

returns an empty list
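
The same information can also be read through the Python SDK instead of the raw REST API. A hedged sketch, using the UUIDs from this report (connection details are placeholder assumptions; the service calls are the ones backup_vm.py itself uses):

# List the VM's backups and their phase; for the stuck backup in this report
# the phase stays "finalizing".
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://ovmgr1.example.org/ovirt-engine/api",  # assumption: engine API URL
    username="admin@internal",                          # assumption: credentials
    password="...",
    ca_file="ca.pem",
)

vm_uuid = "dc386237-1e98-40c8-9d3d-45658163d1e2"
backups_service = (
    connection.system_service().vms_service().vm_service(vm_uuid).backups_service()
)

for backup in backups_service.list():
    print(backup.id, backup.phase, backup.to_checkpoint_id)

connection.close()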

Comment 1 Gianluca Cecchi 2021-09-03 22:10:32 UTC
If I try to stop the backup I get:

[g.cecchi@ovmgr1 ~]$ /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py -c ovmgr1 stop dc386237-1e98-40c8-9d3d-45658163d1e2 68f83141-9d03-4cb0-84d4-e71fdd8753bb
[   0.0 ] Finalizing backup '68f83141-9d03-4cb0-84d4-e71fdd8753bb'
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 493, in <module>
    main()
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 176, in main
    args.command(args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 262, in cmd_stop
    stop_backup(connection, args.backup_uuid, args)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py", line 345, in stop_backup
    backup_service.finalize()
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 33869, in finalize
    return self._internal_action(action, 'finalize', None, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot stop VM backup. The VM backup is not in READY phase, backup phase is FINALIZING. Please try again when the backup is in READY phase.]". HTTP response code is 409.
[g.cecchi@ovmgr1 ~]$

Comment 2 RHEL Program Management 2021-09-06 07:14:57 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 4 Nir Soffer 2021-09-09 12:31:04 UTC
Looking in engine log:

$ grep 68f83141-9d03-4cb0-84d4-e71fdd8753bb engine.log
...

1. Backup started

2021-09-03 15:31:11,551+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-50) [b302cff2-fb05-4f10-9d02-aa03b10b10e1] EVENT_ID:
VM_BACKUP_STARTED(10,790), Backup 68f83141-9d03-4cb0-84d4-e71fdd8753bb
for VM c8server started (User: tekka@mydomain@mydomain).

2. Creating scratch disk for disk c8_data_c8server1

2021-09-03 15:31:12,550+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] START, CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='ef17cad6-7724-4cd8-96e3-9af6e529db51',
ignoreFailoverLimit='false',
storageDomainId='1de66cb8-c9f6-49cd-8a35-d5da8adad570',
imageGroupId='a6ce101a-f7ce-4944-93a5-e71f32dd6c12',
imageSizeInBytes='21474836480', volumeFormat='COW',
newImageId='33aa1bac-4152-492d-8a4a-b6d6c0337fec', imageType='Sparse',
newImageDescription='{"DiskAlias":"VM c8server backup
68f83141-9d03-4cb0-84d4-e71fdd8753bb scratch disk for
c8_data_c8server1","DiskDescription":"Backup
68f83141-9d03-4cb0-84d4-e71fdd8753bb scratch disk"}',
imageInitialSizeInBytes='1073741824',
imageId='00000000-0000-0000-0000-000000000000',
sourceImageGroupId='00000000-0000-0000-0000-000000000000',
shouldAddBitmaps='false'}), log id: 164ff0c7

3. Creating scratch disk for disk c8_bootdisk_c8server1

2021-09-03 15:31:12,880+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-39)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] START, CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='ef17cad6-7724-4cd8-96e3-9af6e529db51',
ignoreFailoverLimit='false',
storageDomainId='1de66cb8-c9f6-49cd-8a35-d5da8adad570',
imageGroupId='c9521211-8e24-46ae-aa2e-6f76503527dc',
imageSizeInBytes='21474836480', volumeFormat='COW',
newImageId='48244767-c8dc-4821-be21-935207068e69', imageType='Sparse',
newImageDescription='{"DiskAlias":"VM c8server backup
68f83141-9d03-4cb0-84d4-e71fdd8753bb scratch disk for
c8_bootdisk_c8server1","DiskDescription":"Backup
68f83141-9d03-4cb0-84d4-e71fdd8753bb scratch disk"}',
imageInitialSizeInBytes='1073741824',
imageId='00000000-0000-0000-0000-000000000000',
sourceImageGroupId='00000000-0000-0000-0000-000000000000',
shouldAddBitmaps='false'}), log id: 367fe98d

We can grep for the scratch disk UUIDs:
- a6ce101a-f7ce-4944-93a5-e71f32dd6c12
- c9521211-8e24-46ae-aa2e-6f76503527dc

But let's first understand what happens to this backup...

4. Backup was started

2021-09-03 15:31:29,883+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.StartVmBackupVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] START,
StartVmBackupVDSCommand(HostName = ov200,
VmBackupVDSParameters:{hostId='cc241ec7-64fc-4c93-8cec-9e0e7005a49d',
backupId='68f83141-9d03-4cb0-84d4-e71fdd8753bb',
requireConsistency='false'}), log id: 154dbdc5
{96b0e701-7595-4f04-8569-fb1c72e6f8e0=nbd:unix:/run/vdsm/backup/68f83141-9d03-4cb0-84d4-e71fdd8753bb:exportname=sdb,
33b0f6fb-a855-465d-a628-5fce9b64496a=nbd:unix:/run/vdsm/backup/68f83141-9d03-4cb0-84d4-e71fdd8753bb:exportname=sda}<domaincheckpoint>
  <description>checkpoint for backup
&apos;68f83141-9d03-4cb0-84d4-e71fdd8753bb&apos;</description>

The next step is creating an image transfer for downloading the disks.
Based on your mail:

[ 157.8 ] Image transfer 'ccc386d3-9f9d-4727-832a-56d355d60a95' is ready

We can follow the image transfer UUID ccc386d3-9f9d-4727-832a-56d355d60a95:

5. Creating image transfer

2021-09-03 15:33:46,892+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-48) [a79ec359-6da7-4b21-a018-1a9360a2f7d8] Creating
ImageTransfer entity for command
'ccc386d3-9f9d-4727-832a-56d355d60a95', proxyEnabled: true

6. Image transfer is ready

2021-09-03 15:33:46,922+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(default task-48) [a79ec359-6da7-4b21-a018-1a9360a2f7d8] Updating
image transfer ccc386d3-9f9d-4727-832a-56d355d60a95 (image
33b0f6fb-a855-465d-a628-5fce9b64496a) phase to Transferring

The next step is finalizing this transfer, after data was downloaded, or download failed...

7. Image transfer finalized

2021-09-03 15:35:34,141+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(default task-48) [cb1714ef-e36c-4984-aee8-3ca2185be724] Updating
image transfer ccc386d3-9f9d-4727-832a-56d355d60a95 (image
33b0f6fb-a855-465d-a628-5fce9b64496a) phase to Finalizing Success

8. Image transfer marked as finished...

2021-09-03 15:35:42,337+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-87)
[a79ec359-6da7-4b21-a018-1a9360a2f7d8] Updating image transfer
ccc386d3-9f9d-4727-832a-56d355d60a95 (image
33b0f6fb-a855-465d-a628-5fce9b64496a) phase to Finished Success

9. Backup is stopped

99-4fd7-8ab3-e2a0f4b13f06] Running command: StopVmBackupCommand
internal: false. Entities affected :  ID:
dc386237-1e98-40c8-9d3d-45658163d1e2 Type: VMAction group BACKUP_DISK
with role type ADMIN,  ID: 96b0e701-7595-4f04-8569-fb1c72e6f8e0 Type:
DiskAction group BACKUP_DISK with role type ADMIN,  ID:
33b0f6fb-a855-465d-a628-5fce9b64496a Type: DiskAction group
BACKUP_DISK with role type ADMIN

10. Stopping backup on the host

2021-09-03 15:35:43,364+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.StopVmBackupVDSCommand]
(default task-48) [893250d4-f599-4fd7-8ab3-e2a0f4b13f06] FINISH,
StopVmBackupVDSCommand, return: , log id: 228ad88b

2021-09-03 15:35:43,374+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-48) [893250d4-f599-4fd7-8ab3-e2a0f4b13f06] EVENT_ID:
VM_BACKUP_FINALIZED(10,794), Backup
68f83141-9d03-4cb0-84d4-e71fdd8753bb for VM c8server finalized (User:
tekka@mydomain@mydomain).

11. Error in the backup command - not sure why...

2021-09-03 15:35:44,390+02 INFO
[org.ovirt.engine.core.utils.transaction.TransactionSupport]
(EE-ManagedScheduledExecutorServ
ice-engineScheduledThreadPool-Thread-7)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] transaction rolled back
2021-09-03 15:35:44,390+02 ERROR
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedScheduledExecutor
Service-engineScheduledThreadPool-Thread-7)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] Command 'StartVmBackup' id:
'68f83141-9d03-
4cb0-84d4-e71fdd8753bb' with children
[c98edaa9-e3e9-48bc-a775-39d09f34f17c,
0767fbc6-d966-4d0d-bd96-33da00dd7797] failed when
 attempting to perform the next operation, marking as 'ACTIVE'
Template.java:1050)
        ... 30 more

12. Errors writing to database - no space left

2021-09-03 15:35:45,422+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] EVENT_ID:
VM_BACKUP_FAILED(10,792), Backup 68f83141-9d03-4cb0-84d4-e71fdd8753bb
for VM c8server failed (User: tekka@mydomain@mydomain).
2021-09-03 15:35:45,422+02 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] Failed invoking callback end
method 'onFailed' for command '68f83141-9d03-4cb0-84d4-e71fdd8753bb'
with exception 'CallableStatementCallback; SQL [{call
updatestatusofimagesbyimagegroupid(?, ?)}ERROR: could not extend file
"base/16385/1398189_fsm": No space left on device
  Hint: Check free disk space.
  Where: SQL statement "UPDATE images
    SET imageStatus = v_status
    WHERE image_group_id = v_image_group_id"
PL/pgSQL function updatestatusofimagesbyimagegroupid(uuid,integer)
line 3 at SQL statement; nested exception is
org.postgresql.util.PSQLException: ERROR: could not extend file
"base/16385/1398189_fsm": No space left on device
  Hint: Check free disk space.
  Where: SQL statement "UPDATE images
    SET imageStatus = v_status
    WHERE image_group_id = v_image_group_id"
PL/pgSQL function updatestatusofimagesbyimagegroupid(uuid,integer)
line 3 at SQL statement', the callback is marked for end method
retries

This seems to be the root cause of the engine failure - the engine cannot write
to the database, so it cannot complete handling of the backup command.

Back to the scratch disk, if we grep the UUIDs we see:

$ grep a6ce101a-f7ce-4944-93a5-e71f32dd6c12 engine.log
...

2021-09-03 15:36:19,625+02 INFO
[org.ovirt.engine.core.bll.storage.backup.RemoveScratchDisksCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] Teardown
'a6ce101a-f7ce-4944-93a5-e71f32dd6c12/33aa1bac-4152-492d-8a4a-b6d6c0337fec'
on the VM host 'cc241ec7-64fc-4c93-8cec-9e0e7005a49d'

2021-09-03 15:36:20,423+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[4dd3b3c7] Lock Acquired to object
'EngineLock:{exclusiveLocks='[a6ce101a-f7ce-4944-93a5-e71f32dd6c12=DISK]',
sharedLocks=''}'

2021-09-03 15:36:20,429+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[4dd3b3c7] Running command: RemoveDiskCommand internal: true. Entities
affected :  ID: a6ce101a-f7ce-4944-93a5-e71f32dd6c12 Type: DiskAction
group DELETE_DISK with role type USER

2021-09-03 15:36:20,483+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[4dd3b3c7] START, DeleteImageGroupVDSCommand(
DeleteImageGroupVDSCommandParameters:{storagePoolId='ef17cad6-7724-4cd8-96e3-9af6e529db51',
ignoreFailoverLimit='false',
storageDomainId='1de66cb8-c9f6-49cd-8a35-d5da8adad570',
imageGroupId='a6ce101a-f7ce-4944-93a5-e71f32dd6c12',
postZeros='false', discard='false', forceDelete='false'}), log id:
63ed36b8

2021-09-03 15:36:21,041+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[4dd3b3c7] Lock freed to object
'EngineLock:{exclusiveLocks='[a6ce101a-f7ce-4944-93a5-e71f32dd6c12=DISK]',
sharedLocks=''}'

$ grep c9521211-8e24-46ae-aa2e-6f76503527dc engine.log
...
2021-09-03 15:36:20,084+02 INFO
[org.ovirt.engine.core.bll.storage.backup.RemoveScratchDisksCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[b302cff2-fb05-4f10-9d02-aa03b10b10e1] Teardown
'c9521211-8e24-46ae-aa2e-6f76503527dc/48244767-c8dc-4821-be21-935207068e69'
on the VM host 'cc241ec7-64fc-4c93-8cec-9e0e7005a49d'

2021-09-03 15:36:21,043+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[79f71902] Lock Acquired to object
'EngineLock:{exclusiveLocks='[c9521211-8e24-46ae-aa2e-6f76503527dc=DISK]',
sharedLocks=''}'

2021-09-03 15:36:21,048+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[79f71902] Running command: RemoveDiskCommand internal: true. Entities
affected :  ID: c9521211-8e24-46ae-aa2e-6f76503527dc Type: DiskAction
group DELETE_DISK with role type USER

2021-09-03 15:36:21,067+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[79f71902] START, DeleteImageGroupVDSCommand(
DeleteImageGroupVDSCommandParameters:{storagePoolId='ef17cad6-7724-4cd8-96e3-9af6e529db51',
ignoreFailoverLimit='false',
storageDomainId='1de66cb8-c9f6-49cd-8a35-d5da8adad570',
imageGroupId='c9521211-8e24-46ae-aa2e-6f76503527dc',
postZeros='false', discard='false', forceDelete='false'}), log id:
5670f62c

2021-09-03 15:36:21,518+02 INFO
[org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-70)
[79f71902] Lock freed to object
'EngineLock:{exclusiveLocks='[c9521211-8e24-46ae-aa2e-6f76503527dc=DISK]',
sharedLocks=''}'

So both scratch disks were removed as expected, and the only issue is that the
backup is stuck in the finalizing state.

Because the root cause is no space on the database disk, caused by user error
(filling up the engine disk by mistake), I don't think we can do much about this.

Comment 5 Nir Soffer 2021-09-09 12:34:18 UTC
Closing as not a bug, since filling up the engine disk is a user error.
The engine must have a working database to make progress.

