Bug 1439683 - Guest agent fails to start after live/cold merge of a middle snapshot.
Keywords:
Status: CLOSED DUPLICATE of bug 1334726
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.1.8
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Allon Mureinik
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-04-06 11:58 UTC by Carlos Mestre González
Modified: 2017-04-07 16:38 UTC

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-06 13:12:12 UTC
oVirt Team: Storage
Embargoed:


Attachments
engine.log, vdsm.log, and journalctl output from the guest VM (742.01 KB, application/x-gzip)
2017-04-06 11:58 UTC, Carlos Mestre González

Description Carlos Mestre González 2017-04-06 11:58:14 UTC
Created attachment 1269362 [details]
engine.log, vdsm.log, and journalctl output from the guest VM.

Description of problem:
This probably needs to be moved to the RHEL PPC component, but I was wondering if you could take a look in case I'm missing something.

So I have a cloned VM with a boot OS disk and 6 other virtio disks (3 thin, 3 preallocated, 1 GB each), with a filesystem on each. I create a file on each disk and take a snapshot, and repeat that 3 times. Then I remove the middle snapshot, preview the last snapshot, and the VM fails to start.


Version-Release number of selected component (if applicable):
rhevm-4.1.1.8-0.1.el7.noarch
Guest kernel:
Linux localhost.localdomain 3.10.0-514.el7.ppc64le #1 SMP Wed Oct 19 11:27:06 EDT 2016 ppc64le ppc64le ppc64le GNU/Linux


How reproducible:
100%

Steps to Reproduce:
1. Clone a VM from a template.
2. Attach, activate, and create an ext4 filesystem on each of 6 virtio disks (3 thin, 3 preallocated).
3. Mount the filesystems.
4. Create a file on each disk and take a snapshot of all disks (repeat 3 times).
5. Remove the middle snapshot.
6. Preview the last snapshot (see the sketch below).
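
For reference, a minimal sketch of the snapshot portion of these steps using the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, and VM name are placeholders, the sleeps stand in for proper polling of the asynchronous jobs, and the in-guest file creation between snapshots is not shown:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=cloned-vm')[0]  # hypothetical VM name
vm_service = vms_service.vm_service(vm.id)
snapshots_service = vm_service.snapshots_service()

# Take three snapshots (a file is created on each disk in the guest
# between snapshots, which is not shown here).
for i in range(1, 4):
    snapshots_service.add(types.Snapshot(description='snap%d' % i,
                                         persist_memorystate=False))
    time.sleep(60)  # crude wait for the snapshot to complete

# Remove the middle snapshot, which triggers the merge. This assumes
# the list order matches creation order.
snaps = [s for s in snapshots_service.list()
         if s.snapshot_type != types.SnapshotType.ACTIVE]
snapshots_service.snapshot_service(snaps[1].id).remove()
time.sleep(60)

# Preview the last snapshot, then try to start the VM.
vm_service.preview_snapshot(snapshot=types.Snapshot(id=snaps[-1].id),
                            restore_memory=False)
time.sleep(60)
vm_service.start()

connection.close()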

Actual results:
VM fails to start

Expected results:
VM starts, all files are there.

Additional info:

Apr 06 14:18:04 localhost.localdomain kernel: XFS (vdg2): Mounting V5 Filesystem
Apr 06 14:18:04 localhost.localdomain systemd[1]: Received SIGRTMIN+20 from PID 242 (plymouthd).
Apr 06 14:18:04 localhost.localdomain mount[778]: mount: wrong fs type, bad option, bad superblock on /dev/vdg1,
Apr 06 14:18:04 localhost.localdomain mount[778]: missing codepage or helper program, or other error
Apr 06 14:18:04 localhost.localdomain mount[778]: In some cases useful info is found in syslog - try
Apr 06 14:18:04 localhost.localdomain mount[778]: dmesg | tail or so.
Apr 06 14:18:04 localhost.localdomain systemd[1]: mount\x2dpointbaf1293ec1383535c9f7e54b1ee1c2c1138b2b97.mount mount process exited, code=exited status=32
Apr 06 14:18:04 localhost.localdomain systemd[1]: Failed to mount /mount-pointbaf1293ec1383535c9f7e54b1ee1c2c1138b2b97.
[...]

Apr 06 14:20:27 localhost.localdomain kdumpctl[1861]: No memory reserved for crash kernel.
Apr 06 14:20:27 localhost.localdomain kdumpctl[1861]: Starting kdump: [FAILED]
Apr 06 14:20:27 localhost.localdomain systemd[1]: kdump.service: main process exited, code=exited, status=1/FAILURE
Apr 06 14:20:27 localhost.localdomain systemd[1]: Failed to start Crash recovery kernel arming.
Apr 06 14:20:27 localhost.localdomain systemd[1]: Unit kdump.service entered failed state.
Apr 06 14:20:27 localhost.localdomain systemd[1]: kdump.service failed.

Comment 2 Carlos Mestre González 2017-04-06 13:12:12 UTC
This seems to be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1334726

I'm going to test the scenario with labels and will update this bug if it still happens.

*** This bug has been marked as a duplicate of bug 1334726 ***

