Bug 1146477 - virt-v2v sometimes hangs with printing: Add. Sense: No additional sense information
Summary: virt-v2v sometimes hangs with printing: Add. Sense: No additional sense infor...
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks:
 
Reported: 2014-09-25 10:16 UTC by zhoujunqin
Modified: 2015-09-24 07:32 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-24 07:32:54 UTC
Target Upstream Version:
Embargoed:


Attachments
log file (36.01 KB, text/plain), 2014-09-25 10:16 UTC, zhoujunqin
Detailed log file during conversion (216.05 KB, text/plain), 2014-09-26 09:00 UTC, tingting zheng
libguestfs-test-tool result (43.57 KB, text/plain), 2014-09-26 09:01 UTC, tingting zheng
guestfs log file (20.38 KB, text/plain), 2014-09-28 03:16 UTC, tingting zheng

Description zhoujunqin 2014-09-25 10:16:44 UTC
Created attachment 941037 [details]
log file

Description of problem:
virt-v2v sometimes hangs, continuously printing:
[   44.346818] sd 2:0:1:0: [sdb]  
[   44.347147] Sense Key : No Sense [current]
[   44.347737] sd 2:0:1:0: [sdb]  
[   44.348200] Add. Sense: No additional sense information

Version-Release number of selected component (if applicable):
virt-v2v-1.27.53-1.1.el7.x86_64
libguestfs-1.27.53-1.1.el7.x86_64

How reproducible:
30%

Steps to Reproduce:
1. Copy the xen guest image and xml file from the xen server to the v2v server:

2. Run virt-v2v to convert a xen pv/hvm guest, using either -i disk or -i libvirtxml.
# export LIBGUESTFS_BACKEND=direct
# virt-v2v -i libvirtxml -o local -os  /var/tmp/  rhel6.6-pv-x64-test.xml -on test33 -of raw -oa preallocated
[   0.0] Opening the source -i libvirtxml rhel6.6-pv-x64-test.xml
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay




^C


Actual results:
The virt-v2v command hangs there for a long time, until we exit with CTRL+C.
Rerunning
# virt-v2v -i libvirtxml -o local -os  /var/tmp/  rhel6.6-pv-x64-test.xml -on test33 -of raw -oa preallocated -v -x |& tee 50.log
we can see the following messages printed continuously:
[   44.346818] sd 2:0:1:0: [sdb]  
[   44.347147] Sense Key : No Sense [current]
[   44.347737] sd 2:0:1:0: [sdb]  
[   44.348200] Add. Sense: No additional sense information

Expected results:
The virt-v2v command should complete successfully.

Additional info:
Attached log.

Comment 2 Richard W.M. Jones 2014-09-25 10:29:15 UTC
The root cause is a series of errors on the appliance disk /dev/sdb:

[    0.422501] EXT4-fs (sdb): mounting ext2 file system using the ext4 subsystem
[    0.431290] sd 2:0:1:0: [sdb]  
[    0.431575] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[    0.432044] sd 2:0:1:0: [sdb]  
[    0.432308] Sense Key : Aborted Command [current] 
[    0.432725] sd 2:0:1:0: [sdb]  
[    0.432991] Add. Sense: I/O process terminated
[    0.433378] sd 2:0:1:0: [sdb] CDB: 
[    0.433672] Write(10): 2a 00 00 00 00 00 00 00 08 00
[    0.434230] end_request: I/O error, dev sdb, sector 0
[    0.434650] Buffer I/O error on device sdb, logical block 0
[    0.435113] lost page write due to I/O error on sdb
[    0.435552] EXT4-fs (sdb): mounted filesystem without journal. Opts: 

What's particularly interesting is that /dev/sdb is the appliance
disk (ie. /var/tmp/.guestfs-0/appliance.d/root on the host).

Does 'libguestfs-test-tool' run OK on this machine?

Is /var/tmp on the host anything special?  eg: SSD, remote disk, slow
disk, a disk that has errors ...etc?

Also, what version of qemu/qemu-kvm/qemu-kvm-rhev are you running?

Comment 3 Richard W.M. Jones 2014-09-25 10:30:29 UTC
(In reply to Richard W.M. Jones from comment #2)
> Is /var/tmp on the host anything special?  eg: SSD, remote disk, slow
> disk, a disk that has errors ...etc?

Also: Plenty of free space on /tmp and /var/tmp?
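
For reference, a quick way to check both (just a sketch; the exact package set on your host may differ):

  df -h / /tmp /var/tmp
  rpm -qa 'qemu*'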

Comment 4 tingting zheng 2014-09-26 08:59:07 UTC
(In reply to Richard W.M. Jones from comment #2)
> The root cause are some errors on appliance /dev/sdb:
> 
> [    0.422501] EXT4-fs (sdb): mounting ext2 file system using the ext4
> subsystem
> [    0.431290] sd 2:0:1:0: [sdb]  
> [    0.431575] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [    0.432044] sd 2:0:1:0: [sdb]  
> [    0.432308] Sense Key : Aborted Command [current] 
> [    0.432725] sd 2:0:1:0: [sdb]  
> [    0.432991] Add. Sense: I/O process terminated
> [    0.433378] sd 2:0:1:0: [sdb] CDB: 
> [    0.433672] Write(10): 2a 00 00 00 00 00 00 00 08 00
> [    0.434230] end_request: I/O error, dev sdb, sector 0
> [    0.434650] Buffer I/O error on device sdb, logical block 0
> [    0.435113] lost page write due to I/O error on sdb
> [    0.435552] EXT4-fs (sdb): mounted filesystem without journal. Opts: 
> 
> What's particularly interesting is that /dev/sdb is the appliance
> disk (ie. /var/tmp/.guestfs-0/appliance.d/root on the host).

> Does 'libguestfs-test-tool' run OK on this machine?

This bug cannot be reproduced every time. I hit this error on my test machine; I will attach the related virt-v2v log and the libguestfs-test-tool result. libguestfs-test-tool fails.

> 
> Is /var/tmp on the host anything special?  eg: SSD, remote disk, slow
> disk, a disk that has errors ...etc?

No.

> Also, what version of qemu/qemu-kvm/qemu-kvm-rhev are you running?
# rpm -qa qemu-kvm-rhev
qemu-kvm-rhev-2.1.0-3.rwmj3.el7.x86_64

Also, I think there is enough free space on /.
# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rhel00-root   94G   83G   12G  88% /
devtmpfs                 1.8G     0  1.8G   0% /dev
tmpfs                    1.9G  8.0K  1.9G   1% /dev/shm
tmpfs                    1.9G  9.4M  1.8G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda2                497M  159M  339M  32% /boot

Comment 5 tingting zheng 2014-09-26 09:00:27 UTC
Created attachment 941490 [details]
Detailed log file during conversion

Comment 6 tingting zheng 2014-09-26 09:01:11 UTC
Created attachment 941491 [details]
libguestfs-test-tool result

Comment 7 Richard W.M. Jones 2014-09-26 09:38:48 UTC
(In reply to tingting zheng from comment #6)
> Created attachment 941491 [details]
> libguestfs-test-tool result

This is not really a virt-v2v bug.  This is a very strange
libguestfs bug.  It doesn't happen for me.

Can you grab this file:

  /var/tmp/.guestfs-0/appliance.d/root

and save it somewhere.  That is the faulty appliance.

Then you can do:

  rm -r /var/tmp/.guestfs-0

and re-run libguestfs-test-tool (maybe several times).  If the
bug goes away, then it's something to do with the faulty appliance
captured above.  You can send me that and I'll take a look.
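
Something along these lines should do it (the destination path here is only an example):

  cp /var/tmp/.guestfs-0/appliance.d/root /root/saved-appliance-root
  rm -r /var/tmp/.guestfs-0
  for i in 1 2 3 4 5; do libguestfs-test-tool; done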

If the bug remains, then I don't know ...

Does this bug happen on other machines or just one machine?  If
the bug remains, but only happens on a single machine, then I would
suspect a faulty disk or hardware.

Comment 8 tingting zheng 2014-09-26 10:38:38 UTC
(In reply to Richard W.M. Jones from comment #7)
> (In reply to tingting zheng from comment #6)
> > Created attachment 941491 [details]
> > libguestfs-test-tool result
> 
> This is not really a virt-v2v bug.  This is a very strange
> libguestfs bug.  It doesn't happen for me.
> 
> Can you grab this file:
> 
>   /var/tmp/.guestfs-0/appliance.d/root
> 
> and save it somewhere.  That is the faulty appliance.
> 
> Then you can do:
> 
>   rm -r /var/tmp/.guestfs-0
> 
> and re-run libguestfs-test-tool (maybe several times).  If the
> bug goes away, then it's something to do with the faulty appliance
> captured above.  You can send me that and I'll take a look.
> 
> If the bug remains, then I don't know ...

I did the above steps and libguestfs-test-tool still fails.

> 
> Does this bug happen on other machines or just one machine?  If
> the bug remains, but only happens on a single machine, then I would
> suspect a faulty disk or hardware.

The bug happens on 3 hosts, but cannot be reproduced every time.

Comment 9 Richard W.M. Jones 2014-09-26 12:17:08 UTC
I talked to Paolo about this, and we need to collect some more
information from when it fails.

 - - -

Firstly, if you look at the libguestfs-test-tool output, you will
see a line which looks like this:

  libguestfs: guest random name = guestfs-ahwo0tb0wyq3qce4

This ties to a qemu log file which will be located either in

  /var/log/libvirt/qemu/guestfs-ahwo0tb0wyq3qce4.log

or in

  $HOME/.cache/libvirt/qemu/log/guestfs-ahwo0tb0wyq3qce4.log

We need that log (from a case where it fails of course).
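
If the random name is awkward to track down, listing the newest guestfs-*.log in either location should find it (a rough sketch):

  ls -lt /var/log/libvirt/qemu/guestfs-*.log
  ls -lt ~/.cache/libvirt/qemu/log/guestfs-*.log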

 - - -

Secondly, we need to see the strace of qemu when it fails so we
can tell which system call is failing and why.

The following command will run libguestfs-test-tool under strace (and
all subprocesses, including qemu):

  LIBGUESTFS_BACKEND=direct strace -f -o /tmp/strace libguestfs-test-tool 

and send us the trace file (/tmp/strace).  Note only do this if
the command fails as described in this bug.  "Good" traces aren't
of any use.

Note also this switches the backend to 'direct'.  I couldn't work
out how to collect strace of qemu when it is run by libvirtd.
Hopefully switching backend won't make the bug disappear ...

Comment 10 tingting zheng 2014-09-28 03:14:30 UTC
(In reply to Richard W.M. Jones from comment #9)
> I talked to Paolo about this, and we need to collect some more
> information from when it fails.
> 
>  - - -
> 
> Firstly, if you look at the libguestfs-test-tool output, you will
> see a line which looks like this:
> 
>   libguestfs: guest random name = guestfs-ahwo0tb0wyq3qce4
> 
> This ties to a qemu log file which will be located either in
> 
>   /var/log/libvirt/qemu/guestfs-ahwo0tb0wyq3qce4.log
> 
> or in
> 
>   $HOME/.cache/libvirt/qemu/log/guestfs-ahwo0tb0wyq3qce4.log
> 
> We need that log (from a case where it fails of course).

Found some I/O errors as below; I will attach the detailed log.
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none TMPDIR=/var/tmp /usr/libexec/qemu-kvm -name guestfs-ahwo0tb0wyq3qce4 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 2c0d524a-dc22-4884-9be0-44e41e8a50ba -nographic -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/guestfs-ahwo0tb0wyq3qce4.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -no-acpi -boot strict=on -kernel /var/tmp/.guestfs-0/appliance.d/kernel -initrd /var/tmp/.guestfs-0/appliance.d/initrd -append panic=1 console=ttyS0 udevtimeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/tmp/libguestfsNCMI8a/scratch.1,if=none,id=drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/tmp/libguestfsNCMI8a/overlay2,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -chardev socket,id=charserial0,path=/tmp/libguestfsNCMI8a/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/tmp/libguestfsNCMI8a/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
Domain id=26 is tainted: custom-argv
Domain id=26 is tainted: host-cpu
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)


> Secondly, we need to see the strace of qemu when it fails so we
> can tell which system call is failing and why.
> 
> The following command will run libguestfs-test-tool under strace (and
> all subprocesses, including qemu):
> 
>   LIBGUESTFS_BACKEND=direct strace -f -o /tmp/strace libguestfs-test-tool 
> 
> and send us the trace file (/tmp/strace).  Note only do this if
> the command fails as described in this bug.  "Good" traces aren't
> of any use.
> 
> Note also this switches the backend to 'direct'.  I couldn't work
> out how to collect strace of qemu when it is run by libvirtd.
> Hopefully switching backend won't make the bug disappear ...

I tried on my 2 hosts; unfortunately I cannot reproduce this bug with either backend, 'direct' or not. I will record the trace once I can reproduce this bug.

Comment 11 tingting zheng 2014-09-28 03:16:02 UTC
Created attachment 941933 [details]
guestfs log file

Comment 12 Richard W.M. Jones 2014-09-28 09:01:05 UTC
(In reply to tingting zheng from comment #10)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)
> block I/O error in device 'drive-scsi0-0-1-0': Permission denied (13)

I bet this is SELinux.  Can you see if there are audit messages
coincident with these errors:

  # ausearch -m avc -ts recent

Also make sure 'selinux-policy' package is up to date.
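
For example (a sketch; the exact versions will differ):

  rpm -q selinux-policy selinux-policy-targeted
  yum update selinux-policy selinux-policy-targeted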

If there are audit messages, then a bug should be filed against selinux-policy,
but see also bug 1145081 which might be the same thing (I was not able
to reproduce it).

> I tried on my 2 hosts,unfortunatly that I can not reproduce this bug whether
> by backend is 'direct' or not.I will record the trace once I can reproduce
> this bug.

That would be consistent with it being SELinux, since the 'direct' method
does not use SELinux.

Comment 13 Richard W.M. Jones 2014-09-28 09:39:55 UTC
By the way, although I say that the problem is selinux-policy,
in the discussion of bug 1145081 we thought it might be a libvirt
problem (not labelling the files reliably).  We'll see when
we can see the SELinux AVCs however.

Need to also know:

 - version of libvirt installed
 - version of selinux-policy installed
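
A single query should cover both (just a sketch):

  rpm -q libvirt selinux-policy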

Comment 14 zhoujunqin 2014-09-28 10:33:52 UTC
(In reply to Richard W.M. Jones from comment #13)
> Need to also know:
> 
>  - version of libvirt installed

libvirt-1.2.8-3.el7.x86_64

>  - version of selinux-policy installed

selinux-policy-3.12.1-153.el7.noarch

Since I also cannot reproduce this issue on my host or on tzheng's machine now, I just got the two package versions from /var/log/yum.log according to the system date.
# date 
Sun Sep 28 06:25:47 EDT 2014 (it's Beijing time)

# grep selinux-policy /var/log/yum.log 
Mar 19 22:49:26 Updated: selinux-policy-targeted-3.12.1-140.el7.noarch
Mar 25 23:13:50 Updated: selinux-policy-3.12.1-145.el7.noarch
Mar 25 23:15:13 Updated: selinux-policy-targeted-3.12.1-145.el7.noarch
Apr 08 02:11:11 Updated: selinux-policy-3.12.1-151.el7.noarch
Apr 08 02:11:50 Updated: selinux-policy-targeted-3.12.1-151.el7.noarch
Jul 03 04:46:39 Updated: selinux-policy-3.12.1-153.el7.noarch
Jul 03 04:47:23 Updated: selinux-policy-targeted-3.12.1-153.el7.noarch

# grep libvirt /var/log/yum.log
....
Sep 19 02:09:11 Updated: libvirt-client-1.2.8-3.el7.x86_64
Sep 19 02:09:12 Updated: libvirt-daemon-1.2.8-3.el7.x86_64
Sep 19 02:09:12 Updated: libvirt-daemon-driver-network-1.2.8-3.el7.x86_64
Sep 19 02:09:13 Updated: libvirt-daemon-driver-nwfilter-1.2.8-3.el7.x86_64
Sep 19 02:09:13 Updated: libvirt-daemon-driver-qemu-1.2.8-3.el7.x86_64
Sep 19 02:09:14 Updated: libvirt-daemon-driver-storage-1.2.8-3.el7.x86_64
Sep 19 02:09:14 Updated: libvirt-daemon-driver-secret-1.2.8-3.el7.x86_64
Sep 19 02:09:14 Updated: libvirt-daemon-driver-interface-1.2.8-3.el7.x86_64
Sep 19 02:09:15 Updated: libvirt-daemon-driver-nodedev-1.2.8-3.el7.x86_64
Sep 19 02:09:15 Updated: libvirt-daemon-config-nwfilter-1.2.8-3.el7.x86_64
Sep 19 02:09:15 Updated: libvirt-daemon-driver-lxc-1.2.8-3.el7.x86_64
Sep 19 02:09:16 Updated: libvirt-daemon-config-network-1.2.8-3.el7.x86_64
Sep 19 02:09:16 Updated: libvirt-1.2.8-3.el7.x86_64
Sep 19 02:09:16 Updated: libvirt-daemon-kvm-1.2.8-3.el7.x86_64
Sep 19 02:09:23 libvirt-client-1.2.8-2.el7.x86_64: ts_done name in te is libvirt-daemon should be libvirt-client-1.2.8-2.el7.x86_64

I will attach anything else you need when I reproduce it again; sorry about that.

Comment 15 Richard W.M. Jones 2014-09-29 16:30:06 UTC
Just adding NEEDINFO again, so I remember what state this bug is in.

Comment 16 zhoujunqin 2014-09-30 01:28:02 UTC
(In reply to Richard W.M. Jones from comment #15)
> Just adding NEEDINFO again, so I remember what state this bug is in.

Yes, rjones, we can leave this bug in NEEDINFO status as a record. As Comment 10 and Comment 14 say, tzheng and I cannot reproduce it on our machines now; we will provide the information the moment we reproduce it. Thanks for your understanding.

Comment 18 Richard W.M. Jones 2015-09-23 14:37:02 UTC
Interested to know if this bug has been seen since last year?

If not I think we can just close the bug, as being a strange
unexplained anomaly.

Comment 19 tingting zheng 2015-09-24 02:45:14 UTC
(In reply to Richard W.M. Jones from comment #18)
> Interested to know if this bug has been seen since last year?
> 
> If not I think we can just close the bug, as being a strange
> unexplained anomaly.

I didn't see this bug occur again, so please close it.

Comment 20 Richard W.M. Jones 2015-09-24 07:32:54 UTC
Closing per comment 19.  Please reopen if the bug is seen again.

