Bug 490255 - libvir:Remote error : Failed to start SASL negotiation: -4 (SASL (-4) : no mechanism available: No worthy mechs found) when migrating the guest
Summary: libvir:Remote error : Failed to start SASL negotiation: -4 (SASL (-4) : no me...
Keywords:
Status: CLOSED DUPLICATE of bug 489250
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Daniel Veillard
QA Contact:
URL:
Whiteboard:
Duplicates: 490419 (view as bug list)
Depends On:
Blocks: 489250
 
Reported: 2009-03-14 10:14 UTC by Vivian Bian
Modified: 2016-04-27 04:07 UTC (History)
CC List: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2009-03-31 15:21:37 UTC
Embargoed:


Attachments

Description Vivian Bian 2009-03-14 10:14:35 UTC
+++ This bug was initially created as a clone of Bug #489250 +++

Created an attachment (id=334466)
The XML file used to create the guest.

Description of problem:
Tried to migrate a running guest to another host, but got this error:
libvir:Remote error : Failed to start SASL negotiation: -4 (SASL (-4) : no mechanism available: No worthy mechs found)

Version-Release number of selected component (if applicable):
ovirt-node-image-1.0-1.snap3.el5ovirt
libvirt-0.5.1-3.el5ovirt
two boxes with Intel VT

How reproducible:
Always

Steps to Reproduce:
[on the destination host]
1. # qemu-img create xxx.img XG (the image name and size are the same as on the source box)
2. # service libvirtd restart

[on the source box]
3. # virsh migrate --live single qemu+tcp://10.66.70.10/system
  
Actual results:
libvir:Remote error : Failed to start SASL negotiation: -4 (SASL (-4) : no mechanism available: No worthy mechs found)

Expected results:
The guest is migrated successfully.


Additional info:
If the two boxes are from different vendors (Intel and AMD), the migration won't succeed; between boxes from the same vendor it does succeed after turning off SASL authentication.
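
For reference, a hedged sketch of the setting this refers to: on a libvirt host, SASL for plain TCP connections is switched on or off in /etc/libvirt/libvirtd.conf (the value below is illustrative, not taken from the reporter's boxes):

    # /etc/libvirt/libvirtd.conf
    # "sasl" forces SASL negotiation on qemu+tcp:// connections;
    # setting this to "none" is what "turning off SASL authentication"
    # amounts to (libvirtd must be restarted afterwards).
    auth_tcp = "sasl"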

--- Additional comment from berrange on 2009-03-09 07:05:15 EDT ---

What SASL mechanism have you got enabled on the destination ?

I think the most likely cause of the problem is that virsh is not allowing for authentication credentials when connecting to the destination.

    /* Temporarily connect to the destination host. */
    dconn = virConnectOpen (desturi);
    if (!dconn) goto done;


When it should be using 

    ctl->conn = virConnectOpenAuth(ctl->name,
                                   virConnectAuthPtrDefault,
                                   ctl->readonly ? VIR_CONNECT_RO : 0);


to allow prompting of username + password for auth mechanisms which need it.

As it currently stands, only GSSAPI would work.
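
A minimal sketch of the suggested change, reusing desturi and dconn from the excerpt above (illustrative only, not the actual patch):

    /* Open the destination connection with the default interactive
     * credential callback, so mechanisms such as DIGEST-MD5 can prompt
     * for a username and password instead of failing outright. */
    dconn = virConnectOpenAuth(desturi, virConnectAuthPtrDefault, 0);
    if (!dconn) goto done;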

--- Additional comment from apevec on 2009-03-09 07:25:21 EDT ---

(In reply to comment #1)
> What SASL mechanism have you got enabled on the destination ?

Yeah, it's digest-md5 as configured in ovirt Node stand-alone mode.
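
For context, a hedged sketch of where that mechanism is usually configured on the node (the path and contents follow libvirt's stock Cyrus SASL setup and are assumptions, not taken from this bug):

    # /etc/sasl2/libvirt.conf (assumed location)
    # Restrict libvirtd's SASL negotiation to DIGEST-MD5, which needs a
    # username/password entry in the sasldb below (see saslpasswd2).
    mech_list: digest-md5
    sasldb_path: /etc/libvirt/passwd.db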
 
> I think the most likely cause of the problem is that virsh is not allowing for
> authentication credentials when connecting to the destination.

yes, that was my conclusion after reading the source as well.

Changing to libvirt, do we also need to clone to Virt.Tools/libvirt or just change the Product to Virt.Tools?

--- Additional comment from pmyers on 2009-03-09 13:30:21 EDT ---

We shouldn't just change the product to Virt.Tools since we do need to track this for the RHEVH product.  But you can certainly clone this for Virt.Tools so that it can be tracked upstream.

Comment 1 Alan Pevec 2009-03-16 10:53:28 UTC
*** Bug 490419 has been marked as a duplicate of this bug. ***

Comment 2 Gerrit Slomma 2009-03-23 12:58:44 UTC
But migration of KVM domains won't work!

[root@rr019 boinc]# cd
[root@rr019 ~]# virsh --connect qemu:///system nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       1700 MHz
CPU socket(s):       2
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         4047848 kB

[root@rr019 ~]# virsh --connect qemu+tcp://192.168.1.19:8002/system nodeinfo
Please enter your authentication name:root
Please enter your password:
CPU model:           x86_64
CPU(s):              8
CPU frequency:       1700 MHz
CPU socket(s):       2
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         4047848 kB

[root@rr019 ~]# virsh --connect qemu+tcp://192.168.1.19:8002/system
Please enter your authentication name:root
Please enter your password:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # connect qemu+tcp://192.168.1.19:8002/system
error: failed to connect to the hypervisor
error: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)

virsh # nodeinfo
error: no valid connection

virsh # connect

virsh # nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       1700 MHz
CPU socket(s):       2
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         4047848 kB

virsh # quit

[root@rr019 ~]# LIBVIRT_DEBUG=1 virsh --connect qemu+tcp://192.168.1.19:8002/system nodeinfo
13:41:58.457: debug : virInitialize:287 : register drivers
13:41:58.457: debug : virRegisterDriver:660 : registering Test as driver 0
13:41:58.457: debug : virRegisterNetworkDriver:560 : registering Test as network driver 0
13:41:58.457: debug : virRegisterStorageDriver:591 : registering Test as storage driver 0
13:41:58.457: debug : virRegisterDeviceMonitor:622 : registering Test as device driver 0
13:41:58.457: debug : virRegisterDriver:660 : registering Xen as driver 1
13:41:58.457: debug : virRegisterDriver:660 : registering OPENVZ as driver 2
13:41:58.457: debug : virRegisterDriver:660 : registering remote as driver 3
13:41:58.457: debug : virRegisterNetworkDriver:560 : registering remote as network driver 1
13:41:58.457: debug : virRegisterStorageDriver:591 : registering remote as storage driver 1
13:41:58.457: debug : virRegisterDeviceMonitor:622 : registering remote as device driver 1
13:41:58.457: debug : virConnectOpenAuth:1089 : name=qemu+tcp://192.168.1.19:8002/system, auth=0x309949ca40, flags=0
(...)
virsh# connect qemu+tcp://192.168.1.19:8002/system
13:43:05.273: debug : virConnectClose:1107 : conn=0x63c4120
13:43:05.273: debug : call:6462 : Doing call 2 (nil)
13:43:05.273: debug : call:6532 : We have the buck 2 0x642d340 0x642d340
13:43:05.274: debug : processCallRecvLen:6120 : Got length, now need 28 total (24 more)
13:43:05.274: debug : processCalls:6388 : Giving up the buck 2 0x642d340 (nil)
13:43:05.274: debug : call:6563 : All done with our call 2 (nil) 0x642d340
13:43:05.274: debug : virUnrefConnect:226 : unref connection 0x63c4120 1
13:43:05.274: debug : virReleaseConnect:187 : release connection 0x63c4120
13:43:05.275: debug : virConnectOpen:1039 : name=qemu+tcp://192.168.1.19:8002/system
(...)
error: failed to connect to the hypervisor
error: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)

virsh #

It seems like the connect in virsh should call virConnectOpenAuth instead of virConnectOpen. So a live migration of a KVM domain won't work - or how is this intended to work?
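
For completeness, a hypothetical sketch of how a client could authenticate such a qemu+tcp:// connection non-interactively by passing its own credential callback to virConnectOpenAuth(); the username and password here are placeholders, and this is not code from virsh or from this bug:

    #include <stdlib.h>
    #include <string.h>
    #include <libvirt/libvirt.h>

    /* Fill in the requested credentials from fixed values instead of
     * prompting on the terminal. */
    static int
    fixed_auth_cb(virConnectCredentialPtr cred, unsigned int ncred, void *cbdata)
    {
        unsigned int i;
        (void)cbdata;
        for (i = 0; i < ncred; i++) {
            const char *value = NULL;
            if (cred[i].type == VIR_CRED_AUTHNAME)
                value = "root";        /* placeholder username */
            else if (cred[i].type == VIR_CRED_PASSPHRASE)
                value = "secret";      /* placeholder password */
            if (!value)
                return -1;
            cred[i].result = strdup(value);
            if (!cred[i].result)
                return -1;
            cred[i].resultlen = strlen(value);
        }
        return 0;
    }

    int
    main(void)
    {
        int credtypes[] = { VIR_CRED_AUTHNAME, VIR_CRED_PASSPHRASE };
        virConnectAuth auth = {
            .credtype  = credtypes,
            .ncredtype = sizeof(credtypes) / sizeof(credtypes[0]),
            .cb        = fixed_auth_cb,
            .cbdata    = NULL,
        };
        virConnectPtr conn =
            virConnectOpenAuth("qemu+tcp://192.168.1.19:8002/system", &auth, 0);

        if (!conn)
            return 1;   /* connection or SASL authentication failed */
        virConnectClose(conn);
        return 0;
    }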

[root@rr019 ~]# yum list libvirt kvm qemu
(...)
Installed Packages
kvm.x86_64                                               84-1.el5                                               installed
libvirt.x86_64                                           0.6.1-1                                                installed
qemu.x86_64                                              0.9.1-11.el5                                           installed

Packages kvm and libvirt are compiled from the sources of the respective vendors (linux-kvm.org & libvirt.org); qemu and qemu-img are from EPEL.

Comment 3 Gerrit Slomma 2009-03-28 12:32:45 UTC
Problem exists with -smp 1 for the virtual machine too.

[root@rr019v2 ~]# dmesg
BUG: soft lockup - CPU#0 stuck for 10s! [hald-addon-stor:1568]
CPU 0:
Modules linked in: ipv6 xfrm_nalgo crypto_api dm_mirror dm_multipath scsi_dh video hwmon backlight sbs i2c_ec button battery asus_acpi acpi_memhotplug ac lp floppy i2c_piix4 ide_cd pcspkr 8139too cdrom i2c_core 8139cp parport_pc mii virtio_pci parport serio_raw virtio_ring virtio dm_raid45 dm_message dm_region_hash dm_log dm_mod dm_mem_cache ata_piix libata sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 1568, comm: hald-addon-stor Not tainted 2.6.18-128.el5 #1
RIP: 0010:[<ffffffff8000ec28>]  [<ffffffff8000ec28>] ide_do_request+0x30f/0x78d
RSP: 0018:ffffffff80425d78  EFLAGS: 00000246
RAX: 0000000000204108 RBX: ffff81003fd18480 RCX: ffff81003fd18480
RDX: ffff810000000000 RSI: ffff81003fd18480 RDI: 000000000000000f
RBP: ffffffff80425cf0 R08: 000000003ff98000 R09: 0000000000000000
R10: ffff81003fd18480 R11: 0000000000000110 R12: ffffffff8005dc8e
R13: ffffffff804cb918 R14: ffffffff800774da R15: ffffffff80425cf0
FS:  00002b7ca95d36e0(0000) GS:ffffffff803ac000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00002aded67c7000 CR3: 000000003aed4000 CR4: 00000000000006e0

Call Trace:
 <IRQ>  [<ffffffff8000eba8>] ide_do_request+0x28f/0x78d
 [<ffffffff88206103>] :ide_cd:cdrom_decode_status+0x31c/0x347
 [<ffffffff882066fe>] :ide_cd:cdrom_pc_intr+0x27/0x21c
 [<ffffffff8000d4f5>] ide_intr+0x1af/0x1df
 [<ffffffff80010a46>] handle_IRQ_event+0x51/0xa6
 [<ffffffff800b7ade>] __do_IRQ+0xa4/0x103
 [<ffffffff8006c95d>] do_IRQ+0xe7/0xf5
 [<ffffffff8005d615>] ret_from_intr+0x0/0xa
 [<ffffffff80059cd6>] ide_outsw+0x0/0x9
 [<ffffffff80011f84>] __do_softirq+0x51/0x133
 [<ffffffff8005e2fc>] call_softirq+0x1c/0x28
 [<ffffffff8006cada>] do_softirq+0x2c/0x85
 [<ffffffff8005dc8e>] apic_timer_interrupt+0x66/0x6c
 <EOI>  [<ffffffff80059cd6>] ide_outsw+0x0/0x9
 [<ffffffff80059cde>] ide_outsw+0x8/0x9
 [<ffffffff801cd618>] atapi_output_bytes+0x23/0x5e
 [<ffffffff882066d7>] :ide_cd:cdrom_pc_intr+0x0/0x21c
 [<ffffffff88206d19>] :ide_cd:cdrom_transfer_packet_command+0xb0/0xdb
 [<ffffffff88206d91>] :ide_cd:cdrom_do_pc_continuation+0x0/0x2b
 [<ffffffff882055ad>] :ide_cd:cdrom_start_packet_command+0x14f/0x15b
 [<ffffffff8000eec4>] ide_do_request+0x5ab/0x78d
 [<ffffffff8013cb30>] elv_insert+0xd6/0x1f7
 [<ffffffff800414bc>] ide_do_drive_cmd+0xc0/0x116
 [<ffffffff882035f7>] :ide_cd:cdrom_queue_packet_command+0x46/0xe2
 [<ffffffff80059cde>] ide_outsw+0x8/0x9
 [<ffffffff801cc987>] ide_init_drive_cmd+0x10/0x24
 [<ffffffff88203b0f>] :ide_cd:cdrom_check_status+0x62/0x71
 [<ffffffff8013e034>] blk_end_sync_rq+0x0/0x2e
 [<ffffffff88203b3a>] :ide_cd:ide_cdrom_check_media_change_real+0x1c/0x37
 [<ffffffff881d7076>] :cdrom:media_changed+0x44/0x74
 [<ffffffff800df8d7>] check_disk_change+0x1f/0x50
 [<ffffffff881db33b>] :cdrom:cdrom_open+0x8ef/0x93c
 [<ffffffff8000cbb6>] do_lookup+0x65/0x1e6
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff8000a3be>] __link_path_walk+0xdf8/0xf42
 [<ffffffff8002c77e>] mntput_no_expire+0x19/0x89
 [<ffffffff8000e881>] link_path_walk+0xd3/0xe5
 [<ffffffff80063db6>] do_nanosleep+0x47/0x70
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff80057987>] kobject_get+0x12/0x17
 [<ffffffff80140caf>] get_disk+0x3f/0x81
 [<ffffffff8005a659>] exact_lock+0xc/0x14
 [<ffffffff801b8f11>] kobj_lookup+0x132/0x19b
 [<ffffffff88203e8d>] :ide_cd:idecd_open+0x9f/0xd0
 [<ffffffff800dff49>] do_open+0xa2/0x30f
 [<ffffffff800e040a>] blkdev_open+0x0/0x4f
 [<ffffffff800e042d>] blkdev_open+0x23/0x4f
 [<ffffffff8001e4f2>] __dentry_open+0xd9/0x1dc
 [<ffffffff80026f1f>] do_filp_open+0x2a/0x38
 [<ffffffff80063db6>] do_nanosleep+0x47/0x70
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff800198ab>] do_sys_open+0x44/0xbe
 [<ffffffff8005d116>] system_call+0x7e/0x83

but the CPU does not go up to 100% and the machine remains usable.
Migrating back from B to A also works; the lockup is thrown once, and the virtual machine is still usable.
Maybe there is a problem with the threads and they should be pinned to a specific CPU?

Comment 4 Gerrit Slomma 2009-03-28 12:34:28 UTC
Argh, wrong bug for comment 3, please delete it.

Comment 5 Daniel Berrangé 2009-03-31 15:21:37 UTC

*** This bug has been marked as a duplicate of bug 489250 ***

