Bug 490255
| Summary: | libvir: Remote error : Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found) when migrating the guest | | |
|---|---|---|---|
| Product: | [Community] Virtualization Tools | Reporter: | Vivian Bian <vbian> |
| Component: | libvirt | Assignee: | Daniel Veillard <veillard> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | low | ||
| Version: | unspecified | CC: | apevec, berrange, crobinso, gerrit.slomma, virt-maint |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | All | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2009-03-31 15:21:37 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 489250 | ||
Description
Vivian Bian
2009-03-14 10:14:35 UTC
*** Bug 490419 has been marked as a duplicate of this bug. ***

But migration of KVM domains won't work!
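(The exact migration command is not shown in this report; assuming the usual virsh syntax of this libvirt generation, an invocation that would hit the error looks roughly like the line below, with guest01 as a placeholder domain name.)

virsh migrate --live guest01 qemu+tcp://192.168.1.19:8002/system

The transcripts that follow isolate the problem to plain connection handling, with no domain involved: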
[root@rr019 boinc]# cd
[root@rr019 ~]# virsh --connect qemu:///system nodeinfo
CPU model: x86_64
CPU(s): 8
CPU frequency: 1700 MHz
CPU socket(s): 2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 4047848 kB
[root@rr019 ~]# virsh --connect qemu+tcp://192.168.1.19:8002/system nodeinfo
Please enter your authentication name:root
Please enter your password:
CPU model: x86_64
CPU(s): 8
CPU frequency: 1700 MHz
CPU socket(s): 2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 4047848 kB
[root@rr019 ~]# virsh --connect qemu+tcp://192.168.1.19:8002/system
Please enter your authentication name:root
Please enter your password:
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
      'quit' to quit
virsh # connect qemu+tcp://192.168.1.19:8002/system
error: failed to connect to the hypervisor
error: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
virsh # nodeinfo
error: no valid connection
virsh # connect
virsh # nodeinfo
CPU model: x86_64
CPU(s): 8
CPU frequency: 1700 MHz
CPU socket(s): 2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 4047848 kB
virsh # quit
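(For context, not shown in this report: a qemu+tcp connection such as the one above is normally authenticated via SASL, with a server-side setup roughly like the sketch below; the concrete values here are assumptions, not taken from the reporter's machines. "No worthy mechs found" generally means the client and server could not agree on any SASL mechanism, for example because the cyrus-sasl plugin for the configured mechanism, such as cyrus-sasl-md5 for digest-md5, is missing on one side.)

/etc/libvirt/libvirtd.conf (server side):
listen_tcp = 1
auth_tcp = "sasl"

/etc/sasl2/libvirt.conf (server side):
mech_list: digest-md5

Add a SASL user for libvirt (prompts for a password):
saslpasswd2 -a libvirt root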
[root@rr019 ~]# LIBVIRT_DEBUG=1 virsh --connect qemu+tcp://192.168.1.19:8002/system nodeinfo
13:41:58.457: debug : virInitialize:287 : register drivers
13:41:58.457: debug : virRegisterDriver:660 : registering Test as driver 0
13:41:58.457: debug : virRegisterNetworkDriver:560 : registering Test as network driver 0
13:41:58.457: debug : virRegisterStorageDriver:591 : registering Test as storage driver 0
13:41:58.457: debug : virRegisterDeviceMonitor:622 : registering Test as device driver 0
13:41:58.457: debug : virRegisterDriver:660 : registering Xen as driver 1
13:41:58.457: debug : virRegisterDriver:660 : registering OPENVZ as driver 2
13:41:58.457: debug : virRegisterDriver:660 : registering remote as driver 3
13:41:58.457: debug : virRegisterNetworkDriver:560 : registering remote as network driver 1
13:41:58.457: debug : virRegisterStorageDriver:591 : registering remote as storage driver 1
13:41:58.457: debug : virRegisterDeviceMonitor:622 : registering remote as device driver 1
13:41:58.457: debug : virConnectOpenAuth:1089 : name=qemu+tcp://192.168.1.19:8002/system, auth=0x309949ca40, flags=0
(...)
virsh # connect qemu+tcp://192.168.1.19:8002/system
13:43:05.273: debug : virConnectClose:1107 : conn=0x63c4120
13:43:05.273: debug : call:6462 : Doing call 2 (nil)
13:43:05.273: debug : call:6532 : We have the buck 2 0x642d340 0x642d340
13:43:05.274: debug : processCallRecvLen:6120 : Got length, now need 28 total (24 more)
13:43:05.274: debug : processCalls:6388 : Giving up the buck 2 0x642d340 (nil)
13:43:05.274: debug : call:6563 : All done with our call 2 (nil) 0x642d340
13:43:05.274: debug : virUnrefConnect:226 : unref connection 0x63c4120 1
13:43:05.274: debug : virReleaseConnect:187 : release connection 0x63c4120
13:43:05.275: debug : virConnectOpen:1039 : name=qemu+tcp://192.168.1.19:8002/system
(...)
error: failed to connect to the hypervisor
error: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
virsh #
Seems like the connect command in virsh should call virConnectOpenAuth instead of virConnectOpen. So a live migration of a KVM domain won't work - or how is this intended to work?
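(To illustrate the distinction the reporter is pointing at, here is a minimal C sketch against the libvirt API; the URI is taken from the transcript above, error handling is trimmed, and this is not the actual virsh code. virConnectOpen() installs no authentication callbacks, so SASL credentials can never be supplied, while virConnectOpenAuth() with the built-in virConnectAuthPtrDefault handler prompts on the terminal the way the initial "virsh --connect" invocation does.)

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *uri = "qemu+tcp://192.168.1.19:8002/system";
    virConnectPtr conn;

    /* No auth callbacks: nothing can answer the SASL challenge,
       which matches the failing in-shell "connect" command. */
    conn = virConnectOpen(uri);
    if (conn == NULL)
        fprintf(stderr, "virConnectOpen failed\n");
    else
        virConnectClose(conn);

    /* Built-in default auth callbacks: prompt on the terminal for the
       SASL username and password, which matches the behaviour of the
       initial "virsh --connect URI" invocation above. */
    conn = virConnectOpenAuth(uri, virConnectAuthPtrDefault, 0);
    if (conn == NULL)
        fprintf(stderr, "virConnectOpenAuth failed\n");
    else
        virConnectClose(conn);

    return 0;
}

Build, for example, with: gcc -o connauth connauth.c $(pkg-config --cflags --libs libvirt) (connauth.c is a hypothetical file name).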
[root@rr019 ~]# yum list libvirt kvm qemu
(...)
Installed Packages
kvm.x86_64 84-1.el5 installed
libvirt.x86_64 0.6.1-1 installed
qemu.x86_64 0.9.1-11.el5 installed
Packages kvm and libvirt are compiled from sources of the respective vendors (linux-kvm.org & libvirt.org); qemu and qemu-img are from EPEL.
Problem exists with -smp 1 for the virtual machine too.

[root@rr019v2 ~]# dmesg
BUG: soft lockup - CPU#0 stuck for 10s! [hald-addon-stor:1568]
CPU 0:
Modules linked in: ipv6 xfrm_nalgo crypto_api dm_mirror dm_multipath scsi_dh video hwmon backlight sbs i2c_ec button battery asus_acpi acpi_memhotplug ac lp floppy i2c_piix4 ide_cd pcspkr 8139too cdrom i2c_core 8139cp parport_pc mii virtio_pci parport serio_raw virtio_ring virtio dm_raid45 dm_message dm_region_hash dm_log dm_mod dm_mem_cache ata_piix libata sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 1568, comm: hald-addon-stor Not tainted 2.6.18-128.el5 #1
RIP: 0010:[<ffffffff8000ec28>]  [<ffffffff8000ec28>] ide_do_request+0x30f/0x78d
RSP: 0018:ffffffff80425d78  EFLAGS: 00000246
RAX: 0000000000204108 RBX: ffff81003fd18480 RCX: ffff81003fd18480
RDX: ffff810000000000 RSI: ffff81003fd18480 RDI: 000000000000000f
RBP: ffffffff80425cf0 R08: 000000003ff98000 R09: 0000000000000000
R10: ffff81003fd18480 R11: 0000000000000110 R12: ffffffff8005dc8e
R13: ffffffff804cb918 R14: ffffffff800774da R15: ffffffff80425cf0
FS: 00002b7ca95d36e0(0000) GS:ffffffff803ac000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00002aded67c7000 CR3: 000000003aed4000 CR4: 00000000000006e0
Call Trace:
 <IRQ>  [<ffffffff8000eba8>] ide_do_request+0x28f/0x78d
 [<ffffffff88206103>] :ide_cd:cdrom_decode_status+0x31c/0x347
 [<ffffffff882066fe>] :ide_cd:cdrom_pc_intr+0x27/0x21c
 [<ffffffff8000d4f5>] ide_intr+0x1af/0x1df
 [<ffffffff80010a46>] handle_IRQ_event+0x51/0xa6
 [<ffffffff800b7ade>] __do_IRQ+0xa4/0x103
 [<ffffffff8006c95d>] do_IRQ+0xe7/0xf5
 [<ffffffff8005d615>] ret_from_intr+0x0/0xa
 [<ffffffff80059cd6>] ide_outsw+0x0/0x9
 [<ffffffff80011f84>] __do_softirq+0x51/0x133
 [<ffffffff8005e2fc>] call_softirq+0x1c/0x28
 [<ffffffff8006cada>] do_softirq+0x2c/0x85
 [<ffffffff8005dc8e>] apic_timer_interrupt+0x66/0x6c
 <EOI>  [<ffffffff80059cd6>] ide_outsw+0x0/0x9
 [<ffffffff80059cde>] ide_outsw+0x8/0x9
 [<ffffffff801cd618>] atapi_output_bytes+0x23/0x5e
 [<ffffffff882066d7>] :ide_cd:cdrom_pc_intr+0x0/0x21c
 [<ffffffff88206d19>] :ide_cd:cdrom_transfer_packet_command+0xb0/0xdb
 [<ffffffff88206d91>] :ide_cd:cdrom_do_pc_continuation+0x0/0x2b
 [<ffffffff882055ad>] :ide_cd:cdrom_start_packet_command+0x14f/0x15b
 [<ffffffff8000eec4>] ide_do_request+0x5ab/0x78d
 [<ffffffff8013cb30>] elv_insert+0xd6/0x1f7
 [<ffffffff800414bc>] ide_do_drive_cmd+0xc0/0x116
 [<ffffffff882035f7>] :ide_cd:cdrom_queue_packet_command+0x46/0xe2
 [<ffffffff80059cde>] ide_outsw+0x8/0x9
 [<ffffffff801cc987>] ide_init_drive_cmd+0x10/0x24
 [<ffffffff88203b0f>] :ide_cd:cdrom_check_status+0x62/0x71
 [<ffffffff8013e034>] blk_end_sync_rq+0x0/0x2e
 [<ffffffff88203b3a>] :ide_cd:ide_cdrom_check_media_change_real+0x1c/0x37
 [<ffffffff881d7076>] :cdrom:media_changed+0x44/0x74
 [<ffffffff800df8d7>] check_disk_change+0x1f/0x50
 [<ffffffff881db33b>] :cdrom:cdrom_open+0x8ef/0x93c
 [<ffffffff8000cbb6>] do_lookup+0x65/0x1e6
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff8000a3be>] __link_path_walk+0xdf8/0xf42
 [<ffffffff8002c77e>] mntput_no_expire+0x19/0x89
 [<ffffffff8000e881>] link_path_walk+0xd3/0xe5
 [<ffffffff80063db6>] do_nanosleep+0x47/0x70
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff80057987>] kobject_get+0x12/0x17
 [<ffffffff80140caf>] get_disk+0x3f/0x81
 [<ffffffff8005a659>] exact_lock+0xc/0x14
 [<ffffffff801b8f11>] kobj_lookup+0x132/0x19b
 [<ffffffff88203e8d>] :ide_cd:idecd_open+0x9f/0xd0
 [<ffffffff800dff49>] do_open+0xa2/0x30f
 [<ffffffff800e040a>] blkdev_open+0x0/0x4f
 [<ffffffff800e042d>] blkdev_open+0x23/0x4f
 [<ffffffff8001e4f2>] __dentry_open+0xd9/0x1dc
 [<ffffffff80026f1f>] do_filp_open+0x2a/0x38
 [<ffffffff80063db6>] do_nanosleep+0x47/0x70
 [<ffffffff8000d0d4>] dput+0x2c/0x114
 [<ffffffff800198ab>] do_sys_open+0x44/0xbe
 [<ffffffff8005d116>] system_call+0x7e/0x83

but CPU does not go up to 100% and is usable. Migrating back from B to A also works, the lockup is thrown once, and the virtual machine is still usable. Maybe there is a problem with the threads and they should be pinned to a specific CPU?

Argl, wrong bug for comment number 3, please delete it.

*** This bug has been marked as a duplicate of bug 489250 ***