Created attachment 1014110 [details]
logs and steps

Description of problem:

I am running Juno on RHEL 7.0. However, when I attempt a migration (nova live-migration --block-migrate x), I receive the following:

2015-04-13 23:12:56.707 23280 ERROR nova.virt.libvirt.driver [-] [instance: 74f067a4-5cff-4912-b6ea-a4e75a41f125] Live Migration failure: internal error: unable to execute QEMU command 'migrate': this feature or command is not currently supported
2015-04-13 23:12:57.202 23280 ERROR nova.virt.libvirt.driver [-] [instance: 74f067a4-5cff-4912-b6ea-a4e75a41f125] Migration operation has aborted
2015-04-13 23:12:57.737 23280 WARNING nova.virt.libvirt.driver [-] [instance: 74f067a4-5cff-4912-b6ea-a4e75a41f125] Error monitoring migration: Remote error: ProcessExecutionError Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf sginfo -r
Exit code: 96
Stdout: u''
Stderr: u'/usr/bin/nova-rootwrap: Executable not found: sginfo (filter match = sginfo)\n'

The versions I have:

[root@mac001ec9f72532 ~]# rpm -qa | grep qemu
qemu-kvm-common-rhev-2.1.2-23.el7.x86_64
ipxe-roms-qemu-20130517-6.gitc4bce43.el7.noarch
qemu-kvm-rhev-2.1.2-23.el7.x86_64
qemu-img-rhev-2.1.2-23.el7.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.1.x86_64

The configuration I did:

-> Edit /etc/libvirt/libvirtd.conf:
   listen_tls = 0
   listen_tcp = 1
   auth_tcp = "none"

-> Edit /etc/sysconfig/libvirtd:
   LIBVIRTD_ARGS="--listen"

-> Restart libvirtd:
   service libvirtd restart

-> Edit /etc/nova/nova.conf, adding the following line:
   live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED

Steps to Reproduce:
1. Install one instance.
2. Do a block live migration.

I attached my log and steps.
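For reference, this is roughly how the TCP listener configured above can be verified between the compute nodes (a sketch; the hostname compute-b.example.com is a placeholder for the peer compute node):

   # Confirm libvirtd is listening on the libvirt TCP port (16509 by default)
   ss -tlnp | grep 16509

   # From one compute node, open an unauthenticated TCP connection to the other;
   # this should print the remote host's name if the listener is reachable
   virsh -c qemu+tcp://compute-b.example.com/system hostname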
You have not specified an important piece of information: the value of 'block_migration_flag' in nova.conf on your Compute nodes.

If you have a pre-existing block_migration_flag set in nova.conf, then please remove the VIR_MIGRATE_TUNNELLED flag and re-test. If you _don't_ have 'block_migration_flag' set, then please set it as below and re-test:

block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC

Rationale, from a similar bug [1] (closed as NOTABUG):

"That is the problem. You have requested the TUNNELLED flag, and that does not work with the new-style NBD-based block migration code in QEMU. As such, libvirt is trying to fall back to the old-style block migration code, but this is disabled in RHEL. The solution here is to not request TUNNELLED migration, though this does mean the migration stream will be run in clear text rather than encrypted."

Closing this bug as a duplicate, per the above rationale. If you're able to reproduce it without the VIR_MIGRATE_TUNNELLED flag set in the block_migration_flag config attribute in nova.conf, then reopen with all relevant info/logs.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1201880#c9
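To make the requested change concrete, a minimal sketch of the relevant nova.conf stanza on each Compute node (the [DEFAULT] section placement and the openstack-nova-compute service name are assumptions based on a typical RHEL OSP Juno deployment):

   [DEFAULT]
   # NBD-based block migration requires that VIR_MIGRATE_TUNNELLED not be set
   block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC

   # After editing, restart the compute service so the new flags take effect:
   #   service openstack-nova-compute restart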
Created attachment 1015104 [details]
latest log and configuration file

I added the line "block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC" to cinder.conf, but the migration is now hanging in the "migrating" status. I attached all of the logs.

To clarify my setup: I deployed two compute nodes and one controller node, and I only added that line on the compute nodes, not on the controller node.
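For reference, this is how I was watching the stuck migration (a sketch; the domain name instance-0000000a and the instance UUID are placeholders, found via 'virsh list' on the source compute node and 'nova list' on the controller):

   # On the source compute node: show live migration job progress
   virsh domjobinfo instance-0000000a

   # On the controller: check the instance's task state
   nova show <instance-uuid> | grep task_state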
(In reply to kevin from comment #6)
> Created attachment 1015104 [details]
> latest log and configuration file
>
> I added the line "block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,
> VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC" to
> cinder.conf

It's not cinder.conf; it is nova.conf on the Compute nodes (also, please attach the nova.conf files). If you haven't done so, then please redo the test after adding the line to nova.conf.

And, if possible, perform the test with *contextual* debug logs for libvirt:

https://kashyapc.fedorapeople.org/virt/openstack/request-nova-libvirt-qemu-debug-logs.txt

[. . .]
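For reference, the kind of libvirtd.conf logging settings the linked document asks for look roughly like this (a sketch; the exact filter string in the linked instructions may differ):

   # /etc/libvirt/libvirtd.conf -- verbose, filtered debug logging
   log_filters="1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 1:util"
   log_outputs="1:file:/var/log/libvirt/libvirtd.log"

   # Restart libvirtd so the new log settings take effect:
   #   service libvirtd restart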
Sorry, the migration completed successfully after a few minutes. I will close this ticket.