Bug 1275476 - auto convergence not working
Summary: auto convergence not working
Keywords:
Status: CLOSED DUPLICATE of bug 1252426
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.17.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Martin Betak
QA Contact: meital avital
URL:
Whiteboard: virt
Depends On:
Blocks:
Reported: 2015-10-27 03:01 UTC by 马立克
Modified: 2016-01-29 12:36 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-29 12:36:24 UTC
oVirt Team: Virt
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description 马立克 2015-10-27 03:01:47 UTC
Description of problem:
I enabled auto convergence in both the cluster and VM settings, and inside the VM I ran a program that writes memory frequently. I then started to migrate the VM, but the migration just hangs in ovirt-engine: the VM status stays at 'Migrating From: 0%'. vdsm shows the following logs:

Thread-122::DEBUG::2015-10-27 10:28:53,331::migration::147::virt.vm::(_setupVdsConnection) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Initiating connection with destination
Thread-122::DEBUG::2015-10-27 10:28:53,341::migration::159::virt.vm::(_setupVdsConnection) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Destination server is: 10.1.110.99:54321
Thread-122::DEBUG::2015-10-27 10:28:53,343::migration::202::virt.vm::(_prepareGuest) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Migration started
Thread-122::DEBUG::2015-10-27 10:28:53,356::migration::286::virt.vm::(run) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::migration semaphore acquired after 0 seconds
Thread-122::DEBUG::2015-10-27 10:28:53,369::stompreactor::377::jsonrpc.AsyncoreClient::(send) Sending response
Thread-122::INFO::2015-10-27 10:28:53,508::migration::335::virt.vm::(_startUnderlyingMigration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Creation of destination VM took: 0 seconds
Thread-122::INFO::2015-10-27 10:28:53,508::migration::354::virt.vm::(_startUnderlyingMigration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::starting migration to qemu+tls://10.1.110.99/system with miguri tcp://10.1.110.99
Thread-123::DEBUG::2015-10-27 10:28:53,508::migration::443::virt.vm::(run) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::migration downtime thread started (10 steps)
Thread-124::DEBUG::2015-10-27 10:28:53,509::migration::500::virt.vm::(monitor_migration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::starting migration monitor thread
Thread-123::DEBUG::2015-10-27 10:28:53,509::migration::466::virt.vm::(_set_downtime) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::setting migration downtime to 51
...
...
...
Thread-124::INFO::2015-10-27 10:29:03,512::migration::555::virt.vm::(monitor_migration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Migration Progress: 9 seconds elapsed, 68% of data processed
...
...
...
Thread-124::WARNING::2015-10-27 10:29:53,525::migration::548::virt.vm::(monitor_migration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Migration stalling: remaining (82MiB) > lowmark (26MiB). Refer to RHBZ#919201.
Thread-124::INFO::2015-10-27 10:29:53,525::migration::555::virt.vm::(monitor_migration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Migration Progress: 59 seconds elapsed, 99% of data processed
...
...
...
Thread-124::WARNING::2015-10-27 10:32:34,407::migration::535::virt.vm::(monitor_migration) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::Migration is stuck: Hasn't progressed in 150.677314043 seconds. Aborting.
Thread-124::DEBUG::2015-10-27 10:32:34,409::migration::558::virt.vm::(stop) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::stopping migration monitor thread
...
...
...
Thread-122::DEBUG::2015-10-27 10:32:41,558::migration::558::virt.vm::(stop) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::stopping migration monitor thread
Thread-122::DEBUG::2015-10-27 10:32:41,559::migration::453::virt.vm::(stop) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::stopping migration downtime thread
Thread-123::DEBUG::2015-10-27 10:32:41,561::migration::450::virt.vm::(run) vmId=`b19803f1-844a-422b-9a60-5c1381999fc6`::migration downtime thread exiting

Version-Release number of selected component (if applicable):
4.17.9

How reproducible:
always

Steps to Reproduce:
1. Enable auto convergence in both the cluster settings and the VM settings.
2. In the VM, run a program that writes memory frequently.
3. Migrate the VM while the program is running.

Actual results:
The migration is aborted in vdsm, and the VM hangs in ovirt-engine.

Expected results:
The migration succeeds thanks to auto convergence.


Additional info:

Comment 1 马立克 2015-10-27 03:04:02 UTC
The VM hanging in 'Migrating From' status could be a separate ovirt-engine bug,
because the migration has already been aborted in vdsm.

Comment 2 Yaniv Kaul 2015-10-27 07:29:14 UTC
Can you provide more details, on your host, VM and benchmark:
What is the host, its networking, the VM's memory and CPU, the benchmark being used to 'dirty' the memory, qemu-kvm version, etc.?

Comment 3 马立克 2015-10-27 07:47:58 UTC
(In reply to Yaniv Kaul from comment #2)
> Can you provide more details, on your host, VM and benchmark:
> What is the host, its networking, the VM's memory and CPU, the benchmark
> being used to 'dirty' the memory, qemu-kvm version, etc.?

The source and destination hosts have the same hardware and software configuration:
CPU: Intel(R) Xeon(R) E5-2603 @ 1.8GHz
MEMORY: 40GB
NETWORKING: 1Gig NICs between hosts
OS: RHEL 7.2 beta
qemu-kvm: 2.3.0-29.1
libvirt-daemon: 1.2.17-5

VM's configuration:
CPU: 4 vcpu(4 sockets, 1 core per socket)
Defined Memory: 4096 MB
Physical Memory Guaranteed: 1024 MB

the benchmark:
mmaptest.c:
/* Dirties a 256 MiB mmap'ed region with random 8-byte writes. */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/time.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LENGTH (256*1024*1024)   /* 256 MiB working set */
#define MAX (5000*10000)         /* writes per outer pass */

int main(int argc, char *argv[])
{
    int i, j, fd;
    void *addr;
    long *array;
    long ran;
    struct timeval begin, end;

    if (argc < 2) {
        fprintf(stderr, "usage: %s filename\n", argv[0]);
        return -1;
    }

    fd = open(argv[1], O_CREAT|O_RDWR, 0666);
    if (fd < 0 || ftruncate(fd, LENGTH) < 0) {
        perror("open/ftruncate");
        return -1;
    }

    addr = mmap(0, LENGTH, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    memset(addr, 0, LENGTH);
    array = addr;

    gettimeofday(&begin, NULL);
    /* 50 passes of 50M random writes, each dirtying a 4 KiB page */
    for (j = 0; j < 50; j++) {
        for (i = 0; i < MAX; i++) {
            ran = random();
            array[ran % (LENGTH/sizeof(long))] = ran;
        }
    }
    gettimeofday(&end, NULL);
    printf("time:%lld\n",
           (long long)(end.tv_sec - begin.tv_sec) * 1000000
           + (end.tv_usec - begin.tv_usec));

    return 0;
}
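
For reference, a likely way to build and run it (my assumed invocation; the file argument is any scratch path inside the guest):

gcc -O2 -o mmaptest mmaptest.c
./mmaptest /tmp/dirty.dat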

Comment 4 马立克 2015-10-27 08:00:16 UTC
I also gathered some information during the migration (virsh domjobinfo <domain>):
Job type:         Unbounded
Time elapsed:     136453  ms
Data processed:   4.164 GiB
Data remaining:   20.633 MiB
Data total:       4.095 GiB
Memory processed: 4.164 GiB
Memory remaining: 20.633 MiB
Memory total:     4.095 GiB
Memory bandwidth: 32.016 MiB/s
Constant pages:   905653
Normal pages:     1035977
Normal data:      3.952 GiB
Expected downtime:1225 ms
Setup time:       16 ms
Compression cache:64.000 MiB
Compressed data:  201.476 MiB
Compressed pages: 215384
Compressed cache misses: 867613
Compression overflows: 0

The 'Data remaining' value climbs back to around 250-300 MiB each time it gets close to zero.
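
A quick sanity check on these numbers (my arithmetic, not part of the tool output): 4.164 GiB processed in 136.453 s is about 31 MiB/s, consistent with the reported 'Memory bandwidth: 32.016 MiB/s'. In other words, the transfer runs at roughly 32 MiB/s while the guest keeps re-dirtying pages faster than they can be sent.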

Comment 5 Yaniv Kaul 2015-10-27 08:06:39 UTC
(In reply to 马立克 from comment #3)
> (In reply to Yaniv Kaul from comment #2)
> > Can you provide more details, on your host, VM and benchmark:
> > What is the host, its networking, the VM's memory and CPU, the benchmark
> > being used to 'dirty' the memory, qemu-kvm version, etc.?
> 
> The source and destination hosts have the same hardware and software
> configuration:
> CPU: Intel(R) Xeon(R) E5-2603 @ 1.8GHz
> MEMORY: 40GB
> NETWORKING: 1Gig NICs between hosts

Regardless of any other issues, 1Gb networking is problematic: it allows roughly 100+ MB/s of memory-content transfer (1 Gbit/s is about 125 MB/s before protocol overhead). If you dirty memory faster than that, there is little chance for the migration to converge...

> OS: RHEL 7.2 beta
> qemu-kvm: 2.3.0-29.1
> libvirt-daemon: 1.2.17-5
> 
> VM's configuration:
> CPU: 4 vcpu(4 sockets, 1 core per socket)
> Defined Memory: 4096 MB
> Physical Memory Guaranteed: 1024 MB
> 
> the benchmark:
> mmaptest.c: [source snipped; see comment #3]

At what rate does this program dirty memory?

Comment 6 马立克 2015-10-27 08:23:14 UTC
(In reply to Yaniv Kaul from comment #5)
> (In reply to 马立克 from comment #3)
> > (In reply to Yaniv Kaul from comment #2)
> > > Can you provide more details, on your host, VM and benchmark:
> > > What is the host, its networking, the VM's memory and CPU, the benchmark
> > > being used to 'dirty' the memory, qemu-kvm version, etc.?
> > 
> > The source and destination hosts have the same hardware and software
> > configuration:
> > CPU: Intel(R) Xeon(R) E5-2603 @ 1.8GHz
> > MEMORY: 40GB
> > NETWORKING: 1Gig NICs between hosts
> 
> Regardless of any other issues, 1Gb networking is problematic: it allows
> roughly 100+ MB/s of memory-content transfer (1 Gbit/s is about 125 MB/s
> before protocol overhead). If you dirty memory faster than that, there is
> little chance for the migration to converge...

I think what you describe is exactly why the auto convergence feature exists. Because the dirtying speed is higher than the transfer speed, the migration is, as you say, almost impossible to converge on its own. That is what auto convergence is for: we expect it to throttle the VM's CPU so that the migration has a chance to converge.

Maybe I'm wrong about the intended use of auto convergence.
 
> 
> > OS: RHEL 7.2 beta
> > qemu-kvm: 2.3.0-29.1
> > libvirt-daemon: 1.2.17-5
> > 
> > VM's configuration:
> > CPU: 4 vcpu(4 sockets, 1 core per socket)
> > Defined Memory: 4096 MB
> > Physical Memory Guaranteed: 1024 MB
> > 
> > the benchmark:
> > mmaptest.c: [source snipped; see comment #3]
> 
> At what rate does this program dirty memory?

This program takes over 400 seconds to run.
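
A rough estimate of the dirtying rate (my arithmetic, based on the constants in the source above and the ~400 s runtime): the loop issues 50 x 50,000,000 = 2.5 billion random 8-byte writes in ~400 s, i.e. roughly 6 million writes per second. Each write dirties a random 4 KiB page of the 256 MiB (65,536-page) region, so practically the whole 256 MiB working set is re-dirtied within a fraction of a second, far faster than the ~31 MiB/s transfer rate seen in comment #4. That would also explain 'Data remaining' bouncing back to 250-300 MiB there.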

Comment 7 Michal Skrivanek 2015-11-24 11:21:04 UTC
There are known issues with the current (qemu 2.3) auto convergence algorithm in qemu-kvm. It's not magic.

Based on your info in comment #4, you can also tune other migration values in vdsm.conf (this is relevant only for this artificial case, though; it may not be valid for any real load).
You can increase the migration_downtime parameter, given the Expected downtime of 1225 ms.
You can extend the "stuck" detection from 150 s to e.g. 400 s via migration_progress_timeout, since your program runs for about 400 s.
The easiest change is migration_max_bandwidth, which is limiting you a lot: try 100 (the default is 32), which should still be fine for one migration at a time. The Expected downtime observed in virsh should then decrease significantly.
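
For illustration, the corresponding vdsm.conf fragment might look like the sketch below (the [vars] section name and the concrete values are my assumptions; the three option names are the ones mentioned above, and vdsmd needs a restart to pick up changes):

# /etc/vdsm/vdsm.conf
[vars]
# allow a longer final pause (in ms); Expected downtime was ~1225 ms
migration_downtime = 2000
# abort only after 400 s without progress, instead of the 150 s default
migration_progress_timeout = 400
# raise the per-migration bandwidth cap from the 32 MiB/s default
migration_max_bandwidth = 100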

Comment 8 马立克 2015-12-07 02:35:59 UTC
(In reply to Michal Skrivanek from comment #7)
> [tuning suggestions snipped; see comment #7]

I tested again exactly as you suggested, but the migration still failed. I understand that auto convergence is not magic. In fact this workload is not important in itself; what I want is a test case that demonstrates the value of the auto convergence function, i.e. one that fails with auto convergence disabled and succeeds with it enabled. Do you have any suggestions for such a test case?

Comment 9 Michal Skrivanek 2016-01-29 12:36:24 UTC
I suggest comparing against some real-life load instead of this artificial test; I would expect the test to dirty memory far too fast. You can watch the dirty memory size in the libvirt migration stats during the migration, but again, I don't think auto convergence would have a significant effect on a simple "trash memory as fast as you can" app.
General enhancements are tracked in bug 1252426.
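
For example, one way to watch those stats live (the same command as in comment #4; <domain> is the VM's libvirt domain name):

watch -n 1 'virsh domjobinfo <domain>'

and look at how 'Data remaining' / 'Memory remaining' evolve between iterations.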

*** This bug has been marked as a duplicate of bug 1252426 ***

