Bug 1802401 - The actual size of backup image bigger than base image after dd data file in guest
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
: 8.0
Assignee: Virtualization Maintenance
QA Contact: aihua liang
URL:
Whiteboard:
Duplicates: 1814664
Depends On:
Blocks:
 
Reported: 2020-02-13 04:46 UTC by qdong@redhat.com
Modified: 2020-09-16 05:04 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-11 09:49:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:

Description qdong@redhat.com 2020-02-13 04:46:46 UTC
Description of problem:
     The actual size of the backup image is bigger than that of the base image after dd'ing a data file in the guest

Version-Release number of selected component (if applicable):
     qemu-kvm version: qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904.x86_64
     kernel version: 4.18.0-176.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest with a data disk:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device pcie-root-port,id=pcie-root-port-5,slot=5,chassis=5,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-6,slot=6,chassis=6,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-7,slot=7,chassis=7,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-8,slot=8,chassis=8,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-9,slot=9,chassis=9,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-10,slot=10,chassis=10,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-11,slot=11,chassis=11,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-12,slot=12,chassis=12,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-13,slot=13,chassis=13,bus=pcie.0  \
    -device pcie-root-port,id=pcie-root-port-22,slot=22,chassis=22,bus=pcie.0  \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-6  \
    -nodefaults \
    -vga std \
    -m 7168  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200207-010852-GmpzVm5h,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200207-010852-GmpzVm5h,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -chardev socket,nowait,server,path=/var/tmp/serial-serial0-20200207-010852-GmpzVm5h,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200207-010852-GmpzVm5h,path=/var/tmp/seabios-20200207-010852-GmpzVm5h,server,nowait \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-9,addr=0x0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel820-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bus=pcie-root-port-10,addr=0x0 \
    -device virtio-net-pci,mac=9a:3d:27:ae:23:e5,id=iduWeS2J,netdev=idQEZXaJ,bus=pcie-root-port-11,addr=0x0  \
    -netdev tap,id=idQEZXaJ,vhost=on \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off  \
    -monitor stdio \
    -qmp tcp:0:3002,server,nowait  \
    -vnc :11  \
    -blockdev driver=file,node-name=file_data,filename=/home/qdong/data.qcow2 \
    -blockdev driver=qcow2,file=file_data,node-name=drive_data1 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie-root-port-12,addr=0x0 \

2. Create backup images sn1 and sn2 via blockdev-create:
     {'execute':'blockdev-create','arguments':{'options': 
     {'driver':'file','filename':'/root/sn1','size':2147483648},'job-id':'job1'}}
     {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
     {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':2147483648},'job-id':'job2'}}
     {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
     {'execute':'job-dismiss','arguments':{'id':'job1'}}
     {'execute':'job-dismiss','arguments':{'id':'job2'}}

     {'execute':'blockdev-create','arguments':{'options': 
     {'driver':'file','filename':'/root/sn2','size':2147483648},'job-id':'job1'}}
     {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn2','filename':'/root/sn2'}}
     {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn2','size':2147483648},'job-id':'job2'}}
     {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn2','file':'drive_sn2'}}
     {'execute':'job-dismiss','arguments':{'id':'job1'}}
     {'execute':'job-dismiss','arguments':{'id':'job2'}}
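The create/add/dismiss sequence above is identical for sn1 and sn2, so it can be generated programmatically. This is just an illustrative sketch (the helper `backup_target_cmds` is not part of the reproducer); it emits the same QMP commands shown above:

```python
import json

def backup_target_cmds(filename, size, file_node, fmt_node):
    """Build the QMP command sequence that creates a qcow2 backup
    target and adds it as a block node (mirrors the steps above)."""
    return [
        {"execute": "blockdev-create", "arguments": {
            "options": {"driver": "file", "filename": filename, "size": size},
            "job-id": "job1"}},
        {"execute": "blockdev-add", "arguments": {
            "driver": "file", "node-name": file_node, "filename": filename}},
        {"execute": "blockdev-create", "arguments": {
            "options": {"driver": "qcow2", "file": file_node, "size": size},
            "job-id": "job2"}},
        {"execute": "blockdev-add", "arguments": {
            "driver": "qcow2", "node-name": fmt_node, "file": file_node}},
        {"execute": "job-dismiss", "arguments": {"id": "job1"}},
        {"execute": "job-dismiss", "arguments": {"id": "job2"}},
    ]

cmds = backup_target_cmds("/root/sn1", 2147483648, "drive_sn1", "sn1")
for c in cmds:
    print(json.dumps(c))
```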

3. Check the image sizes of "drive_data1", sn1, and sn2:
     # qemu-img info /home/qdong/data.qcow2 -U
         image: /home/qdong/data.qcow2
         file format: qcow2
         virtual size: 2 GiB (2147483648 bytes)
         disk size: 196 KiB
         cluster_size: 65536
         Format specific information:
             compat: 1.1
             lazy refcounts: false
             refcount bits: 16
             corrupt: false
    # qemu-img info /root/sn1 -U
         image: /root/sn1
         file format: qcow2
         virtual size: 2 GiB (2147483648 bytes)
         disk size: 196 KiB
         cluster_size: 65536
         Format specific information:
             compat: 1.1
             lazy refcounts: false
             refcount bits: 16
             corrupt: false
    # qemu-img info /root/sn2 -U
         image: /root/sn2
         file format: qcow2
         virtual size: 2 GiB (2147483648 bytes)
         disk size: 196 KiB
         cluster_size: 65536
         Format specific information:
             compat: 1.1
             lazy refcounts: false
             refcount bits: 16
             corrupt: false
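For scripted comparisons, `qemu-img info` can emit the same data as JSON via `--output=json`; it reports the on-disk allocation as `actual-size` and the guest-visible size as `virtual-size`. A small sketch that compares the two; the sample string below is hand-written to match the 196 KiB figure above, not captured from a real run:

```python
import json

# Hand-written sample of `qemu-img info --output=json /root/sn1`
# (196 KiB = 200704 bytes allocated out of a 2 GiB virtual disk)
sample = """{
    "virtual-size": 2147483648,
    "filename": "/root/sn1",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 200704
}"""

info = json.loads(sample)
# Allocation ratio: how much of the virtual size is actually backed by data
ratio = info["actual-size"] / info["virtual-size"]
print(f"{info['filename']}: {ratio:.4%} allocated")
```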

4. dd a big file onto the data disk in the guest:
    # mkfs.ext4 /dev/vdb
    # mount /dev/vdb /mnt
    # cd /mnt   
    # dd if=/dev/urandom of=1 bs=1M count=500

5. Check the image size of the data disk:
    # qemu-img info /home/qdong/data.qcow2 -U
        image: /home/qdong/data.qcow2
        file format: qcow2
        virtual size: 2 GiB (2147483648 bytes)
        disk size: 67.9 MiB
        cluster_size: 65536
        Format specific information:
            compat: 1.1
            lazy refcounts: false
            refcount bits: 16
            corrupt: false

6. Do a backup from "drive_data1" to sn1:
     { "execute": "blockdev-backup", "arguments": { "device": "drive_data1", "sync":"full","target":"sn1","job-id":"j1"}}

7. Check the image size of sn1:
     # qemu-img info /root/sn1 -U
         image: /root/sn1
         file format: qcow2
         virtual size: 2 GiB (2147483648 bytes)
         disk size: 2 GiB
         cluster_size: 65536
         Format specific information:
             compat: 1.1
             lazy refcounts: false
             refcount bits: 16
             corrupt: false

8. Do a backup from "drive_data1" to sn2 with "compress" set to true:
     { "execute": "blockdev-backup", "arguments": { "device": "drive_data1", "sync":"full","target":"sn2","compress":true,"job-id":"j2"}}

9. Check the image size of sn2:
     # qemu-img info /root/sn2 -U
        image: /root/sn2
        file format: qcow2
        virtual size: 2 GiB (2147483648 bytes)
        disk size: 502 MiB
        cluster_size: 65536
        Format specific information:
            compat: 1.1
            lazy refcounts: false
            refcount bits: 16
            corrupt: false

Actual results:
     After the backup from "drive_data1" to sn1, the actual size of sn1 is larger than the actual size of the data image (sn1_actual_size > data_actual_size).

Expected results:
     After the backup from "drive_data1" to sn1, the actual size of sn1 should be no larger than the actual size of the data image (sn1_actual_size <= data_actual_size).

Additional info:

Comment 1 qdong@redhat.com 2020-02-13 05:51:39 UTC
Is this bug the same as Bug 1248996 ("The Allocation size of guest qcow2 image file equals to its Capacity size on target host after migration with non-shared storage with full disk copy")?

Bug 1248996 link: https://bugzilla.redhat.com/show_bug.cgi?id=1248996

Comment 2 qdong@redhat.com 2020-02-13 09:32:14 UTC
qemu-kvm-4.2.0-0.module+el8.2.0+4714+8670762e.x86_64 hits the same issue.

Comment 3 John Snow 2020-03-16 19:24:56 UTC
(In reply to qdong from comment #0)
> Description of problem:
>      The actual size of backup image bigger than base image after dd data
> file in guest
> 
> Version-Release number of selected component (if applicable):
>      qemu-kvm version:qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904.x86_64
>      kernal version:4.18.0-176.el8.x86_64
> 
> How reproducible:
> 100%
> 
> Steps to Reproduce:

In summary, we are using:

- /home/kvm_autotest_root/images/rhel820-64-virtio.qcow2 as the OS disk (file-node: file_image1; format-node: drive_image1)
- /home/qdong/data.qcow2 as the data disk (file-node: file_data; format-node: drive_data1)
- /root/sn1 as a backup target. (file-node: drive_sn1; format-node: sn1)
- /root/sn2 as a backup target. (file-node: drive_sn2; format-node: sn2)

> 
> 3.Check image size of "drive_data1" and sn1 and sn2.

- data.qcow2, sn1, and sn2 are all qcow2 compat=1.1 images.
- all three are 2GiB qcow2 images with only 196KiB data allocated in them.

> 
> 4.dd big file in data disk in guest

You copy 500 MiB of data onto these disks.

> 5.check the image size of data disk 
>     # qemu-img info /home/qdong/data.qcow2 -U
>         image: /home/qdong/data.qcow2
>         file format: qcow2
>         virtual size: 2 GiB (2147483648 bytes)
>         disk size: 67.9 MiB

^ Has the data fully flushed yet? I expect bs=1M count=500 to copy 500 MiB of data; 67.9 MiB looks too low.

>         cluster_size: 65536
>         Format specific information:
>             compat: 1.1
>             lazy refcounts: false
>             refcount bits: 16
>             corrupt: false
> 



> 6.Do backup from "drive_data1" to sn1

blockdev-backup from drive_data1 to sn1; sync=full.

> 
> 7.check the image size of sn1
>      # qemu-img info /root/sn1 -U
>          image: /root/sn1
>          file format: qcow2
>          virtual size: 2 GiB (2147483648 bytes)
>          disk size: 2 GiB

^ Whoops, this image appears fully allocated.

>          cluster_size: 65536
>          Format specific information:
>              compat: 1.1
>              lazy refcounts: false
>              refcount bits: 16
>              corrupt: false
> 
> 8.Do backup from "drive_data1" to sn2 with "compress" true
>      { "execute": "blockdev-backup", "arguments": { "device": "drive_data1",
> "sync":"full","target":"sn2","compress":true,"job-id":"j2"}}
> 
> 9.check the image size of sn2
>      # qemu-img info /root/sn2 -U
>         image: /root/sn2
>         file format: qcow2
>         virtual size: 2 GiB (2147483648 bytes)
>         disk size: 502 MiB

^ Whoops, this is also presumably fully allocated, but compressed?

>         cluster_size: 65536
>         Format specific information:
>             compat: 1.1
>             lazy refcounts: false
>             refcount bits: 16
>             corrupt: false
> Actual results:
>      after backup from "drive_data1" to sn1,check the image size of sn1,
> sn1_actual_size > data actual_size 
> 
> Expected results:
>      after backup from "drive_data1" to sn1,check the image size of sn1,
> sn1_actual_size <= data actual_size 
> 

I think this is a new / legitimate bug with blockdev-backup not preserving sparseness. You are copying to qcow2 compat=1.1 images; I think that should work.


1. Does drive-backup preserve sparseness?
2. What about qemu-kvm based on 4.1? Is this a regression in 4.2?
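What "preserving sparseness" means can be illustrated with plain files, independent of qcow2: a file's apparent size and its allocated size can differ, and a naive byte-for-byte copy materializes the holes as real blocks, which is roughly what the misbehaving backup does here. A sketch (plain sparse files, not qcow2; assumes a filesystem with sparse-file support such as ext4 or xfs):

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "sparse.img")

# Write 64 KiB of real data, then extend the file to 2 MiB with a hole.
with open(path, "wb") as f:
    f.write(b"\x01" * 65536)
    f.truncate(2 * 1024 * 1024)

st = os.stat(path)
apparent = st.st_size            # analogous to qcow2 "virtual size"
allocated = st.st_blocks * 512   # analogous to "disk size" in qemu-img info
print(f"apparent={apparent} allocated={allocated}")

# A naive byte-for-byte copy reads the hole back as zeroes and writes
# them out as real data, losing the sparseness.
copy_path = os.path.join(d, "copy.img")
with open(path, "rb") as src, open(copy_path, "wb") as dst:
    dst.write(src.read())
copied = os.stat(copy_path).st_blocks * 512
print(f"naive copy allocated={copied}")

shutil.rmtree(d)
```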

Comment 4 aihua liang 2020-03-17 01:41:04 UTC
Test with qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae + -drive: does not hit this issue.

Test Steps:
  1. Start guest with qemu cmds:
      /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -m 14336  \
    -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
    -cpu 'EPYC',+kvm_pv_unhalt  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20200203-033416-61dmcn92,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20200203-033416-61dmcn92,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idy8YPXp \
    -chardev socket,path=/var/tmp/serial-serial0-20200203-033416-61dmcn92,server,nowait,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200203-033416-61dmcn92,path=/var/tmp/seabios-20200203-033416-61dmcn92,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20200203-033416-61dmcn92,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive id=drive_image1,aio=threads,file=/home/kvm_autotest_root/images/rhel820-64-virtio-scsi.qcow2,cache=none,if=none,format=qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie.0-root-port-3,iothread=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive if=none,id=drive_data1,aio=threads,file=/home/data.qcow2,cache=none \
    -device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie.0-root-port-6 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:6c:ca:b7:36:85,id=idz4QyVp,netdev=idNnpx5D,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idNnpx5D,vhost=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

 2. dd a 500 MiB file onto the data disk in the guest
    (guest)# dd if=/dev/urandom of=test bs=1M count=500

 3. Do full backup
     { "execute": "drive-backup", "arguments": { "device": "drive_data1", "target": "full_backup.img", "sync": "full", "format": "qcow2" } }
{"timestamp": {"seconds": 1584409051, "microseconds": 796635}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409051, "microseconds": 796715}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409051, "microseconds": 796752}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409051, "microseconds": 796788}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_data1"}}
{"return": {}}
{"timestamp": {"seconds": 1584409052, "microseconds": 184966}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409052, "microseconds": 185021}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409052, "microseconds": 185075}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_data1", "len": 2147483648, "offset": 2147483648, "speed": 0, "type": "backup"}}
{"timestamp": {"seconds": 1584409052, "microseconds": 185124}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_data1"}}
{"timestamp": {"seconds": 1584409052, "microseconds": 185158}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_data1"}}

  4. Check backup image info
# qemu-img info full_backup.img 
image: full_backup.img
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 502 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Comment 5 aihua liang 2020-03-17 01:47:45 UTC
Test with qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae + -blockdev: still hits this issue.

Comment 6 aihua liang 2020-03-17 06:37:40 UTC
Test with qemu-kvm-4.1.0-23.module+el8.1.1+5938+f5e53076.2 + -blockdev: works OK.
 
image info after backup:
 qemu-img info /root/sn1 -U
image: /root/sn1
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 512 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Comment 7 aihua liang 2020-03-17 07:15:34 UTC
Test on qemu-kvm-4.2.0-0.module+el8.2.0+4714+8670762e with -blockdev: also hits this issue. So the bug is not a regression bug.

image info after backup:
# qemu-img info /root/sn1 -U
image: /root/sn1
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 2 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Comment 8 John Snow 2020-03-17 14:02:23 UTC
"So the bug is not a regression bug" -- Am I misunderstanding? If it worked correctly in 4.1.x and incorrectly in 4.2.x, this appears to be a regression in upstream code.

It's likely due to the block job rewrite.

However, I have to admit that the fact it works OK with drive-backup in 4.2.x is surprising. This is a great starting point though, thank you for your research.

Comment 9 aihua liang 2020-09-11 09:49:05 UTC
Test on qemu-kvm-5.1.0-5.module+el8.3.0+7975+b80d25f1: does not hit this issue any more, so closing it as CURRENTRELEASE.

Test steps:
  1.Start guest with qemu cmds:
   ...
   -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=6 \
   -blockdev node-name=file_data1,driver=file,aio=threads,filename=/mnt/data.qcow2,cache.direct=on,cache.no-flush=off \
   -blockdev node-name=drive_data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_data1 \
   -device virtio-blk-pci,id=data1,drive=drive_data1,write-cache=on,bus=pcie-root-port-4,iothread=iothread1 \
  ...

 2.Create backup target sn1
   {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':2147483648},'job-id':'job1'}}
   {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
   {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':2147483648},'job-id':'job2'}}
   {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
   {'execute':'job-dismiss','arguments':{'id':'job1'}}
   {'execute':'job-dismiss','arguments':{'id':'job2'}}

 3. dd a file onto the data disk in the guest
   (guest)#mkfs.ext4 /dev/vdb
          #mount /dev/vdb /mnt
          #cd /mnt
          #dd if=/dev/urandom of=test bs=1M count=1000
          #md5sum test > sum1
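The md5sum step above is what verifies data integrity across the backup (sum1 can be re-checked against the backed-up data later). The equivalent chunked hashing can be sketched in Python; the helper `md5_of` is illustrative, not part of the test plan:

```python
import hashlib
import os
import tempfile

def md5_of(path, chunk=1 << 20):
    """Stream a file through MD5 in 1 MiB chunks (like `md5sum`)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Example: hash a small known payload
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
digest = md5_of(tmp.name)
os.unlink(tmp.name)
print(digest)
```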

 4.Check image info of data disk
   # qemu-img info /mnt/data.qcow2 -U
image: /mnt/data.qcow2
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 1.01 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

 5.Do full backup to sn1
   { "execute": "blockdev-backup", "arguments": { "device": "drive_data1", "sync":"full","target":"sn1","job-id":"j1"}}
{"timestamp": {"seconds": 1599816871, "microseconds": 976123}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1599816871, "microseconds": 976205}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1599816905, "microseconds": 171886}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1599816905, "microseconds": 171981}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1599816905, "microseconds": 172072}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 2147483648, "offset": 2147483648, "speed": 0, "type": "backup"}}

 6.Check backup image info
    # qemu-img info /root/sn1 -U
image: /root/sn1
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 0.979 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Comment 10 aihua liang 2020-09-16 05:04:29 UTC
*** Bug 1814664 has been marked as a duplicate of this bug. ***

