Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1585931

Summary: The memory device in the domain xml can't be attached to vm successfully
Product: Red Hat Enterprise Linux Advanced Virtualization
Reporter: Jing Qi <jinqi>
Component: libvirt
Assignee: Peter Krempa <pkrempa>
Status: CLOSED NOTABUG
QA Contact: Jing Qi <jinqi>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 8.0
CC: dyuan, jinqi, lmen, pkrempa, xuzhang, yalzhang
Target Milestone: rc
Target Release: 8.1
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-07-25 05:26:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description                       Flags
the domain xml                    none
The running xml                   none
The running xml after detach      none
the inactive xml after detach     none

Description Jing Qi 2018-06-05 06:59:09 UTC
Description of problem:
Cold-unplugging a memory device from a running VM reduces the currentMemory value in the live XML.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Start a vm and dumpxml
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>1036288</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  .....
   <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  ....
   <devices>
  ....
   <memory model='dimm'>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>
  </devices>


2. Prepare an XML file mem1.xml:
   <memory model='dimm'>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='dimm0'/>
</memory>

3. Run command:  
   #virsh detach-device avocado-vt-vm-numa2 mem1.xml --config
    Device detached successfully
   #virsh dumpxml avocado-vt-vm-numa2 |grep -i memory -A5
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <resources>
     <partition>/machine</partition>
  </resources>
  <os>
  --
   
      <cell id='0' cpus='0-1' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup' track='guest'>
    <catchup threshold='123' slew='120' limit='10000'/>
--
   <memory model='dimm'>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='dimm0'/>
--

Actual results:
The currentMemory is reduced to 512000 in the active xml.

Expected results:
The currentMemory should not be affected by cold unplug.

Additional info:
The inactive xml is correct.
# virsh dumpxml avocado-vt-vm-numa2 --inactive|grep -i memory -A2
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>512000</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
 <vcpu placement='static' cpuset='0-3'>4</vcpu>
 --
 <cell id='0' cpus='0-1' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
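
The mismatch between the two dumps can be checked programmatically. The sketch below is illustrative only: it uses minimal stand-in fragments containing just the memory elements and the values quoted from the dumps above, parsed with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Minimal stand-ins for the live and inactive dumps quoted above;
# only the memory-related elements are reproduced.
live_xml = """<domain>
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
</domain>"""

inactive_xml = """<domain>
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>512000</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
</domain>"""

def memory_kib(xml_text, element):
    # <memory> and <currentMemory> are direct children of <domain>
    return int(ET.fromstring(xml_text).findtext(element))

# Live XML: total memory still includes the detached DIMM,
# but currentMemory has already dropped.
print(memory_kib(live_xml, 'memory'))           # 1036288
print(memory_kib(live_xml, 'currentMemory'))    # 512000

# Inactive XML: both values reflect the detached DIMM.
print(memory_kib(inactive_xml, 'memory'))         # 512000
print(memory_kib(inactive_xml, 'currentMemory'))  # 512000
```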

Comment 2 Peter Krempa 2018-06-05 07:44:31 UTC
(In reply to Jing Qi from comment #0)
> Description of problem:
> Cold unplug memory device in a running vm, the currentMemory is reduced.
> 
> Version-Release number of selected component (if applicable):
> 
> 
> How reproducible:
> 
> 
> Steps to Reproduce:
> 1. Start a vm and dumpxml
>   <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
>   <memory unit='KiB'>1036288</memory>
>   <currentMemory unit='KiB'>1036288</currentMemory>
>   <vcpu placement='static' cpuset='0-3'>4</vcpu>
>   .....
>    <cpu>
>     <numa>
>       <cell id='0' cpus='0-1' memory='512000' unit='KiB' memAccess='shared'/>
>     </numa>
>   </cpu>
>   ....
>    <devices>
>   ....
>    <memory model='dimm'>
>       <target>
>         <size unit='KiB'>524288</size>
>         <node>0</node>
>       </target>
>       <address type='dimm' slot='0'/>

This does not look like a live XML as the <alias> element is missing.

>     </memory>
>   </devices>

Please attach the untruncated XMLs.

Comment 3 Jing Qi 2018-06-05 07:57:21 UTC
Created attachment 1447752 [details]
the domain xml

Comment 4 Peter Krempa 2018-06-05 10:14:46 UTC
(In reply to Jing Qi from comment #3)
> Created attachment 1447752 [details]
> the domain xml

That is not a live XML.

Please attach live XML, after you start up the VM and both live and config/inactive XML after you unplug the device.

Comment 5 Jing Qi 2018-06-07 02:52:04 UTC
Created attachment 1448566 [details]
The running xml

Comment 6 Jing Qi 2018-06-07 02:52:42 UTC
Created attachment 1448567 [details]
The running xml after detach

Comment 7 Jing Qi 2018-06-07 02:53:13 UTC
Created attachment 1448569 [details]
the inactive xml after detach

Comment 8 Jing Qi 2018-06-11 04:06:11 UTC
I need to correct the description of the bug.  The correct steps are as below.
1. Start a vm and dumpxml
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  .....
   <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  ....
   <devices>
  ....
   <memory model='dimm'>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <address type='dimm' slot='0'/>
    </memory>
  </devices>


2. Then run "virsh dumpxml" several times. At first, the currentMemory is "1036288":

#virsh dumpxml avocado-vt-vm-numa2 |grep -i memory -A5
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>1036288</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <resources>
     <partition>/machine</partition>
  </resources>
  <os>
  
  After running "virsh dumpxml" several more times, or after waiting a while, the currentMemory becomes "512000":
 #virsh dumpxml avocado-vt-vm-numa2 |grep -i memory -A5
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <resources>
     <partition>/machine</partition>
  </resources>
  <os>

 So the currentMemory wasn't affected by the cold unplug; it decreases on its own. This may be the expected behavior. Can you please help to confirm?
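
The numbers in the dumps above are at least internally consistent: the guest's total memory is the NUMA cell plus the plugged DIMM, while the configured currentMemory is a lower value the live figure settles back to. A quick check of the arithmetic, using the sizes quoted from the domain XML in this report:

```python
# Sizes taken from the domain XML quoted in this report (all KiB).
numa_cell = 512000           # <cell id='0' ... memory='512000'>
dimm = 524288                # <size unit='KiB'>524288</size> of the DIMM
configured_current = 512000  # <currentMemory> in the config

# Total guest memory = NUMA cell + plugged DIMM
total = numa_cell + dimm
print(total)  # 1036288, matching <memory unit='KiB'>1036288</memory>

# The live currentMemory starts at the total and later settles at the
# configured value, which matches the drop observed above.
print(configured_current)  # 512000
```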

Comment 9 Jing Qi 2018-07-19 01:54:13 UTC
Also, if the memory is hotplugged after the machine is started, the currentMemory increases.

b.xml:
<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
  <address type='dimm' slot='0'/>
</memory>

# virsh attach-device avocado-vt-vm-numa2 b.xml

Then dump the active domain XML:
 #virsh dumpxml avocado-vt-vm-numa2 |grep -i memory -A5
  <maxMemory slots='8' unit='KiB'>2560000</maxMemory>
  <memory unit='KiB'>1036288</memory>
  <currentMemory unit='KiB'>1036288</currentMemory>
  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <resources>
     <partition>/machine</partition>
  </resources>
  <os>

The behavior differs depending on whether the XML above is hot-attached to a running domain or present in the domain XML before the domain is started. Is that expected?
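
The post-attach dump is again arithmetically consistent with the sizes involved: hotplugging the DIMM from b.xml raises the live currentMemory by the DIMM size. A minimal check using the values quoted in this comment:

```python
# Values from the dumps in this comment (all KiB).
before_attach = 512000  # configured <currentMemory> before attach-device
dimm = 524288           # <size unit='KiB'>524288</size> in b.xml

after_attach = before_attach + dimm
print(after_attach)  # 1036288, matching <currentMemory> after attach-device
```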

Comment 10 Jaroslav Suchanek 2019-04-24 12:40:28 UTC
This bug is going to be addressed in the next major release.

Comment 11 Jing Qi 2019-07-25 05:26:21 UTC
After some investigation, this bug needs to be closed, since the issue in the description can't be reproduced in the current release. The behaviors questioned in comments 8 & 9 are expected.

libvirt-5.5.0-1.virtcov.el8.x86_64