Bug 1299680 - [RFE] Memory hot unplug on powerpc platform - libvirt
Summary: [RFE] Memory hot unplug on powerpc platform - libvirt
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: Andrea Bolognani
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1248279
Blocks: 1299988 RHV4.1PPC
 
Reported: 2016-01-19 00:40 UTC by Karen Noel
Modified: 2017-08-02 07:44 UTC (History)
17 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1248279
Environment:
Last Closed: 2017-08-02 07:44:59 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
IBM Linux Technology Center 146290 0 None None None 2016-10-17 09:39:19 UTC

Comment 1 Andrea Bolognani 2016-08-01 13:09:16 UTC
Moving to 7.4 as Bug 1248279 (QEMU part) has
already been deferred.

Comment 2 Xuesong Zhang 2016-10-18 14:55:10 UTC
hi, Hanns-Joachim Uhl,

I saw you changed the target release of this bug from 7.4 to 7.3.
As you can see, this is a TestOnly bug; no patches will be added to fix it.
Maybe you can update the target release of the QEMU bug it depends on, BZ 1248279, to 7.3 if needed. Thanks.

Comment 3 Hanns-Joachim Uhl 2016-10-18 15:05:45 UTC
(In reply to Xuesong Zhang from comment #2)
> hi, Hanns-Joachim Uhl,
> 
> I saw you changed the target release of this bug from 7.4 to 7.3.
> As you can see, this is a TestOnly bug; no patches will be added to fix it.
> Maybe you can update the target release of the QEMU bug it depends on, BZ
> 1248279, to 7.3 if needed. Thanks.
Hi,
... good catch ...
Because this bugzilla depends on Bug 1248279 (the QEMU part) and is a
'TestOnly' bugzilla, you are correct that it can first be tested
in the RHEL 7.4 timeframe ...
... adjusting this Red Hat bugzilla accordingly.
Thanks for your attention and support.

Comment 4 Dan Zheng 2017-04-14 01:41:39 UTC
Test with packages:
qemu-kvm-rhev-2.8.0-6.el7.ppc64le
kernel-3.10.0-628.el7.ppc64le
libvirt-3.2.0-1.el7.ppc64le


Case 1. Start with no memory device, hotplug a memory device, then hot-unplug it  ---Pass
1. Start the guest with the configuration below:
<maxMemory slots='16' unit='KiB'>2524288</maxMemory> 
<memory unit='KiB'>1048576</memory> 
<currentMemory unit='KiB'>1048576</currentMemory> 
<vcpu placement='static'>4</vcpu> 
<os> 
<type arch='ppc64le' machine='pseries-rhel7.4.0'>hvm</type> 
<boot dev='hd'/> 
</os> 
<cpu> 
<numa> 
<cell id='0' cpus='0-1' memory='524288' unit='KiB'/> 
<cell id='1' cpus='2-3' memory='524288' unit='KiB'/> 
</numa> 
</cpu> 

2. Check guest memory within guest 
# cat /proc/meminfo|grep Mem 
MemTotal: 1003328 kB 
MemFree: 579136 kB 
MemAvailable: 737344 kB 


3. Hotplug a memory device 

memdevice xml: 
<memory model='dimm'>
<target>
<size unit='KiB'>512000</size>
<node>0</node>
</target>
</memory>

# virsh attach-device dzhvm memdevice.xml 
Device attached successfully 

4. Check guest XML 
# virsh dumpxml dzhvm|grep memory -A3 
<memory unit='KiB'>1572864</memory> (1048576 + 524288 = 1572864) OK 
<currentMemory unit='KiB'>1548288</currentMemory> 
<vcpu placement='static'>4</vcpu> 
<resource> 
-- 
<cell id='0' cpus='0-1' memory='524288' unit='KiB'/> 
<cell id='1' cpus='2-3' memory='524288' unit='KiB'/> 
</numa> 
</cpu> 
<clock offset='utc'/> 
-- 
<memory model='dimm'> 
<target> 
<size unit='KiB'>524288</size> 
<node>0</node> 
</target> 
<alias name='dimm0'/> 
<address type='dimm' slot='0' base='0x40000000'/> 
</memory> 

5. Check memory in guest

# cat /proc/meminfo|grep Mem 
MemTotal: 1527616 kB (1003328 + 524288 = 1527616) OK 
MemFree: 1087168 kB 
MemAvailable: 1257280 kB 

6. Hotunplug memory device
# virsh detach-device dzhvm memdevice.xml 
Device detached successfully 

7. Check guest xml
# virsh dumpxml dzhvm|grep emory 
<maxMemory slots='16' unit='KiB'>2621440</maxMemory> 
<memory unit='KiB'>1048576</memory> 
<currentMemory unit='KiB'>1024000</currentMemory> 
<cell id='0' cpus='0-1' memory='524288' unit='KiB'/> 
<cell id='1' cpus='2-3' memory='524288' unit='KiB'/> 

8. Check memory updated in guest
# cat /proc/meminfo|grep Mem 
MemTotal: 1003328 kB 
MemFree: 571072 kB 
MemAvailable: 744704 kB
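The accounting in Case 1 can be sanity-checked arithmetically: the guest's MemTotal should grow by exactly the DIMM size on attach and return to the baseline on detach. A minimal sketch using the values recorded above (only the arithmetic is checked here; no `virsh` or guest access is assumed):

```shell
#!/bin/sh
# Values recorded in Case 1 (kB, as reported by /proc/meminfo).
baseline=1003328          # MemTotal before hotplug (step 2)
after_plug=1527616        # MemTotal after attaching the DIMM (step 5)
after_unplug=1003328      # MemTotal after detaching the DIMM (step 8)
dimm_kib=524288           # <size unit='KiB'> of the DIMM per dumpxml

# After hotplug, MemTotal must have grown by exactly the DIMM size,
# and after hot-unplug it must be back at the baseline.
if [ $((after_plug - baseline)) -eq "$dimm_kib" ] &&
   [ "$after_unplug" -eq "$baseline" ]; then
    echo "Case 1 memory deltas OK"
fi
```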

Comment 5 Dan Zheng 2017-05-09 02:53:31 UTC
Test with packages:
kernel-3.10.0-657.el7.ppc64le
libvirt-3.2.0-4.el7.ppc64le
qemu-kvm-rhev-2.9.0-2.el7.ppc64le


Case 2: Hotplug memory and hotunplug memory device with hugepage enabled
1. Start the guest without a memory device, using the XML snippet below.
  <maxMemory slots='16' unit='KiB'>25427968</maxMemory>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4048896</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='2097152' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='2097152' unit='KiB'/>
    </numa>
  </cpu>

2. Memory in guest
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3406         179        2946          12         280        2776
3. Attach a memory device
# virsh attach-device vm1 memdevice.xml 
Device attached successfully
# cat memdevice.xml
    <memory model='dimm'>
       <source>
         <pagesize unit='KiB'>16384</pagesize>
         <nodemask>0</nodemask>
       </source>
       <target>
         <size unit='KiB'>524288</size>
         <node>0</node>
       </target>
    </memory>
4. Check memory in guest and dumpxml guest
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3918         183        3435          12         298        3613

# virsh dumpxml vm1 |grep mem -A10
  <memory unit='KiB'>4718592</memory>
  <currentMemory unit='KiB'>4573184</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
    ...
      <cell id='0' cpus='0-1' memory='2097152' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='2097152' unit='KiB'/>

...
    <memory model='dimm'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>16384</pagesize>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='dimm0'/>
      <address type='dimm' slot='0' base='0x100000000'/>
    </memory>


5. Detach the memory device
# virsh detach-device vm1 detach-mem.xml 
Device detached successfully

6. Check memory in guest
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3406         180        2933          12         292        3112

7. Repeat hotplug and hotunplug many times. All iterations succeed.
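The same delta check applies to the hugepage-backed DIMM in Case 2: the `free -m` totals before and after attach should differ by the DIMM size (524288 KiB = 512 MiB). A small sketch parsing the sample `free -m` lines copied from steps 2 and 4 above (the awk field positions assume the standard `free -m` layout):

```shell
#!/bin/sh
# 'Mem:' lines copied from steps 2 and 4 above.
before="Mem:           3406         179        2946          12         280        2776"
after="Mem:           3918         183        3435          12         298        3613"

total_before=$(echo "$before" | awk '{print $2}')
total_after=$(echo "$after" | awk '{print $2}')

# A 524288 KiB DIMM is 512 MiB; the totals should differ by that amount.
[ $((total_after - total_before)) -eq 512 ] && echo "hugepage DIMM delta OK"
```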


Case 3: With hugepages enabled, hotplug 3 memory devices, reboot, hotunplug one memory device, then re-hotplug it.

1. Start a guest with hugepages enabled and attach a memory device three times.
2. Check in guest
# free -m
              total        used        free      shared  buff/cache   available
Mem:           4942         194        4467          12         281        4182

3. Reboot the guest, recheck memory in the guest, and get the same result.
4. Hotunplug the second memory device
    <memory model='dimm'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>16384</pagesize>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
      </target>
      <alias name='dimm1'/>
      <address type='dimm' slot='1' base='0x120000000'/>
    </memory>
# virsh detach-device vm1 detach-mem2.xml 
Device detached successfully

5. The guest dumpxml output and memory in the guest are correct.

6. Re-hotplug this memory device; memory in the guest and the dumpxml output are again both correct.
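Case 3's total is likewise consistent: starting from the Case 2 baseline of 3406 MiB with no DIMMs, three 512 MiB DIMMs should yield 3406 + 3 × 512 = 4942 MiB, which matches the `free -m` output in step 2. Checked as a short sketch (all values copied from the logs above):

```shell
#!/bin/sh
baseline=3406    # MiB, guest MemTotal with no DIMMs (Case 2, step 2)
dimm=512         # MiB per DIMM (524288 KiB)
observed=4942    # MiB reported by 'free -m' after three attaches (step 2)

[ $((baseline + 3 * dimm)) -eq "$observed" ] && echo "three-DIMM total OK"
```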


Case 4: With PCI <hostdev>, hotplug and hot-unplug without hugepage

Start guest with 4G guest memory
  <maxMemory slots='16' unit='KiB'>25427968</maxMemory>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4048896</currentMemory>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='2097152' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='2097152' unit='KiB'/>
    </numa>
  </cpu>
...
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0003' bus='0x0f' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>

Hotplug and hot-unplug of a memory device both succeed.

