Bug 1359309

Summary: [RFE] Reclaim active libvirt sockets when inode changes
Product: Red Hat Enterprise Linux 6
Reporter: Jon Thomas <jthomas>
Component: libvirt
Assignee: Libvirt Maintainers <libvirt-maint>
Status: CLOSED CANTFIX
QA Contact: Virtualization Bugs <virt-bugs>
Severity: urgent
Priority: urgent
Docs Contact:
Version: 6.4
CC: apalanisamy, berrange, chhu, dyuan, nashok, rbalakri, tarmstro, vcojot
Target Milestone: rc
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-25 09:54:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 4 nijin ashok 2016-07-23 00:50:19 UTC
What is the nature and description of the request?  

Libvirt instantiates a socket at, e.g., /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor.

BASE_PATH= /var/lib/libvirt/qemu/<uuid>/<socket>

When this BASE_PATH folder is moved to a new volume, the file descriptors are still present in memory:

lsof | head -n 1;lsof | grep monitor | head -n 2
COMMAND     PID      USER   FD      TYPE             DEVICE    SIZE/OFF       NODE NAME
qemu-kvm   2557      qemu    3u     unix 0xffff884035c780c0         0t0 3667135014 /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor
qemu-kvm   2557      qemu   22u     unix 0xffff881e10b73cc0         0t0 3667135325 /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor

When the BASE_PATH is moved to a new volume, the file descriptors keep referencing the original inode:

# mv /var/lib/libvirt/qemu /x/
# lsof | head -n 1;lsof | grep monitor | head -n 2
COMMAND     PID      USER   FD      TYPE             DEVICE    SIZE/OFF       NODE NAME
qemu-kvm   2557      qemu    3u     unix 0xffff884035c780c0         0t0 3667135014 /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor
qemu-kvm   2557      qemu   22u     unix 0xffff881e10b73cc0         0t0 3667135325 /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor

This sets libvirtd up to fail when the init script's 'killproc' is issued against it (service libvirtd restart):

# service libvirtd restart
...
# lsof | head -n 1;lsof | grep monitor | head -n 2
COMMAND     PID      USER   FD      TYPE             DEVICE    SIZE/OFF       NODE NAME
monitor    2638      root  cwd       DIR              253,0        4096          2 /
monitor    2638      root  rtd       DIR              253,0        4096          2 /


This will kill VMs.

Suggestion from the customer (see the sketch after the quote below):

==
Instead of doing this, why not:

- First, accept() the incoming connection.
  The accepting process now has a handle to the listening socket and the
  newly accepted socket.
- Fork, and:
    In the child:
        Close the listening socket.
        Perform the libvirt action with the accepted socket.
    In the parent:
        Close the accepted socket.
        Resume the accept loop.

This way you could handle the inode change in a sane way, and no VMs would
need to die.
==
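
A minimal C sketch of the accept-and-fork pattern described above, assuming a hypothetical standalone daemon; the socket path and the handle_client() helper are illustrative only and are not libvirt's actual monitor handling:

  /*
   * Hypothetical sketch of the accept()-and-fork() pattern suggested
   * above. Not libvirt code; path and handle_client() are illustrative.
   */
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <sys/un.h>
  #include <unistd.h>

  static void handle_client(int client_fd)
  {
      /* Placeholder for whatever work is done on the accepted socket. */
      const char msg[] = "hello\n";
      write(client_fd, msg, sizeof(msg) - 1);
  }

  int main(void)
  {
      const char *path = "/tmp/example.monitor";  /* illustrative path */
      struct sockaddr_un addr;

      signal(SIGCHLD, SIG_IGN);  /* reap exited children automatically */

      int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (listen_fd < 0) {
          perror("socket");
          return 1;
      }

      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
      unlink(path);
      if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          listen(listen_fd, 5) < 0) {
          perror("bind/listen");
          return 1;
      }

      for (;;) {
          /* First, accept() the incoming connection. */
          int client_fd = accept(listen_fd, NULL, NULL);
          if (client_fd < 0)
              continue;

          pid_t pid = fork();
          if (pid == 0) {
              /* Child: close the listening socket, use the accepted one. */
              close(listen_fd);
              handle_client(client_fd);
              close(client_fd);
              _exit(0);
          }

          /* Parent: close the accepted socket and resume the accept loop. */
          close(client_fd);
      }
  }

Note that such a loop would have to live in the process that owns the listening socket; as comment 7 explains, for the monitor sockets that process is QEMU, not libvirt.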

Comment 6 Daniel Berrangé 2016-07-25 09:28:18 UTC
(In reply to nijin ashok from comment #4)
> What is the nature and description of the request?  
> 
> Libvirt instantiates a socket at, e.g.,
> /var/lib/libvirt/qemu/70ac05b1-220d-4ec2-8c23-3b1e4771f17c.monitor.
> 
> BASE_PATH= /var/lib/libvirt/qemu/<uuid>/<socket>
> 
> When this BASE_PATH folder is moved to a new volume, the file descriptors are
> still present in memory: 

The obvious answer to this problem is to *NOT* move this folder out from under running libvirt/QEMU.

If you need extra space for save images / snapshots / core dumps, then just bind mount the sub-directories where space is needed to a larger volume.

For example: 

  # For each sub-directory that may need extra space, move its contents to
  # the larger volume and bind mount the new location over the original path.
  for dir in save snapshot dump
  do
      mkdir /x/$dir
      # Only move files if the directory is non-empty
      nfiles=`ls /var/lib/libvirt/qemu/$dir/ | wc -l`
      if test "$nfiles" != "0"
      then
          mv /var/lib/libvirt/qemu/$dir/* /x/$dir/
      fi
      mount --bind /x/$dir /var/lib/libvirt/qemu/$dir
  done

Looking at the result

  # cd /var/lib/libvirt/qemu
  # df .
  Filesystem           1K-blocks    Used Available Use% Mounted on
  /dev/mapper/VolGroup-lv_root
                        19700396 3713880  14979108  20% /

  # df save
  Filesystem     1K-blocks  Used Available Use% Mounted on
  /dev/vdb         1032088  1300    978360   1% /x


This avoids any potential for disruption to the QEMU monitor sockets and does not even require a libvirtd restart.

Comment 7 Daniel Berrangé 2016-07-25 09:54:25 UTC
WRT the actual RFE here, there is nothing libvirt can do as libvirt does not own the UNIX domain sockets in question. 

QEMU is the process that holds the socket open, listening for client connections. When libvirtd is restarted, it attempts to connect to the socket, but the kernel returns ECONNREFUSED because the socket was moved and the inode in the filesystem no longer matches the inode of the socket that QEMU is listening on. There is nothing libvirt can do to make the kernel allow the connection again at this point: we can't fix the inode of the socket in the filesystem, nor can we get QEMU to close and re-open its monitor socket listener. The only remaining option is to restart QEMU.
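
For illustration, the inode mismatch can be reproduced outside libvirt with a short, hypothetical C sketch (the path, the make_unix_socket() helper, and the omitted error checking are all illustrative, not libvirt or QEMU code): a listener binds one inode, the path is then re-bound to a fresh inode, roughly what a cross-filesystem mv does, and a subsequent connect() to the path is refused.

  /*
   * Hypothetical demonstration of the inode mismatch described above.
   * A listener binds one inode, the path is re-bound to a new inode,
   * and connect() to the path then fails with ECONNREFUSED.
   */
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  static int make_unix_socket(const char *path, struct sockaddr_un *addr)
  {
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      memset(addr, 0, sizeof(*addr));
      addr->sun_family = AF_UNIX;
      strncpy(addr->sun_path, path, sizeof(addr->sun_path) - 1);
      return fd;
  }

  int main(void)
  {
      const char *path = "/tmp/demo.monitor";  /* illustrative path */
      struct sockaddr_un addr;

      /* The "QEMU" side: bind and listen on the original inode. */
      int listener = make_unix_socket(path, &addr);
      unlink(path);
      bind(listener, (struct sockaddr *)&addr, sizeof(addr));
      listen(listener, 5);

      /*
       * Simulate the move: the path now names a different inode that
       * no process is listening on.
       */
      unlink(path);
      int stale = make_unix_socket(path, &addr);
      bind(stale, (struct sockaddr *)&addr, sizeof(addr));

      /*
       * The "libvirt" side: the connect() is refused because the inode
       * found at the path is not the one the listener is bound to.
       */
      int client = make_unix_socket(path, &addr);
      if (connect(client, (struct sockaddr *)&addr, sizeof(addr)) < 0)
          printf("connect: %s\n", strerror(errno));  /* Connection refused */

      return 0;
  }

Connections that were already established before the move keep working, which is consistent with the report: the VMs only die once libvirtd is restarted and has to connect to the monitor sockets anew.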

As noted previously, this entire situation is easily avoided by not moving directories containing the monitor sockets in the first place.