Bug 674658 - Create virtio-serial channels for various guest agents by default in kvm guests
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: python-virtinst
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: virt-mgr-maint
QA Contact: Virtualization Bugs
Depends On: 674660
Blocks: 767461 840699
Reported: 2011-02-02 15:29 EST by Perry Myers
Modified: 2015-09-27 22:04 EDT (History)
CC List: 12 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned As: 674660, 767461
Environment:
Last Closed: 2012-09-11 21:24:42 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Perry Myers 2011-02-02 15:29:39 EST
Description of problem:
For Matahari usage we need a virtio-serial device created by default (along with the channel) so that the user doesn't have to worry about configuring this manually.

Channel name should be "org.apache.qpid.matahari.0"

The initial guest OS types that should have this device/channel by default are:

RHEL5
RHEL6
Fedora 14+ (older Fedora releases support virtio-serial, but we don't anticipate porting the AMQP/virtio-serial transport plugin back further than F14)
Windows (all types)
Comment 2 Perry Myers 2011-07-13 09:13:55 EDT
In addition to creating a virtio-serial device for use by matahari by default, we also need a default virtio-serial device (or port on the same device) for virt-agent usage.

cc'ing Dor to provide specifics of what virt-agent needs from guests here.

And cc'ing crolke to verify the channel name listed in the Description.
Comment 3 Perry Myers 2011-07-13 09:50:00 EDT
@Dor, right now the virtio-serial device that is used by qemu's guest agent is simply named 'qga'. We should try to move this to something a little more informative, like org.qemu.guestagent.0. Can you work with qemu upstream to see if we can get something more sensible from a naming perspective pushed forward here?

@Cole, so what we need here is a single virtio-serial device, with multiple channels. Ideally the four channel names are:

org.apache.qpid.matahari.0
org.qemu.guestagent.0
org.libguestfs.channel.0 [1]
com.redhat.spice.0

Relaying from DanB.... We need to have metadata in libosinfo as well so that we know _which_ guests to create these virtio-serial devices and channels in, since not every guest will have libguestfs live support (only newer Fedora, etc).  But we should have an option to override libosinfo mappings to force creation of a specific channel (matahari/qga/libguestfs)

[1] - https://rwmj.wordpress.com/2011/07/06/libguestfs-live
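As a concrete sketch of what one of these channels would look like in libvirt domain XML (the controller line and the host-side socket path are illustrative assumptions, not a settled convention):

```xml
<!-- Illustrative sketch only: one virtio-serial controller plus a
     matahari channel; the host-side socket path is a placeholder -->
<controller type='virtio-serial' index='0'/>
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/GUESTNAME.matahari.sock'/>
  <target type='virtio' name='org.apache.qpid.matahari.0'/>
</channel>
```

Each additional agent would get its own <channel> element (a separate port) on the same virtio-serial controller.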
Comment 4 Perry Myers 2011-07-13 13:55:49 EDT
From upstream qemu-devel list, mroth from IBM cleared this up.  The convention for the qemu guest agent will be a channel named:

org.qemu.guest_agent.0
Comment 5 Cole Robinson 2011-07-22 20:14:24 EDT
Perry, thanks for the info. But AIUI there are two parts to virtio serial port config:

1) the guest side, which is the channel names you provided. good to go.

2) the host side, as in how apps running on the host connect to the channel. this can be a pseudo tty, unix socket, etc.

Of the four ports you listed, the only one I know how to set the host side for is spice, which has its own host-side interface called spicevmc.

what is the story with the rest of these? I see in rjones' post he sets up a unix socket at /var/lib/libvirt/qemu/$GUESTNAME.libguestfs, but that path won't work for qemu:///session guests. easiest thing from the virt-* side is to just use a pseudo tty, since qemu will autoallocate us a free one. that will eat up ptys fast though; no idea if there is a hard limit on those.

for matahari and qemu agent, what is going to be talking to these programs on the host side? tools above libvirt? libvirt itself? qemu directly?
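For reference, the spicevmc host-side interface mentioned above maps to a dedicated channel type in libvirt domain XML; a minimal sketch:

```xml
<!-- spice agent channel: the host side is handled by the built-in
     spicevmc interface rather than a socket or pty -->
<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>
```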
Comment 6 Richard W.M. Jones 2011-07-23 02:23:23 EDT
For systemd (ie. Fedora 15+, not RHEL) you might be interested
in the rules we worked out for automatically having guestfsd start
up inside the guest only when the virtio-serial channel is
detected:

http://pkgs.fedoraproject.org/gitweb/?p=libguestfs.git;a=commitdiff;h=d673388dfc3ffd1bc14ebf8dca72a6c3d212980f
http://pkgs.fedoraproject.org/gitweb/?p=libguestfs.git;a=commitdiff;h=37722b554a6cf3405315edbae786539f75ee6a21
Comment 7 Perry Myers 2011-07-25 10:13:54 EDT
> 2) the host side, as in how apps running on the host connect to the
> channel. this can be a pseudo tty, unix socket, etc.
> 
> Of the four ports you listed, the only one I know how to set the host side for
> is spice, which has its own host-side interface called spicevmc.

To get the answers to those questions, you'll need to reach out to the respective dev teams. For the vios-proxy, talk to Chuck Rolke (crolke); vios-proxy will be used for matahari.

For the qemu guest agent, talk to Dor
 
> what is the story with the rest of these? I see in rjones' post he sets up a
> unix socket at /var/lib/libvirt/qemu/$GUESTNAME.libguestfs, but that path won't
> work for qemu:///session guests. easiest thing from the virt-* side is to just
> use a pseudo tty, since qemu will autoallocate us a free one. that will eat up
> ptys fast though; no idea if there is a hard limit on those.

Check with Rich on what he wants to do here
 
> for matahari and qemu agent, what is going to be talking to these programs on
> the host side? tools above libvirt? libvirt itself? qemu directly?

For matahari, the vios-proxy will talk to the host side virtio-serial port.  And then a qpid broker will connect to the vios-proxy

For the qemu agent, I think it's qemu itself that will connect to the virtio-serial port
Comment 8 Richard W.M. Jones 2011-07-25 10:21:49 EDT
(In reply to comment #7)
> > what is the story with the rest of these? I see in rjones' post he sets up a
> > unix socket at /var/lib/libvirt/qemu/$GUESTNAME.libguestfs, but that path won't
> > work for qemu:///session guests. easiest thing from the virt-* side is to just
> > use a pseudo tty, since qemu will autoallocate us a free one. that will eat up
> > ptys fast though; no idea if there is a hard limit on those.
> 
> Check with Rich on what he wants to do here

No idea.  Does qemu:///session actually work?  It's fine to just
ignore that case as far as I'm concerned.
Comment 9 Cole Robinson 2011-07-25 20:34:25 EDT
(In reply to comment #8)
> (In reply to comment #7)
> > > what is the story with the rest of these? I see in rjones' post he sets up a
> > > unix socket at /var/lib/libvirt/qemu/$GUESTNAME.libguestfs, but that path won't
> > > work for qemu:///session guests. easiest thing from the virt-* side is to just
> > > use a pseudo tty, since qemu will autoallocate us a free one. that will eat up
> > > ptys fast though; no idea if there is a hard limit on those.
> > 
> > Check with Rich on what he wants to do here
> 
> No idea.  Does qemu:///session actually work?  It's fine to just
> ignore that case as far as I'm concerned.

I was just pointing out potential issues. qemu:///session will probably be much more important in the future since it solves lots of desktop integration issues.

The more important thing is that there is a host-side connection path which is the 'sanctioned' libguestfs location, one we are confident isn't going to need changing in the future. If this has to change later, virt-install is kinda screwed since it's not going to have any idea what version of libguestfs is going to be talking to the guest.
Comment 10 Richard W.M. Jones 2011-07-26 03:31:05 EDT
(In reply to comment #9)
> The more important thing is that there is a host side connection which is the
> 'sanctioned' libguestfs location that we are confident isn't going to need
> changing in the future. If this has to change later virt-install is kinda
> screwed since it's not going to have any idea what version of libguestfs is
> going to be talking to the guest.

libguestfs finds the location of the socket by looking at the
XML, so the socket can go anywhere.
/var/lib/libvirt/qemu/$GUESTNAME.libguestfs is just an example.
When I'm testing this, I usually just put the socket in /tmp.
Comment 11 Daniel Berrange 2011-07-26 05:51:11 EDT
In theory the socket can go anywhere, but in practice we need to be aware of the SELinux policy constraints. For this reason I strongly recommend that the sockets for any virtio serial channel be placed in the same directory as the QEMU monitor socket, hence /var/lib/libvirt/qemu/$GUESTNAME.XXXXXX for qemu:///system or $HOME/.libvirt/qemu/lib/$GUESTNAME.XXXXXX for qemu:///session. For namespacing I'd recommend the XXXXXX part match the 'name' attribute from the <channel> XML, eg '/var/lib/libvirt/qemu/$GUESTNAME.org.libguestfs.channel.0'
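A sketch of the convention described here, with the socket name derived from the channel's 'name' attribute (GUESTNAME is a placeholder):

```xml
<!-- Sketch: unix socket placed alongside the QEMU monitor socket,
     suffixed with the channel name for namespacing -->
<channel type='unix'>
  <source mode='bind'
          path='/var/lib/libvirt/qemu/GUESTNAME.org.libguestfs.channel.0'/>
  <target type='virtio' name='org.libguestfs.channel.0'/>
</channel>
```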
Comment 12 Cole Robinson 2011-07-26 10:04:04 EDT
(In reply to comment #11)
> In theory the socket can go anywhere, but in practice we need to be aware of the
> SELinux policy constraints. For this reason I strongly recommend that the
> sockets for any virtio serial channel be placed in the same directory as the
> QEMU monitor socket, hence /var/lib/libvirt/qemu/$GUESTNAME.XXXXXX  for
> qemu:///system  or  $HOME/.libvirt/qemu/lib/$GUESTNAME.XXXXXX  for
> qemu:///session. For namespacing I'd recommend the XXXXXX part match the 'name'
> attribute from the <channel> XML, eg
> '/var/lib/libvirt/qemu/$GUESTNAME.org.libguestfs.channel.0'

Sounds good, but to do this properly we are going to need this directory exposed in capabilities XML. Alternative would be to add some sort of way to have libvirt allocate a socket for us automatically. Maybe if we do <channel type='unix'> but don't specify any <source> element, libvirt fills in a socket path for us (similar to type='pty')
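The proposed auto-allocation would look roughly like this (hypothetical at the time of this comment; libvirt did not yet fill in the path itself):

```xml
<!-- Proposed: no <source> element, so libvirt would allocate a
     socket path on its own, analogous to type='pty' auto-allocation -->
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```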
Comment 13 Cole Robinson 2011-10-13 15:34:12 EDT
So how important is this for 6.2 vs. 6.3?

It's easy enough to just throw a quick patch in to enable all these ports, but doing it correctly in virtinst, which means giving users a way to disable the auto-additions, is going to be a bit more work, and it's getting late for 6.2. I'd also like to get those libvirt additions in that I mentioned in comment #12.

Also, from my understanding, vios-proxy is still being worked on and not yet packaged for RHEL, and still requires several manual steps outside of just installing the packages in the guest.

And I know danpb has done some work upstream to have libvirt talk to qemu guest agent, but that work won't be in 6.2 and without it qemu agent isn't much use with libvirt guests.

The spice channel stuff is already upstream and queued for 6.2. Sounds like we could add the libguestfs port but it's not clear to me if it's something we want to add to all guests... are we expecting people to want to run live libguestfs on all their vms, or does it make sense to keep it as something users opt into on a case by case basis?

Either way it's getting pretty late for 6.2 and it sounds like vios-proxy and qemu-ga aren't there yet for RHEL, so I'm deferring to 6.3. Please yell if that's a problem
Comment 14 Richard W.M. Jones 2011-10-13 16:31:38 EDT
For the libguestfs-live channel:

 - the functionality won't be ready until RHEL 6.3 (bug 719879)

 - we will likely disable it in RHEL 6.3 (too experimental to
   push to customers right now), maybe RHEL 6.4 material

 - in any case, we would never want it to be enabled by default,
   host administrator needs to take positive action to enable it
Comment 16 Cole Robinson 2011-12-09 18:24:03 EST
We currently have reduced capacity for virt-manager/virtinst features in RHEL. Obviously getting this into RHEL has some importance, but I don't think the situation has changed much from Comment #13. Setting conditional nack 'capacity' for now.
Comment 19 RHEL Product and Program Management 2012-07-10 03:46:27 EDT
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 20 RHEL Product and Program Management 2012-07-10 21:59:19 EDT
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
