1. Proposed title of this feature request

Virtual serial console access to OpenStack instances

2. Who is the customer behind the request?

Account name: CME Group
Customer segment: FSI
TAM/SRM customer: yes
Strategic customer: yes

3. What is the nature and description of the request?

We currently have all of our physical and virtual servers connected to either physical or virtual serial ports for console access. It seems that OpenStack nova does not support this out of the box, and we are required to use a VNC console. We are aware that nova automatically configures the hypervisor to redirect the serial port to a log file (which can be viewed via "nova console-log INSTANCE"), but this does not give us an interactive terminal. This feature request is for an interactive serial console that works with OpenStack instances (preferably with functionality similar to "nova get-vnc-console" for VNC).

4. Why does the customer need this? (List the business requirements here)

They use conserver to access all of their servers through physical or virtual serial ports. None of the service processors (iLO/DRAC/...) have IP connectivity. Currently we rely on RS-232 serial connectivity from a terminal server to the server's external serial port. The service processor is accessed by invoking its shell via a special escape sequence over the serial port, e.g. HP and Dell use <ESC>+( . conserver connects to the serial ports via ssh to the terminal server (dedicated TCP port per serial port).

5. How would the customer like to achieve this? (List the functional requirements here)

RH would set up an environment to simulate the customer's. We already use conserver in house, so this shouldn't be too hard. RH would implement functionality in OpenStack nova to allow communication via the serial port, and allow for that interaction to be interactive.

6. For each functional requirement listed in question 5, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

Basically implement and test what was described in question 5.

7. Is there already an existing RFE upstream or in Red Hat bugzilla?

Not that I am aware of.

8. Does the customer have any specific timeline dependencies?

They are in the initial stage of looking to move their existing virt/libvirt/kvm environment to OpenStack. That goal is probably a year away, but they would need/want this functionality fairly soon to test it out and validate that it meets their needs.

9. Is the sales team involved in this request and do they have any additional input?

Not really.

10. List any affected packages or components.

OpenStack nova

11. Would the customer be able to assist in testing this functionality if implemented?

Yes.
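The access pattern the customer describes (one dedicated TCP port per serial line, attached to by a console client such as conserver) can be sketched with a small stdlib-only client. Everything below is illustrative — the function names, host, and port are mine, not from this report — and it stands in for what conserver's client does, not for any nova API:

```python
import select
import socket
import sys


def read_console_banner(host, port, max_bytes=4096, timeout=5.0):
    """Connect to a per-port TCP serial socket and return whatever the
    console prints first (e.g. a getty login banner)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        chunks = []
        total = 0
        try:
            while total < max_bytes:
                data = sock.recv(max_bytes - total)
                if not data:          # remote side closed the console
                    break
                chunks.append(data)
                total += len(data)
        except socket.timeout:
            pass                      # console is idle; return what we have
        return b"".join(chunks).decode(errors="replace")


def attach_console(host, port):
    """Interactively relay bytes between the local terminal and the remote
    serial port -- roughly what `console <name>` gives you with conserver."""
    with socket.create_connection((host, port)) as sock:
        while True:
            ready, _, _ = select.select([sock, sys.stdin], [], [])
            if sock in ready:
                data = sock.recv(4096)
                if not data:          # remote side closed
                    return
                sys.stdout.write(data.decode(errors="replace"))
                sys.stdout.flush()
            if sys.stdin in ready:
                line = sys.stdin.readline()
                if not line:          # local EOF (Ctrl-D)
                    return
                sock.sendall(line.encode())


if __name__ == "__main__":
    attach_console(sys.argv[1], int(sys.argv[2]))
```

To reach a service processor through such a session, one would then type the vendor escape sequence (e.g. <ESC>+( for HP/Dell) just as over a physical RS-232 line.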
Upstream blueprint updated with proposed specification.
The specification detail is here for review: https://docs.google.com/document/d/1wqHoFGSdfy6VI5upQRpmXAqwVkBnuHzlehFnAv9Nxns/edit
The current direction of that blueprint is to expose the serial port via a browser-accessible page, similar to noVNC (it also doesn't currently appear to handle disconnects, but that's another matter). This doesn't seem to fit the use case described in this BZ, so the question is how the customer actually needs the serial ports exposed: as ports to SSH to on the compute nodes, or proxied somehow to a more central location?
Removing the blueprint link as it was deemed inapplicable to the customer's use case, at least in its existing form.
Created attachment 867464 [details]
screenshot of a conserver client (console) accessing a VM's serial console output through TCP

I tried the nova libvirt setup with conserver and it seems to work. I only hope I've done everything right. If someone has more experience with conserver, please let me know if I missed something.

I did the following things:
- set up a VM to do what nova should do in the libvirt config (bottom right)
- configured a conserver console to connect on a TCP socket (top left)
- started conserver (bottom left)
- and got the output (top right)
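For anyone trying to reproduce this, the setup in the screenshot likely comes down to two fragments. First, a libvirt serial device bound to a host TCP socket (the address and port here are placeholders, not the ones from the attachment):

```xml
<!-- illustrative: redirect the guest's serial port to a TCP listener
     on the host; 127.0.0.1:4555 is a placeholder -->
<serial type='tcp'>
  <source mode='bind' host='127.0.0.1' service='4555'/>
  <protocol type='raw'/>
  <target port='0'/>
</serial>
```

Second, a conserver.cf console entry of type "host" pointing at that socket (console name is illustrative; check the conserver.cf man page for your version's exact syntax):

```
console vm-test {
    type host;
    host 127.0.0.1;
    port 4555;
}
```

With conserver restarted, `console vm-test` should then attach to the VM's serial line.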
*** Bug 1041409 has been marked as a duplicate of this bug. ***
*** Bug 1114875 has been marked as a duplicate of this bug. ***
----- Forwarded Message ----- > From: "Amit Shah" <amit.shah> > To: "Zhang Haoyu" <zhanghy> > Cc: "qemu-devel" <qemu-devel>, "kvm" <kvm.org> > Sent: Friday, August 29, 2014 10:38:49 AM > Subject: Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virito-serial > > On (Fri) 29 Aug 2014 [15:45:30], Zhang Haoyu wrote: > > Hi, all > > > > I start a VM with virtio-serial (default ports number: 31), and found that > > virtio-blk performance degradation happened, about 25%, this problem can > > be reproduced 100%. > > without virtio-serial: > > 4k-read-random 1186 IOPS > > with virtio-serial: > > 4k-read-random 871 IOPS > > > > but if use max_ports=2 option to limit the max number of virio-serial > > ports, then the IO performance degradation is not so serious, about 5%. > > > > And, ide performance degradation does not happen with virtio-serial. > > Pretty sure it's related to MSI vectors in use. It's possible that > the virtio-serial device takes up all the avl vectors in the guests, > leaving old-style irqs for the virtio-blk device. > > If you restrict the number of vectors the virtio-serial device gets > (using the -device virtio-serial-pci,vectors= param), does that make > things better for you? > > > Amit > -- > To unsubscribe from this list: send the line "unsubscribe kvm" in > the body of a message to majordomo.org > More majordomo info at http://vger.kernel.org/majordomo-info.html >
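Amit's suggested workaround from the mail above — restricting the MSI-X vectors virtio-serial claims so virtio-blk keeps its own — would look roughly like this on the qemu command line. This is a command fragment for illustration only; the disk image, chardev, and vector count are placeholders, and only the `vectors=` knob is the one from the mail:

```
# illustrative: cap virtio-serial's MSI-X vectors so they are not
# exhausted before virtio-blk gets its own
qemu-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -device virtio-serial-pci,vectors=4 \
    -chardev socket,id=con0,host=127.0.0.1,port=4555,server,nowait \
    -device virtconsole,chardev=con0
```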
As part of QE, we need to confirm that we do not hit this performance degradation in the OpenStack implementation when attaching a serial console and using virtio-blk.
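A reproduction matching the "4k-read-random" workload quoted in the mail might look like the fio invocation below, run inside the guest against the virtio-blk disk, once with and once without the serial console attached. The target device and runtime are assumptions on my part:

```
# illustrative fio job for the 4k random-read comparison;
# /dev/vdb and the runtime are placeholders
fio --name=4k-read-random --filename=/dev/vdb --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based
```

Comparing the reported IOPS between the two runs should show whether the ~25% drop from the mail reproduces.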
I am completely confused about the final implementation of this feature in Juno. Is there any valid documentation of the current implementation?
Have a look at comment #13 (which uses conserver as the client-side entry point) and the spec below:

https://docs.google.com/document/d/1ftVIfXZgb52CwJ0enyPNiqgnh8wfUgRQWnPOU6ppncQ/edit#

I think those two together will get you what you want, although I haven't had a chance to walk through it myself yet.
On a fresh Fedora 20:

INSTALL

1. yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
2. yum install -y openstack-packstack
3. packstack --gen-answer-file=answers.txt
4. (you may have to update some configuration, e.g. the network interface)
5. packstack --answer-file=answers.txt

CONF

1. start the `nova-serialproxy` process (we probably have to create a service for it in the package)
2. enable the serial console feature in the serial_console section of nova.conf:

   [serial_console]
   enabled = true

3. restart the `openstack-nova-api` and `openstack-nova-compute` services:

   systemctl restart openstack-nova-compute
   systemctl restart openstack-nova-api

USE

1. start an instance: `nova boot --flavor 1 --image cirros test`
2. get a websocket URL: `nova get-serial-console test`
3. you now need a websocket client to connect to this URL - you can use this one for test purposes: https://gist.github.com/sahid/894c31f306bebacb2207
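The URL returned by `nova get-serial-console` in step 2 above is a ws:// URL carrying a token query parameter, which is how nova-serialproxy authorizes the session. A small stdlib-only helper to pull out the pieces any websocket client needs — the example URL in the usage note is made up, and the helper name is mine:

```python
from urllib.parse import parse_qs, urlsplit


def parse_serial_console_url(ws_url):
    """Split a `nova get-serial-console` websocket URL into the host,
    port, and auth token a client needs to open the session."""
    parts = urlsplit(ws_url)
    token = parse_qs(parts.query).get("token", [None])[0]
    return {"host": parts.hostname, "port": parts.port, "token": token}
```

For example, `parse_serial_console_url("ws://192.0.2.10:6083/?token=abc123")` yields the proxy host `192.0.2.10`, port `6083`, and token `abc123`.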
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2015-0152.html
(In reply to Vladan Popovic from comment #13)
> I tried the nova libvirt setup with conserver and it seems to work.

Hi Vladan, this setup seems promising. Can you send me the full details of how you carried out this setup? Thanks.