Bug 1126332
Summary: Unable to connect to instances via VNC, because compute nodes are using the wrong network
Product: Red Hat OpenStack
Component: rubygem-staypuft
Status: CLOSED ERRATA
Severity: high
Priority: high
Version: Foreman (RHEL 6)
Target Milestone: z1
Target Release: Installer
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ruby193-rubygem-staypuft-0.3.4-2.el6ost
Doc Type: Bug Fix
Reporter: Toni Freger <tfreger>
Assignee: Scott Seago <sseago>
QA Contact: Ofer Blaut <oblaut>
CC: aberezin, ajeain, lars, mburns, mlopes, morazi, oblaut, rbalakri, rhos-maint, sclewis, tfreger, yeylon
: 1126583 (view as bug list)
Last Closed: 2014-10-01 13:25:48 UTC
Type: Bug
Bug Blocks: 1126583
Description
Toni Freger
2014-08-04 08:17:45 UTC
Yes, nova-novncproxy is running on the controller; Staypuft is misconfiguring it. In my opinion, novnc and the dashboard should listen on the public/external network, while Staypuft/Foreman might listen only on the management network (192.168.x.x). To do so, one has to specify ServerAlias 10.35.x.x in /etc/httpd/conf.d/15-horizon_vhost.conf, add the same address to ALLOWED_HOSTS in /etc/openstack-dashboard/local_settings, and novncproxy_base_url in nova.conf on the computes should probably also contain 10.35.x.x. vncserver_listen is IMHO configured well (192.168.x.x). novnc is innocent here:

    lsof -ni | grep novn
    nova-novn 14126 nova 3u IPv4 160500 0t0 TCP *:6080 (LISTEN)

Toni, can you add the output of "iptables -S" on your compute node to this report? In at least one other instance of this problem it was actually a firewall issue rather than a nova configuration issue. Also, please attach /etc/nova/nova.conf from your controller and /etc/nova/nova.conf from your compute node. Thanks!

ohochman gave me access to this environment. There are two issues:

(a) The VNC configuration on the compute nodes has them listening on the "wrong" address. The configuration file contains:

    vncserver_listen=192.168.100.253
    vncserver_proxyclient_address=192.168.100.253

Here 192.168.100.0/24 is the address range for the tenant overlay network. These should be listening on the management network, which on these systems is 192.168.0.0/24.

(b) The iptables configuration on the compute nodes does not permit VNC access. VNC servers start at port 5900 and count upward, so you need to open ports 5900 through (5900 + the maximum number of instances you expect to be running on this server), or simply allow all traffic from the controller.

With both of these issues fixed, it is possible to connect to a VNC console through the GUI.

My take on this is that we need to split this into two bugs, one for (a) and one for (b) in comment 6. I will clone and change components accordingly.
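The two compute-node fixes from comment 6 could be applied with something like the following (a sketch only: 192.168.0.4 is a hypothetical management-network address for the compute node, and the port range assumes at most 100 instances per host; adjust both for your environment):

```shell
# Sketch -- addresses and the port cap are assumptions, not values from this bug.
# (a) Make VNC listen on the management network instead of the tenant overlay.
crudini --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.4
crudini --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.4

# (b) Open the VNC port range: 5900 + one port per expected instance (100 here).
iptables -I INPUT -p tcp --dport 5900:5999 -j ACCEPT
service iptables save

# Restart compute so the new nova.conf takes effect.
service openstack-nova-compute restart
```

Allowing all traffic from the controller's management address would also work in place of the explicit port range, as noted in comment 6.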
Moving to staypuft to track the addition of the new params needed there.

Staypuft fix is here: https://github.com/theforeman/staypuft/pull/265

*** Bug 1129970 has been marked as a duplicate of this bug. ***

Currently the link created when attempting to open a VNC console points to an internal network IP such as 192.168.0.6; this should be replaced by an external network IP to be able to open the console.

Should be fixed for A1.

Verified - ruby193-rubygem-staypuft-0.3.5-1.el6ost.noarch. I tested with a non-HA setup, and my public API was on the same network as the tenant network (which is not the PXE/Provision one). I was able to access Horizon and use the VM consoles (which are on compute hosts).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1350.html
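The console-link issue noted above (the URL pointing at an internal 192.168.0.x address) corresponds to novncproxy_base_url in nova.conf on each compute node. A sketch of the fix, where 10.35.x.x stands in for the controller's real external address (the placeholder is kept from the earlier comments, not a concrete value):

```shell
# Sketch -- substitute the controller's actual public/external address.
# The browser connects to this URL, so it must be reachable from outside,
# not the internal management network.
crudini --set /etc/nova/nova.conf DEFAULT novncproxy_base_url \
    http://10.35.x.x:6080/vnc_auto.html
service openstack-nova-compute restart
```

Port 6080 matches the nova-novncproxy listener shown in the lsof output in the description.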